Advances in Intelligent Systems and Computing 1303
Mohammed Atiquzzaman Neil Yen Zheng Xu Editors
Big Data Analytics for Cyber-Physical System in Smart City BDCPS 2020, 28–29 December 2020, Shanghai, China
Advances in Intelligent Systems and Computing Volume 1303
Series Editor
Janusz Kacprzyk, Systems Research Institute, Polish Academy of Sciences, Warsaw, Poland

Advisory Editors
Nikhil R. Pal, Indian Statistical Institute, Kolkata, India
Rafael Bello Perez, Faculty of Mathematics, Physics and Computing, Universidad Central de Las Villas, Santa Clara, Cuba
Emilio S. Corchado, University of Salamanca, Salamanca, Spain
Hani Hagras, School of Computer Science and Electronic Engineering, University of Essex, Colchester, UK
László T. Kóczy, Department of Automation, Széchenyi István University, Gyor, Hungary
Vladik Kreinovich, Department of Computer Science, University of Texas at El Paso, El Paso, TX, USA
Chin-Teng Lin, Department of Electrical Engineering, National Chiao Tung University, Hsinchu, Taiwan
Jie Lu, Faculty of Engineering and Information Technology, University of Technology Sydney, Sydney, NSW, Australia
Patricia Melin, Graduate Program of Computer Science, Tijuana Institute of Technology, Tijuana, Mexico
Nadia Nedjah, Department of Electronics Engineering, University of Rio de Janeiro, Rio de Janeiro, Brazil
Ngoc Thanh Nguyen, Faculty of Computer Science and Management, Wrocław University of Technology, Wrocław, Poland
Jun Wang, Department of Mechanical and Automation Engineering, The Chinese University of Hong Kong, Shatin, Hong Kong
The series “Advances in Intelligent Systems and Computing” contains publications on theory, applications, and design methods of Intelligent Systems and Intelligent Computing. Virtually all disciplines such as engineering, natural sciences, computer and information science, ICT, economics, business, e-commerce, environment, healthcare, and life science are covered. The list of topics spans all the areas of modern intelligent systems and computing, such as: computational intelligence; soft computing including neural networks, fuzzy systems, evolutionary computing and the fusion of these paradigms; social intelligence; ambient intelligence; computational neuroscience; artificial life; virtual worlds and society; cognitive science and systems; perception and vision; DNA and immune-based systems; self-organizing and adaptive systems; e-learning and teaching; human-centered and human-centric computing; recommender systems; intelligent control; robotics and mechatronics including human-machine teaming; knowledge-based paradigms; learning paradigms; machine ethics; intelligent data analysis; knowledge management; intelligent agents; intelligent decision making and support; intelligent network security; trust management; interactive entertainment; and Web intelligence and multimedia.

The publications within “Advances in Intelligent Systems and Computing” are primarily proceedings of important conferences, symposia, and congresses. They cover significant recent developments in the field, both of a foundational and applicable character. An important characteristic feature of the series is the short publication time and worldwide distribution, which permits a rapid and broad dissemination of research results.

Indexed by SCOPUS, DBLP, EI Compendex, INSPEC, WTI Frankfurt eG, zbMATH, Japanese Science and Technology Agency (JST), and SCImago. All books published in the series are submitted for consideration in Web of Science.
More information about this series at http://www.springer.com/series/11156
Mohammed Atiquzzaman • Neil Yen • Zheng Xu
Editors
Big Data Analytics for Cyber-Physical System in Smart City BDCPS 2020, 28–29 December 2020, Shanghai, China
Editors Mohammed Atiquzzaman School of Computer Science University of Oklahoma Norman, OK, USA
Neil Yen University of Aizu Fukushima, Japan
Zheng Xu Shanghai University of Medicine and Health Sciences Shanghai, China
ISSN 2194-5357  ISSN 2194-5365 (electronic)
Advances in Intelligent Systems and Computing
ISBN 978-981-33-4573-7  ISBN 978-981-33-4572-0 (eBook)
https://doi.org/10.1007/978-981-33-4572-0

© The Editor(s) (if applicable) and The Author(s), under exclusive license to Springer Nature Singapore Pte Ltd. 2021

This work is subject to copyright. All rights are solely and exclusively licensed by the Publisher, whether the whole or part of the material is concerned, specifically the rights of translation, reprinting, reuse of illustrations, recitation, broadcasting, reproduction on microfilms or in any other physical way, and transmission or information storage and retrieval, electronic adaptation, computer software, or by similar or dissimilar methodology now known or hereafter developed.

The use of general descriptive names, registered names, trademarks, service marks, etc. in this publication does not imply, even in the absence of a specific statement, that such names are exempt from the relevant protective laws and regulations and therefore free for general use.

The publisher, the authors and the editors are safe to assume that the advice and information in this book are believed to be true and accurate at the date of publication. Neither the publisher nor the authors or the editors give a warranty, expressed or implied, with respect to the material contained herein or for any errors or omissions that may have been made. The publisher remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.

This Springer imprint is published by the registered company Springer Nature Singapore Pte Ltd. The registered company address is: 152 Beach Road, #21-01/04 Gateway East, Singapore 189721, Singapore
Foreword
With the rapid development of big data and of today's popular information technologies, two problems stand out: how to use systems efficiently to generate the many different kinds of new network intelligence, and how to collect urban information dynamically. In this context, the Internet of Things (IoT) and powerful computers can simulate urban operations while operating under reasonable safety regulations. However, achieving sustainable development for a new generation of cities still requires major breakthroughs on a series of practical problems facing cities.

A smart city involves the wide use of information technology for multidimensional aggregation, and the development of smart cities is a new concept. Using IoT technology together with the Internet, other networks, and other advanced technologies, cities of all types will deploy intelligent sensors to create object-linked information integration. Then, by applying intelligent analysis to integrate the collected information with the Internet and other networks, the system can provide analyses that meet the demand for intelligent communication and decision support. This is the way smart cities will think.

A Cyber-Physical System (CPS) is a multidimensional and complex system that comprehensively integrates computation, networking, and the physical environment. Through the combination of computing, communication, and control technologies, it realizes a close integration of the information world and the physical world. The IoT is not only closely related to people's lives and social development but also has wide applications in military affairs, including aerospace, military reconnaissance, intelligent grid systems, intelligent transportation, intelligent medical care, environmental monitoring, and industrial control. Intelligent medical systems, as a typical IoT application, will use medical equipment as network nodes to provide real-time, safe, and reliable medical services in wired or wireless ways.
In intelligent transportation systems, roads, bridges, intersections, traffic signals, and other key infrastructure will be monitored in real time. The vast amount of information is analyzed, published, and computed by the system so that road vehicles can share road information in real time. Road-management personnel can observe and monitor the real-time situation of key road sections in the system and even publish information to guide vehicles so as to improve the
existing urban traffic conditions. The Internet of Things, which has already been widely used in industry, is a straightforward application: through network access, it can realize object identification, positioning, and monitoring. BDCPS 2020, held on December 28, 2020, in Shanghai, China, is dedicated to addressing the challenges in the area of CPS, thereby presenting a consolidated view to interested researchers in the related fields. The conference sought significant contributions on the theoretical and practical aspects of CPS. Each paper was reviewed by at least two independent experts. The conference would not have been a reality without the contributions of the authors; we sincerely thank all of them for their valuable contributions. We would like to express our appreciation to all members of the Program Committee for their valuable efforts in the review process, which helped us guarantee the highest quality of the selected papers. We would like to thank our distinguished keynote speaker, Professor Mohammed Atiquzzaman, University of Oklahoma, USA. We would also like to acknowledge the General Chairs, Publication Chairs, Organizing Chairs, Program Committee members, and all volunteers. Our special thanks go to the editors of the Springer book series “Advances in Intelligent Systems and Computing”, NS. Pandian, Dr. Ramesh Nath Premnath, and Gowrishankar Ayyasamy, for their assistance throughout the publication process.
Organization
General Chairs
Shaorong Sun, University of Shanghai for Science and Technology, China

Program Committee Chairs
Mohammed Atiquzzaman, University of Oklahoma, USA
Zheng Xu, Shanghai University, China
Neil Yen, University of Aizu, Japan

Publication Chairs
Deepak Kumar Jain, Chongqing University of Posts and Telecommunications, China
Ranran Liu, The University of Manchester, UK
Xinzhi Wang, Shanghai University, China

Publicity Chairs
Junyu Xuan, University of Technology Sydney, Australia
Vijayan Sugumaran, Oakland University, USA
Yu-Wei Chan, Providence University, Taiwan, China

Local Organizing Chairs
Jinghua Zhao, University of Shanghai for Science and Technology, China
Yan Sun, Shanghai University, China
Program Committee Members
William Bradley Glisson, University of South Alabama, USA
George Grispos, University of Limerick, Ireland
Abdullah Azfar, KPMG Sydney, Australia
Aniello Castiglione, Università di Salerno, Italy
Wei Wang, The University of Texas at San Antonio, USA
Neil Yen, University of Aizu, Japan
Meng Yu, The University of Texas at San Antonio, USA
Shunxiang Zhang, Anhui Univ. of Sci. & Tech., China
Guangli Zhu, Anhui Univ. of Sci. & Tech., China
Tao Liao, Anhui Univ. of Sci. & Tech., China
Xiaobo Yin, Anhui Univ. of Sci. & Tech., China
Xiangfeng Luo, Shanghai Univ., China
Xiao Wei, Shanghai Univ., China
Huan Du, Shanghai Univ., China
Zhiguo Yan, Fudan University, China
Rick Church, UC Santa Barbara, USA
Tom Cova, University of Utah, USA
Susan Cutter, University of South Carolina, USA
Zhiming Ding, Beijing University of Technology, China
Yong Ge, University of North Carolina at Charlotte, USA
T. V. Geetha, Anna University, India
Danhuai Guo, Computer Network Information Center, Chinese Academy of Sciences, China
Jianping Fang, University of North Carolina at Charlotte, USA
Jianhui Li, Computer Network Information Center, Chinese Academy of Sciences, China
Yi Liu, Tsinghua University, China
Kuien Liu, Pivotal Inc, USA
Feng Lu, Institute of Geographic Science and Natural Resources Research, Chinese Academy of Sciences, China
Ricardo J. Soares Magalhaes, University of Queensland, Australia
D. Manjula, Anna University, India
Alan Murray, Drexel University, USA
S. Murugan, Sathyabama Institute of Science and Technology, India
Yasuhide Okuyama, University of Kitakyushu, Japan
S. Padmavathi, Amrita University, India
Latha Parameswaran, Amrita University, India
S. Suresh, SRM University, India
Wei Xu, Renmin University of China, China
Chaowei Phil Yang, George Mason University, USA
Enwu Yin, China CDC, USA
Hengshu Zhu, Baidu Inc., China
Morshed Chowdhury, Deakin University, Australia
Min Hu, Shanghai University, China
Gang Luo, Shanghai University, China
Juan Chen, Shanghai University, China
Qigang Liu, Shanghai University, China
2020 International Conference on Big Data Analytics for Cyber-Physical System in Smart City (BDCPS 2020)
Conference Program
Dec 28, 2020, Shanghai, China

Due to the COVID-19 outbreak, the BDCPS 2020 conference will be held online via Tencent Meeting (https://meeting.tencent.com/).
Greeting Message

With the rapid development of big data and of today's popular information technologies, two problems stand out: how to use systems efficiently to generate the many different kinds of new network intelligence, and how to collect urban information dynamically. In this context, the Internet of Things (IoT) and powerful computers can simulate urban operations while operating under reasonable safety regulations. However, achieving sustainable development for a new generation of cities still requires major breakthroughs on a series of practical problems facing cities.

A smart city involves the wide use of information technology for multidimensional aggregation, and the development of smart cities is a new concept. Using IoT technology together with the Internet, other networks, and other advanced technologies, cities of all types will deploy intelligent sensors to create object-linked information integration. Then, by applying intelligent analysis to integrate the collected information with the Internet and other networks, the system can provide analyses that meet the demand for intelligent communication and decision support. This is the way smart cities will think.

A Cyber-Physical System (CPS) is a multidimensional and complex system that comprehensively integrates computation, networking, and the physical environment. Through the combination of computing, communication, and control technologies, it realizes a close integration of the information world and the physical world. The IoT is not only closely related to people's lives and social development but also has wide applications in military affairs, including aerospace, military
reconnaissance, intelligent grid systems, intelligent transportation, intelligent medical care, environmental monitoring, and industrial control. Intelligent medical systems, as a typical IoT application, will use medical equipment as network nodes to provide real-time, safe, and reliable medical services in wired or wireless ways. In intelligent transportation systems, roads, bridges, intersections, traffic signals, and other key infrastructure will be monitored in real time. The vast amount of information is analyzed, published, and computed by the system so that road vehicles can share road information in real time. Road-management personnel can observe and monitor the real-time situation of key road sections in the system and even publish information to guide vehicles, thereby improving existing urban traffic conditions. The Internet of Things, which has already been widely used in industry, is a straightforward application: through network access, it can realize object identification, positioning, and monitoring. BDCPS 2020, held on December 28, 2020, in Shanghai, China, is dedicated to addressing the challenges in the area of CPS, thereby presenting a consolidated view to interested researchers in the related fields. The conference sought significant contributions on the theoretical and practical aspects of CPS. Each paper was reviewed by at least two independent experts. The conference would not have been a reality without the contributions of the authors; we sincerely thank all of them for their valuable contributions. We would like to express our appreciation to all members of the Program Committee for their valuable efforts in the review process, which helped us guarantee the highest quality of the selected papers. We would like to thank our distinguished keynote speaker, Professor Mohammed Atiquzzaman, University of Oklahoma, USA.
We would also like to acknowledge the General Chairs, Publication Chairs, Organizing Chairs, Program Committee members, and all volunteers. Our special thanks go to the editors of the Springer book series “Advances in Intelligent Systems and Computing”, NS. Pandian, Dr. Ramesh Nath Premnath, and Gowrishankar Ayyasamy, for their assistance throughout the publication process.
Conference Program
Monday, Dec. 28, 2020, Tencent Meeting

9:50–10:00   Opening Ceremony
10:00–10:40  Keynote: Mohammed Atiquzzaman
10:40–11:00  Best Paper Awards
14:00–18:00  Sessions 1–7 and short-paper posters

Session Chairs: Shaorong Sun, Junyu Xuan, Yan Sun, Ranran Liu, Xinzhi Wang, Guangli Zhu, Zhiguo Yan, Huan Du
BDCPS 2020 Keynotes
Mobility Management and Security Issues for Networks in Motion Mohammed Atiquzzaman School of Computer Science, University of Oklahoma, USA
Abstract. Previous work on mobility management in data networks has mainly dealt with solutions for the mobility of individual hosts, and various network-layer and transport-layer solutions have been developed. Recently, however, there has been strong interest in finding solutions for networks in motion, such as networks in an aircraft, train, or ship. As they move, rather than handing off individual hosts on such a network, it is more efficient to hand over the networks between access points. This makes the handoff transparent to the hosts and reduces control traffic in the resource-challenged wireless networks. The talk provides an overview of the network-layer-based solution being developed by the Internet Engineering Task Force and compares it with the end-to-end-based solution (SINEMO) being developed at the University of Oklahoma in conjunction with the National Aeronautics and Space Administration for networks in motion. Security issues and solutions for mobility management schemes will also be described. The application of networks in motion will be illustrated for both terrestrial and space environments.
Mohammed Atiquzzaman (Senior Member, IEEE) obtained his M.S. and Ph.D. in Electrical Engineering and Electronics from the University of Manchester (UK) in 1984 and 1987, respectively. He joined as an assistant professor in 1987 and was later promoted to senior lecturer and associate professor in 1995 and 1997, respectively. Since 2003, he has been a professor in the School of Computer Science at the University of Oklahoma.

Dr. Atiquzzaman is the editor-in-chief of the Journal of Network and Computer Applications, co-editor-in-chief of the Computer Communications journal, and serves on the editorial boards of IEEE Communications Magazine, International Journal on Wireless and Optical Communications, Real-Time Imaging, Journal of Communication Systems, Communication Networks and Distributed Systems, and Journal of Sensor Networks. He co-chaired the IEEE High-Performance Switching and Routing Symposium (2003) and the SPIE Quality of Service over Next Generation Data Networks conferences (2001, 2002, 2003). He was the panel co-chair of INFOCOM’05, has served on the program committees of many conferences, such as INFOCOM, Globecom, ICCCN, and Local Computer Networks, and serves on review panels at the National Science Foundation.

He received the NASA Group Achievement Award for the “outstanding work to further NASA Glenn Research Center’s effort in the area of Advanced Communications/Air Traffic Management’s Fiber Optic Signal Distribution for Aeronautical Communications” project. He is the co-author of the book “Performance of TCP/IP over ATM Networks” and has over 150 refereed publications, most of which can be accessed at www.cs.ou.edu/~atiq.

His current research interests are in the areas of transport protocols, wireless and mobile networks, ad hoc networks, satellite networks, quality of service, and optical communications. His research has been funded by the National Science Foundation (NSF), the National Aeronautics and Space Administration (NASA), and the U.S. Air Force.
Oral Presentation Instruction
1. Timing: a maximum of 10 minutes in total, including speaking time and discussion. Please make sure your presentation is well timed, and keep in mind that the program is full and that the speaker after you would like their allocated time available to them.
2. You can use a CD or USB flash drive (memory stick); make sure you have scanned it for viruses on your own computer. Each speaker is required to meet his/her session chair in the corresponding session room 10 minutes before the session starts and copy the slide file (PPT or PDF) to the computer.
3. It is suggested that you email a copy of your presentation to your personal inbox as a backup. If for some reason the files cannot be accessed from your flash drive, you will be able to download them to the computer from your email.
4. Please note that each session room will be equipped with an LCD projector, screen, pointing device, microphone, and a laptop with general presentation software such as Microsoft PowerPoint and Adobe Reader. Please make sure that your files are compatible and readable with our operating system by using commonly used fonts and symbols. If you plan to use your own computer, please try the connection and make sure it works before your presentation.
5. Movies: if your PowerPoint files contain movies, please make sure that they are well formatted and linked to the main files.
Short Paper Presentation Instruction

1. The maximum poster size is 0.8 meters wide by 1 meter high.
2. Posters should be condensed and attractive, with characters large enough to be visible from 1 meter away.
3. Please note that during the short paper session, authors should stay by their posters to explain and discuss their papers with visiting delegates.
Registration

Since the conference is held online, no registration fee is required.
Contents
Intelligent Garbage Classification System Based on Open MV . . . 1
Hanxu Ma, Zhongfu Liu, and Yujia Zhai
Deformation Monitoring Method of Railway Buildings Based on 3D Laser Scanning Technology . . . 9
Yang Yu and Fengqin Zhang
Research on Real-Time Compression and Transmission Method of Motion Video Data Under Internet of Things . . . 17
Liang Hua, Jiayu Wang, and Xiao Hu
Design of Digital-Analog Control Algorithm for Flash Smelting Metallurgy . . . 25
Feng Guo, Qin Mei, and Da Li
Simulation and Prediction of 3E System in Shandong Province Based on System Dynamics . . . 31
Yanan Wang, Xinyu Liu, Wene Chang, and Miaojing Ying
Application of UAV 3D HD Photographic Model in High Slope (Highway) . . . 38
Kaiqiang Zhang, Zhiguang Qin, Guocai Zhang, and Ying Sun
Design of Foot Cam Vibration Damping System for Forest Walking Robot . . . 45
Jing Yin
Semantic Segmentation of Open Pit Mining Area Based on Remote Sensing Shallow Features and Deep Learning . . . 52
Hongbin Xie, Yongzhuo Pan, Jinhua Luan, Xue Yang, and Yawen Xi
Energy Storage Technology Development Under the Demand-Side Response: Taking the Charging Pile Energy Storage System as a Case Study . . . 60
Lan Liu, Molin Huo, Lei Guo, Zhe Zhang, and Yanbo Liu
Oracle’s Application in Finance . . . 65
Lin Bai
The Co-construction and Sharing Mechanism of University Library Resources Based on the Hyper-network Perspective . . . 71
Xinyu Wu
The Development of Yoga Industry in China Under the Background of Big Data . . . 78
Yu Fan
Inheritance and Development of Traditional Patterns in Computer Aided Design Environment . . . 85
Yue Wang
Sybil Attack Detection Algorithm for Internet of Vehicles Security . . . 92
Rongxia Wang
The Practical Application of Artificial Intelligence Technology in Electronic Communication . . . 99
Aixia Hou
Application Research of 3D Reconstruction of Auxiliary Medical Image Based on Computer . . . 106
Chao Wang and Xuejiang Ran
Service Quality Research on Media Gateway . . . 113
Xiaozhu Wang and Xiaoxue Lu
A Review of Research on Module Division Methods Based on Different Perspectives . . . 120
Yanqiu Xiao, Qiongpei Xia, Guangzhen Cui, Xianchao Yang, and Zhen Zhang
Application of 3D Animation Technology in the Making of MOOC . . . 127
Chulei Zhang
Foreseeing the Subversive Influence of Intelligent Simulation Technology for Battle Example Teaching . . . 134
Nan Wang and Miao Shen
Construction of Smart Campus Under the Background of Big Data . . . 142
Kui Su, Shi Yan, and Xiao-li Wang
Information Platform for Classroom Teaching Quality Evaluation and Monitoring Based on Artificial Intelligence Technology . . . 149
Guoyong Liu
University Professional Talents Training on Student Employment and School Quality in the Big Data Era . . . 156
Zhaojun Pang
The Security Early Warning System of College Students’ Network Ideology in the Big Data Era . . . 164
Wei Han
Investigation and Research on the Potential of Resident User Demand Response Based on Big Data . . . 172
Xiangxiang Liu, Jie Lu, Qin Yan, Zhifu Fan, and Zhiqiang Hu
Design and Implementation of Intelligent Control Program for Six Axis Joint Robot . . . 180
Shuo Ye and Lingzhen Sun
Predictive Modeling of Academic Performance of Online Learners Based on Data Mining . . . 187
Zhi Cheng
“IoT Plus” and Intelligent Sports System Under the Background of Artificial Intelligence – Take Swimming as an Example . . . 195
Shuai Liu
Application Research of Artificial Intelligence in Swimming . . . 202
Shuai Liu
Image Denoising by Wavelet Transform Based on New Threshold . . . 208
Hua Zhu and Xiaomei Wang
Effects of Online Product Review Characteristics on Information Adoption . . . 214
Lianzhuang Qu, Yao Zhang, and Fengjun Sun
The Empirical Analysis on Role of Smart City Development in Promoting Social and Economic Growth . . . 221
Wangsong Xie
In-Situ Merge Sort Using Hand-Shaking Algorithm . . . 228
Jian Zhang and Rui Jin
An Environmental Data Monitoring Technology Based on Internet of Things . . . 234
Yan Wang and Ke Song
A Path Planning Method for Environmental Robot Based on Intelligent Algorithm . . . 240
Ke Song
Design of Fractal Art Design Image Based on One-Dimensional MFDMA Algorithm . . . 247
Chunhu Shi
Application of Data Mining Technology in Geological Exploration Engineering . . . 252
Anping Zhang
A Fast Filtering Method of Invalid Information in XML File . . . 259
Xijun Lin, Shang Gao, Zheheng Liang, Liangliang Tang, Yanwei Shang, Zhipeng Feng, and Gongfeng Zhu
Evaluation System of Market Power Alert Level Based on SCP Algorithm . . . 265
Jinfeng Wang and Shuangmei Guo
Application Research of Biochemistry in Life Science Based on Artificial Intelligence . . . 272
Shuna Ge and Yunrong Zhang
Non Time Domain Fault Detection Method for Distribution Network . . . 279
Jianwei Cao, Ming Tang, Zhihua Huang, Ying Liu, Ying Wang, Tao Huang, and Yanfang Zhou
Fault Transient Signal Analysis of UHV Transmission Line Based on Wavelet Transform and Prony Algorithm . . . 285
Mingjiu Pan, Chenlin Gu, Zhifang Yu, Jun Shan, Bo Liu, Hanqing Wu, and Di Zheng
Korean-Chinese Bilingual Sentence Alignment Method Based on Character Length . . . 292
Qi Wang, Yahui Zhao, and Rongyi Cui
Workpiece Quality Prediction Research Based on Multi-source Heterogeneous Industrial Big Data . . . 299
Huiteng Cao
Application of the Infrared Sensitive Technology in Hot Straightening Machines Based on Data Analysis . . . 305
Yacan Sun
Parametric Modeling and Finite Element Analysis on the Rocker Arm of Radial Drilling Machines . . . 310
Chengmei Yan
Application and Research of Artificial Intelligence in Digital Library . . . 318
Jie Kong
Application of Photoshop Graphics and Image Technology in VR Animation . . . 326
Gang Liu
Wisdom Course Teaching Under the Background of the Big Data Era . . . 334
Yuying Wang
Computer Simulation Analysis of Mechanical Characteristics of Lifting Pipe Considering Internal and External Flow . . . 340
Qinghui Song, Jilei Xu, Haiyan Jiang, Qingjun Song, and Linjing Xiao
Innovation of Tourism Education Courses in Higher Vocational Colleges Under the Background of Big Data . . . 347
Chunling Tang
A Study on the Application and Progress of Computer Technology in TCM Translation . . . 355
Xinyu Duan and Yongmei Peng
A Click-Through Rate Prediction Algorithm Based on Real-Time Advertising Data Logs . . . 361
Chen Gong
Employment Service System Based on Hybrid Recommendation Algorithm . . . 368
Zhenqi Dong, Chunxia Leng, and Hong Zheng
The Application of Teaching Apps in Bilingual Accounting Courses . . . 376
Puying Li
The Application Practice of BIM Technology in Prefabricated Building Design . . . 383
Lili Peng, Yundi Peng, and Juan Pan
Application of Big Data Analysis in Choosing Optimal Cross-plots Related to Dibenzofuran Compounds . . . 389
Bingkun Meng, Lu Yang, and Chunmiao Ma
Application of Modern Computer Information Technology in “Uyghur Language” Teaching . . . 395
Parezhati Maisuti
Research and Design of Mobile Assistant Class Management System . . . 402
Juan He and Fuchen Leng
A Brief Analysis of Wearable Electronic Medical Devices . . . 409
Yuxin Du
Review and Prospect of Text Analysis Based on Deep Learning and Its Application in Macroeconomic Forecasting . . . 416
Yao Chen
The Research on the Construction of the “Online + Offline” Hybrid Teaching Mode of College English Under “Internet +” Background . . . 424
Ruili Chen
The Pronouns of the Odyssey and the Statistical Terms of Shan Hai Jing in Computer Search Technique . . . . . . . . . . . . . . . . . . . . . . . . . . . 430 Dinghui Wang, Jun Luo, and Zhidan Zhou The Data Limitations of Artificial Intelligence Algorithms and the Political Ethics Problems Caused by it . . . . . . . . . . . . . . . . . . . 438 Kefei Zhang The Development Strategy Research of Higher Education Management from the Perspective of “Internet + ” . . . . . . . . . . . . . . . . 444 Ziyu Zhou Study on Data Transmission of Low Voltage Electrical Equipment Based on MQTT Protocol . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 451 Xueyu Han Residual Waste Quality Detection Method Based on Gaussian-YOLOv3 . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 459 Zhigang Zhang, Xiang Zhao, Ou Zhang, Guangjie Fu, Yu Xie, and Caixi Liu An Empirical Study of English Teaching Model in Higher Vocational Colleges Based on Data Analysis . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 470 Wenbo Zhao Design of Power System of Five-Axis CNC Machine Tool . . . . . . . . . . . 476 Fuman Liu Design of Electrical Control System for Portable Automatic Page Turner . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 482 Xiurong Zhu and Xinyue Wang Development Trend and Content Status of Physical Education Undergraduate in Normal Universities Based on Big Data . . . . . . . . . . . 488 Hui Du, Ji Zhu, and Ke Sun Image Enhancement of Face Recognition Based on GAN . . . . . . . . . . . 494 Zhiliang Zhang and Tianfang Dong Design of Western Yugur Language Speech Database . . . . . . . . . . . . . . 501 Shiliang Lyu, Fan Bai, and Lin Na Design of Intelligent Three-Dimensional Bicycle Garage Based on Internet of Things . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 
509 Chunyu Mao, Chao Zhang, Yangang Wang, and Yingxin Yu Sealing Detection Technology of Cotton Ball of Edible Fungus Bag . . . . 517 Xiaodong Yang, Chunyu Mao, Zhuojuan Yang, and Hao Liu
Reconstruction and Reproduction: The Construction of Historical Literature Model Under Data Intelligence . . . . . . . . . . . . . . . . . . . . . . 525 Wenping Li Digital Turning of Logic and Practical Paradigm: The Establishment of Big Data Model in Anthropological Field . . . . . . . . . . . . . . . . . . . . . 532 Zhuoma Sangjin and Liang Yan Simulation-Testing and Verification of the Economical Operation of Power Distribution Network Coordinated with Flexible Switch . . . . . 538 Zhenning Fan, Qiang Su, Xinmin Zhang, Changwei Zhao, and Ke Xu Research on the Development Strategy of Intra-city Clothing Distribution Based on O2O Mode . . . . . . . . . . . . . . . . . . . . . . . . . . . . 547 Tong Zhang, Na Wei, and Aizhen Li Construction of Budget Evaluation Index System for Application-Oriented Undergraduate Universities Based on Artificial Intelligence . . . 555 Dahua Wang and Guohua Song
Smart City Evaluation Index System: Based on AHP Method . . . . . . . . 563 Fang Du, Linghua Zhang, and Fei Du Feasibility Analysis of Electric Vehicle Promotion Based on Evolutionary Game . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 570 Shiqi Huang The Intelligent Fault Diagnosis Method Based on Fuzzy Neural Network . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 578 Kun Han, Tongfei Shang, Jingwei Yang, and Yuan Yu The Demand Forecasting Method Based on Least Square Support Vector Machine . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 584 Jing Liu, Tongfei Shang, Jingwei Yang, and Jie Wu Modernity of Ancient Literature Based on Big Data . . . . . . . . . . . . . . . 590 Luchen Zhai Summary of Data Races Solution Algorithms for Multithreaded Programs . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 597 Chun Fang Implementation of Stomatological Hospital Information . . . . . . . . . . . . 602 Ying Li, Yibo Yang, Ziyi Yang, and Quanyi Lu Intelligent Classroom System Based on Internet of Things Technology . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 610 Xia Wu, Yue Yang, Xulei Yu, and Chonghao Zheng
Coupling Model of Regional Economic System Design Based on Big Data Technology . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 617 Huan Jin International Intelligent Hospital Information System Based on MVC . . . 625 Lifang Shen, Hexuan Liu, Wangqiang Tian, and Shuai Zhang
Heterogeneous Network Multi Ecological Big Data Fusion Method Based on Rotation Forest Algorithm . . . . . . . . . . . . . . . . . . . . . . . . . . . 632 Yun Liu and Yong Liu Custom Tibetan Buddhist Ceramics Used by Royalties in the Qing Dynasty Based on Han-Tibetan Cultural Evolutionary Algorithm . . . . . 640 Jie Xie The Analysis of the Cultural Changes of National Traditional Sports in the Structure Model of Achievement Motivation . . . . . . . . . . . . . . . . 648 Yuhua Zhang Computer Network Security and Preventive Measures Based on Big Data Technology . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 656 Bin Wang, Long Guo, and Peng Xu Construction and Practice of Power Grid Dispatching Intelligent Defense System Based on Multivariate Data Fusion and Deep Learning . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 664 Xinlei Cai Analysis of Key Technologies for Artificial Intelligence in Regulation of Power Grid . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 671 Yanlin Cui, Xinglei Cai, Kai Dong, Xiangzhen He, and Zhenfan Yu Design of Online Monitoring System for the Status of Cable Based on Wireless Sensor . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 677 Chaoqiang Hu, Yingmin Huang, and Cuishan Xu Analysis of Automatic Detection Technology for Abnormal Faults of Cable Nodes in Smart Grid . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 683 Qingyun Hu, Yingmin Huang, Cuishan Xu, Shengquan Li, and Hanfeng Zou Research on Development Status of Modern Wireless Communication Technology and Its Future Development Trend . . . . . . . . . . . . . . . . . . . 689 Xiangyu Liu and Lu Wang Feature Based Lunar Mare Volcanic Dome Recognition . . . . . . . . . . . . 694 Zhe Shen Research on Construction of Campus Smart Card System and Digital Campus . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 
. . . . . . . . . . . . . . . . . . 701 Xiyuan Yin
Measurement High-Quality Urbanization Development Based on Composite System—Take Western China as an Example . . . . . . . . . 707 Liping Sun and Jun Yang Research on Intelligent Monitoring System for Anti-stealing Electricity . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 714 Cong Tian, Chang Su, Chao Yang, and Yi Zheng Application of Text Proofreading System Based on Artificial Intelligence . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 722 Jing Gao and Zhishuai Guo Application of Information Fusion in Fault Diagnosis of Electronic Products . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 728 Rongxia Duan, Baocai Xu, Jiaru Liu, and Xia Pu Blended Teaching Reform and Practice of Tax Law Based on TPACK Framework . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 735 Liang Xing Oil and Gas Detection and Recovery Methods in Oil and Gas Storage and Transportation Based on Artificial Intelligence . . . . . . . . . . . . . . . . 743 Jing Zhao, Li Li, and Zhiguo Wang Application of Improved BP Neural Network to the Modeling of Electric Gear Pump . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 751 Zhongqiang Liu, Li Zhang, Chunxiao Zhang, Xiangfei Kong, and Anan Shen A Method of Complex System Reliability Evaluation Based on Universal Generate Function . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 760 Jian Chen, Jianyin Zhao, Xiaoming Wang, and Yan Wang Knowledge Field Activity of Industry-University-Research Alliance on 5G Knowledge Flow and 5G Technology Innovation Performance . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 767 Peng Zhang and Jianming Zhou The Mode of Computer-Aided Translation Under the Background of Big Data Innovation and Development . . . . . . . . . . . . . . . . . . . . . . . . 
774 Li Guo, Yuan Guo, and Zhenghua Xue Decision Tree Classification Algorithm in College PE Teachers’ Score Analysis . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 780 Jianhua Sun Improvement and Optimization of ZigBee Network Routing Algorithm . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 787 Xiaoqing Zhou, Jiangjing Gao, Xingxian Luo, and Jianqiong Xiao
Artificial Intelligence in the Field of Driverless Cars . . . . . . . . . . . . . . . 794 Lipeng Liu Object-Oriented Database and O/R Mapping Technology . . . . . . . . . . . 800 Wei Deng Application of a Classroom Check-In and Naming System . . . . . . . . . . 807 Wei Deng Analysis of Rural Inclusive Finance Development Path Based on “Internet +” Perspective . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 813 Manli Weng Supply and Demand Congestion of Tourism Based on the Big Data Method-Take Guilin as Example . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 821 Jingyun Qin and Shumeng Li The Mobilization Mechanism of Young Volunteers in Major Emergencies Based on Big Data Analysis Technology . . . . . . . . . . . . . . 830 Xu Wang Psychological Characteristics of Special Groups of College Students Based on Artificial Intelligence and Big Data Technology . . . . . . . . . . . 837 Xiujuan Jia Artificial Intelligence Technology Simulation University Three Whole Education Moral Education Model Research . . . . . . . . . . . . . . . . . . . . . 844 BaoLong Wang Architectural Design of a Campus Second-Hand Commodity Trading Platform . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 851 Wei Huang, Zhuo Li, Yongjiang Wang, and You Tang Assumption of Missing Processing of Sensor Acquisition Data Based on Multiple Interpolation . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 858 Zhuo Li, Yufan Liu, Helong Yu, and You Tang Factor Analysis on Influence Factors of Campus Express Service Satisfaction Degree Based on SPSS Statistical Analysis Software . . . . . . 864 Fangbo Hou and You Tang Financial Management Construction Based on Informatization . . . . . . . 871 Ruili Wang The Intercultural Communication Ability of Hanban Teachers Based on Artificial Intelligence and Big Data . . . . . . . . . . . . . . . . . . . . . . . . . . 
879 Dongping Chen Prediction and Compensation Algorithm of Variable Curved Surface in Multi-point Forming . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 886 Qingfang Zhang and Hongfen Wang
A Study on the Application of Hierarchical Corresponding Technique to TCM Chinese-English Electronic Dictionary . . . . . . . . . . . . . . . . . . . 894 Meng Wang, Taoan Li, Zhimei Wang, and Yongyi Wen Cutting Device for Production of Spiral Submerged Arc Welded Pipe Based on PLC Control System . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 901 Guiping Wang, Xirui Sun, Zhongbao Luo, and Tianchi Ye On the Construction of Baoji Bronze Culture Exchange Network . . . . . 907 Kanmai Zhi The Practical Teaching System of Computer Basic Course . . . . . . . . . . 914 Xiaoming Yang Exploring the Training Mode of Informationalized Compound Innovation and Entrepreneurship Talents . . . . . . . . . . . . . . . . . . . . . . . 920 Ling Lu Design of Remote Environmental Monitoring System . . . . . . . . . . . . . . 926 Jian Huang The Liquid Concentration Measuring Instrument Based on Capacitance Sensing . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 932 Jian Huang Analysis of the Realization of Economic Value by the KOL Marketing Model . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 938 Chaoying Song, Linhan Zhang, and Shan Lin The Path of the Development of Socialist Advanced Culture in Universities Based on AHP-Fuzzy Evaluation . . . . . . . . . . . . . . . . . . 946 Shan Lin, Chao Guo, and Chaoying Song Impact of the Internet on the Teaching Effect of Higher Education Based on Big Data Analysis . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 956 Boyu Zang Fast Fitting Method of Complex Network Based on Deep Learning . . . 963 Mei Li and Zexia Huang On the Basic Thinking of Financial Management Consultation in Enterprises Based on Data Analysis in the Information Age . . . . . . . 969 Linyu Xie Operation Mode and Development Strategy of Internet Finance . . . . . . 977 Xingping Zhu Internet Finance and Financing Innovation of Small and Micro Enterprises . . . 
. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 982 Qiong Wu
Self-media User Information Sharing Behavior . . . . . . . . . . . . . . . . . . . 987 Xiu-li Jin Unsupervised Video Anomaly Detection Based on Sparse Reconstruction . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 994 ZhenJiang Li, Wenbo Yang, Guangli Wu, and Liping Liu Practice Path and Security Mechanism of “Internet + Education” in the Era of Artificial Intelligence . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1002 Juan Qian Intelligent Control Location Detection System Based on Machine Vision and Deep Learning . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1012 Jin Yao, Jing Feng, Yuzhou Liu, Licheng Chen, Rentang You, Jiaxing Sun, Xiaofei Zhang, Yongzhi Xiang, Xiaoyun Chen, and Jiajie Wu Industrial Electrical Automation Control System Based on Machine Vision and Deep Learning . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1019 Jin Yao, Jing Feng, Yuzhou Liu, Licheng Chen, Rentang You, Jiaxing Sun, Xiaofei Zhang, Yongzhi Xiang, Xiaoyun Chen, and Jiajie Wu Realization of Automatic Control System Based on Artificial Intelligence . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1027 Jin Yao, Zhongping Wu, Jing Feng, Kechun Yan, Yuzhou Liu, Licheng Chen, Rentang You, Xiaofei Zhang, Yongzhi Xiang, and Xiaoyun Chen Critically Discuss Ways in Which Design and Technology Can Be Utilized to Improve the Climate Change Performance of UK Building in the Context of Big Data . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1035 Yonghao Zhu Analysis on Application of Big Data Technology in Audit Practice . . . . 1042 Wei Li Similarity Study of Hydrological Time Series Based on Data Mining . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1049 Yang Yu and Dingsheng Wang Solution of TRIZ Conflict X-Resource Based on LT Table . . . . . . . . . . 
1056 Xunlin Lu, Guozhong Cao, and Haoyang Tian Theoretical Research on Product Quality Evaluation Under the Background of Large Data Based on Gray System——Using Midea Refrigerator as a Case Study . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1064 Yuxiang Niu Multi-scale Tail Risk Transmission Mechanism of Chinese and Russian Stock Market Based on Spatiotemporal Kriging Model . . . . . . 1071 Chenglin Xiao, Weili Xia, and Jijiao Jiang
A Case Study of Transformative Learning of College English Teachers . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1079 Wenbo Zhao Analysis of the Effect of Restoring 10% Vehicle Purchase Tax on the Sales Volume of Small-Displacement Vehicles Based on the Double Difference Model . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1084 Yugang Niu Big Data Based Analysis of the China’s Customer Service Industry in 2019 . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1092 Yawei Jiang and Huali Cai Cross-Border E-Commerce and Cross-Border Logistics in Yunnan Province Under the Background of the One Belt and One Road Based on Big Data Analysis . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1098 Lei Zhang and ShuZheng Zhao Remote Wireless Monitoring and Control System of Heat Exchange Station Based on Cloud Computing . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1105 Wei Li and Jian Fang Map Initialization Technology Based on Conformal Geometric Algebra . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1113 Junchai Gao, Hao Sun, Jiansheng Zhang, and Lijuan Men Integrated Marketing Communications Strategy Based on Data Analysis for Stella McCartney . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1120 Chen Qu and Lifang Chen Economic Performance Analysis of Manufacturing Industry Upgrading in Guangdong Hong Kong Macao Bay Area Based on Smart City . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1127 Yiting Qiu Application of Software Testing Technology in Security and Protection of Power System . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 
1132 Jijun Zeng, Chunsong Yang, Wuqiang Shen, and Jinbo Zhang Application of Artificial Intelligence-Based UAV Photogrammetry Technology in Electric Power Surveying and Mapping Engineering . . . 1138 Wuzhong Dong, Qiuquan Gong, and Kai Yuan Teaching Reform of Mechanical Drawing and Auto CAD Based on Task . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1145 Lihong Zhang Design of Intelligent Fault Location System for Distribution Lines Based on Multi-feature Combination . . . . . . . . . . . . . . . . . . . . . . . . . . . 1152 Meifang Cai
DC Distribution Network Topology and Fault Characteristics . . . . . . . . 1160 Wei Zhang and Tong Qin Evaluation Index System and Comprehensive Evaluation Method of Auxiliary Service in Regional Electricity Market . . . . . . . . . . . . . . . . 1166 TianYi Qu and Tong Qin Coordinated Scheduling Optimization of V2G Technology and Renewable Energy for Electric Vehicles . . . . . . . . . . . . . . . . . . . . . 1172 Xingyu Chen and Yuanchun Fan Electric Vehicles and Renewable Energy Based on Coordinated and Optimal Dispatch . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1179 Tianyi Qu and Jiayin Guo Ship Monitoring System Based on Mobile Public Network . . . . . . . . . . 1185 Jun Liu Dynamic VR Display System of Digital Garment Display Design Based on 5G Virtual Reality Technology . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1191 Huijuan Lai and Ming Lu The Development of Smart Tourism Under the Background of Internet . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1199 Yongqiang Li and Heqing Zhang Construction of Big Data Processing Platform for Intelligent Agriculture . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1206 Shangcong Li and Yijing Zhang Impact of COVID-19 Attention on Pharmaceutical Stock Prices Based on Internet Search Data . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1213 Yuanting Xia and Wenxiu Hu Influencing Factors of Logistics Cost Based on Grey Correlativity Analysis in Transportation . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1221 Wei Bai, Xiaoyu Pang, and Wei Zhang Data Analysis on the Relationship of Employees’ Stress and Satisfaction Level in a Power Corporation in the Context of the Internet . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 
1228 Yuzhong Liu, Zhiqiang Lin, Zhixin Yang, Hualiang Li, and Yali Shen Review on Biologic Information Extraction Based on Computer Technology . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1235 Yuzhong Liu, Zhiqiang Lin, Zhixin Yang, Hualiang Li, and Yali Shen Design of City Image Representation and Communication Based on VR Technology . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1242 Yan Cui and Yinhe Cui
Image Approximate Copy Copyright Detection Technology and Algorithm for Network Propagation Under Big Data Condition . . . . . . 1247 Yan Cui and Yinhe Cui Food Safety Traceability Technology Based on Block Chain . . . . . . . . . 1253 Miao Hao, Heng Tao, Wei Huang, Chengmei Zhang, and Bing Yang Research on Data Acquisition and Transmission Based on Remote Monitoring System of New Energy Vehicles . . . . . . . . . . . . . . . . . . . . . . 1260 Yuefeng Lei and Xiufen Li Computer Network Security Based on GABP Neural Network Algorithm . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1266 Haijun Huang An Optimization Method for Blockchain Electronic Transaction Queries Based on Indexing Technology . . . . . . . . . . . . . . . . . . . . . . . . . 1273 Liyong Wan B-Spline Curve Fitting with Normal Constrains in Computer Aided Geometric Designed . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1282 Zizhi Lin and Yun Ding Application and Analysis of Three Common High-Performance Network Data Processing Frameworks . . . . . . . . . . . . . . . . . . . . . . . . . . 1290 Gen Zhu and Wen-bin Kang New Experience of Interaction Between Virtual Reality Technology and Traditional Handicraft . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1297 Jiang Pu Image Highlight Elimination Method Based on the Combination of YCbCr Spatial Conversion and Pixel Filling . . . . . . . . . . . . . . . . . . . 1303 Jiawei He, Xinke Xu, Daodang Wang, Tiantai Guo, Wei Liu, Lu Liu, Jun Zhao, Ming Kong, Bo Zhang, and Lihua Lei Application of E-Sports Games in Sports Training . . . . . . . . . . . . . . . . 1310 Xin Li and Xiu Yu The Research and Governance of Ethical Disorder in Cyberspace . . . . 1315 Dongyang Chen and Hongyu Wang Big Data Helps the Healthy Development of Rural Electronic Commerce . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 
1321 Dongyang Chen and Yihe Liu Design of Optical Measurement Simulation Training System in Shooting Range Based on DoDAF . . . . . . . . . . . . . . . . . . . . . . . . . . . 1327 Xianyu Ma, Tao Wang, Xin Guan, Lu Zhou, and Yan Wang
Equipment Data Integration Architecture Based on Data Middle Platform . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1334 Qi Jia, Jian Chen, and Tie-ning Wang Analysis of PBL Teaching Design for Deep Learning . . . . . . . . . . . . . . . 1339 Yue Sun, Zhihong Li, and Yong Wei Oscillation of Half-linear Neutral Delay Differential Equations . . . . . . . 1345 Ping Cui Reconstruction Research of Ancient Chinese Machinery Based Virtual Reality Technology . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1352 Hongjun Zhang and Kehui Deng A Query Optimization Method of Blockchain Electronic Transaction Based on Group Account . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1358 Liyong Wan Face Recognition Based on Multi-scale and Double-Layer MB-LBP Feature Fusion . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1365 Kui Lu, Yang Liu, and Jiesheng Wu Application and Challenge of Blockchain in Supply Chain Finance . . . . 1372 Tianyang Huang Preparation and Function of Intelligent Fire Protection Underwear Based on ECG Monitoring . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1378 Feng He, Jinglong Zhang, and Yi Liu The Design and Implementation of Corpus Labeling System for Terminology Identification in Fishery Field . . . . . . . . . . . . . . . . . . . 1385 Yawei Li, Xin Jiang, Jusheng Liu, and Sijia Zhang Research on Data Ethics Based on Big Data Technology Business Application . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1391 Hongzhen Lin and Qian Xu Integrating Machine Translation with Human Translation in the Age of Artificial Intelligence: Challenges and Opportunities . . . . 1397 Kai Jiang and Xi Lu Error Bounds for Linear Complementarity Problems of S-SDD Matrix . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 
1406 Yan-yan Li, Ping Zhou, and Jian-xin Jiang PCA-Based DDoS Attack Detection of SDN Environments . . . . . . . . . . 1413 Li-quan Han and Yue Zhang Risk Assessment of Sea Navigation of Amphibious Vehicles Based on Bayesian Network . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1420 Jian-hua Luo, Chao Song, and Yi-zhuo Jia
The Analysis on the Application of Machine Learning Algorithms in Risk Rating of P2P Online Loan Platforms . . . . . . . . . . . . . . . . . . . . 1426 Wangsong Xie 3D Mesh Model Representation Based on Conformal Geometric Algebra and Its Similarity Assessment Application . . . . . . . . . . . . . . . . 1434 Yichen Qin Grid-Connection Control of Small Hydropower Stations Based on the Principle of Quasi-Contemporaneous Grid-Connected . . . . . . . . 1444 Lixin Yan, Jizhong Wang, and Kai Yang A Probe into the Feasibility and Impact of the Global Decentralized Digital Financial Market . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1452 XinRan Wei An Announcer Shot Detection Algorithm for TV News Programs . . . . . 1457 Yuzao Tan Analysis on the Construction of English Language and Culture Teaching System Based on Artificial Intelligence . . . . . . . . . . . . . . . . . . 1464 Fenghua Tang On the Construction of Distance Education Resources and Platform Based on Cloud Computing . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1470 Song Zhang and Chang Yu The Construction and Practice of Practice Base System Based on Cloud Computing Technology . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1477 Pinzhu Jiang Design and Research of Artificial Intelligence Algorithm in Management Accounting . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1483 Lu Zhang Application of Multimedia Education System Under Clustering Algorithm in Value Orientation of College Students . . . . . . . . . . . . . . . 1489 Wang Xinfu Micro Teaching Online System Based on Teaching Information . . . . . . 1494 Weiwei Zhang The Development and Application of Artificial Intelligence Based on Computer Network Technology in the Background of Big Data . . . . 1499 Yun Fan and Xiaofang Tan The Prevention and Analysis of College Students’ Psychological Crisis Based on Data Mining Technology . . . . . . . . . . . . . . . . . . . . . . 
. . . . . . . 1505 Lili Fang
Data Characteristics Mining and Analysis Technology Under the Background of Massive Data Acquisition . . . . . . . . . . . . . . . . . . . . . 1510 Chen Ma, Jiaxin Lu, Xiaolong Cui, and Jiao Wang Novel Coronavirus Pneumonia Based on Cloud Computing to Explore a New Mode of Physical Education in Colleges and Universities . . . . . . 1517 Guanghui Li Study on the Design of the Municipal Road Drainage System Based on the Sponge City Model . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1523 Shufang Li, Zhenzhen Jiang, and Jian Zhang The Construction of Music Performance Teaching Resource Service Platform Based on Cloud Computing . . . . . . . . . . . . . . . . . . . . . . . . . . . 1529 Yang Li The Effectiveness Platform of Classroom Teaching Based on Cloud Computing . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1535 Fei Shen Evolution Logic and Operation Mechanism of an E-commerce Platform . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1540 Haiyang Su and Weidong Liu Study on Physical Fitness Feasibility Scheme Based on Intelligent Equipment . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1546 ZhiGang Tan and QingWen Tan Design and Application of Web-Based English Courseware Synchronous Learning Support System for Teachers and Students . . . . 1552 Fenghua Tang A Study of Oral English Teaching Mode Based on Electronic Files in a Cloud Environment . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1558 Dandan Wang Physical Education Teaching Evaluation Based on Stochastic Simulation Algorithm . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1563 Gao-xuan Wang Development and Design of Intelligent Gymnasium System Based on K-Means Clustering Algorithm Under the Internet of Things . . . . . . 
1568 Han Yin, Jian Xu, Zebin Luo, Yiwen Xu, Sisi He, and Tao Xiong Study on Pasteurization and Cooling Technology of Orange Juice Based on CFD Technology . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1574 Meiling Zhang
The Mode of Dynamic Block Transfer Teaching Resource Base of Distance Education Digital Virtual Machine Based on Cloud Computing . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1580 Yanhong Zhang Computer Fractal Technology Based on MATLAB . . . . . . . . . . . . . . . . 1586 Weiming Zhao Construction Collaborative Management Method Based on BIM and Control Calculation . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1592 Qiang Zhou and Xiaowen Hu Style Design and Color Analysis of Cloud Shoulder . . . . . . . . . . . . . . . . 1598 Li Wang The Information Strategy of University Education and Teaching Management in the Era of Cloud Computing and Big Data . . . . . . . . . 1604 Jianhua Chen and Huili Dou Real Estate Investment Estimation Based on BP Neural Network . . . . . 1610 Yuhong Chen, Baojian Cui, and Xiaochun Sun An Analysis of the Construction of Teaching Evaluation Model Under the Framework of Web . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1616 Yazhuo Fu Design and Implementation of Automatic Evaluation System in College English Writing Teaching Based on ASP.Net . . . . . . . . . . . . 1622 Guo Jianliang Big Data Service of Financial Law Based on Cloud Computing . . . . . . . 1627 Yaqian Li The Transformation of Traditional Enterprises to the Accounting Industry Based on Cloud Technology . . . . . . . . . . . . . . . . . . . . . . . . . . . 1633 Xuelin Liu Application of Artificial Intelligence Technology in Computer Network Technology . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1639 Tan Xiaofang, Fan Yun, and Fu Fancheng Application and Research of Information Technology in Art Teaching . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1646 Haiyin Wang The System Construction of Computer in the Transformation of Old Urban Areas in China in the Future . . . . . . . . . . . . . . . . . . 
. . . . . . . . . 1652 Chaodeng Yang
xl
Contents
Application of Cloud Computing Virtual Technology in Badminton Teaching in Distance Education . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1658 Feng Xin The Realization of the Apriori Algorithm in University Management . . . . 1666 Zhang Ruyong and Song Limei Study on the Export of BP Neural Network Model to China Based on Seasonal Adjustment . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1672 Ding Qi Big Data Analysis of Tourism Information Under Intelligent Collaborative Management . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1679 Li Sheng and Weidong Liu Construction of Intelligent Management Platform for Scientific Research Instruments and Equipment Based on the Internet of Things . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1686 Dong An, Zhengping Gao, Xiaohui Yang, Yang Guo, and Yong Guan The Remote Automatic Value Transfer System of Intelligent Electrical Energy Meter Verification Device . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1691 Lifang Zhang, Jiang Zhang, Luwei Bai, Zhao Jin, and Qi Zhang Application of Weighted Fuzzy Clustering Algorithm in Urban Economics Development . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1698 Xi Wang The Basic Education of Tourism Specialty Based on Computer Network . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1703 ShuJing Xu The Evaluation of Tourism Resources Informatization Development Level Based on BP Neural Network . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1709 Ming Xiang The Application of Decision Tree Algorithm in Data Resources of Environmental Art Design Specialty in Colleges and Universities . . . 1714 Jing Chen The Construction of an Intelligent Learning System Under the Background of Blockchain Technology – Taking “Data Structure” as an Example . . . . . . . . . . . . . . . . . . . . . . . . . . . 
. . . . . . . . . . . . . . . . . 1719 LinLin Gong Sports Detection System Based on Cloud Computing . . . . . . . . . . . . . . . 1725 Sheng-li Jiao Analysis of the Combination and Application of Design Software in Computer Graphic Design . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1730 Rong Li
The Application of Data Resources in Art Colleges and Universities Under the Big Data Environment . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1735 Hong Liu The Linear Capture Method of Tennis Forehand Stroke Error Trajectory Based on the D-P Algorithm . . . . . . . . . . . . . . . . . . . . . . . . . 1740 You Sun Study on Fracture Behavior of Modified Polymer Materials by Digital Image Correlation Method . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1745 Xiangjun Wang Hydraulic Driving System of Solar Collector Based on Deep Learning . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1751 Yulin Wang Design and Simulation of a Welding Wire Feeding Control System Based on Genetic Algorithm . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1756 Zeyin Wang Color Transfer Algorithm of Interior Design Based on Topological Information Region Matching . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1761 Quan Yuan System Construction Model of Legal Service Evaluation Platform Based on Bayesian Algorithm . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1766 Yanhong Wu Application of Random Simulation Algorithm in Physical Education Evaluation . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1772 Huang Hong The Research on Comprehensive Query Platform for Smart Cities Building . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1776 Zhicao Xu Construction and Application of Public Security Visual Command and Dispatch System . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1782 Jiameng Zhang Research on Dynamic Resource Scheduling Technology of Dispatching and Control Cloud Platform Based on Container . . . . . . 
1787 Dong Liu, Yunhao Huang, Jiaqi Wang, Wenyue Xia, Dapeng Li, and Qiong Feng Application of Container Image Repository Technology in Automatic Operation and Maintenance of the Dispatching and Control Cloud . . . . 1794 Lei Tao, Yunhao Huang, Dong Liu, Xinxin Ma, Shuzhou Wu, and Can Cui
A User Group Classification Model Based on Sentiment Analysis Under Microblog Hot Topic . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1801 Mengyao Zhang and Guangli Zhu University Education Resource Sharing Based on Blockchain and IPFS . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1808 Nan Meng and Shunxiang Zhang Examining the Relationship Between Foreigners in China Food Delicacies Using Multiple Linear Regression Analysis (sklean) . . . . . . . . 1814 Ernest Asimeng, Shunxiang Zhang, Asare Esther, and Mengyao Li Author Index . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1825
Intelligent Garbage Classification System Based on OpenMV

Hanxu Ma, Zhongfu Liu(&), and Yujia Zhai

College of Information and Communication Engineering, Dalian Minzu University, Dalian 116600, Liaoning, China [email protected]

Abstract. In the 20th century, the United Kingdom was the first country to implement waste sorting and recycling, placing sorting bins on the streets to recover usable resources. This practice has since spread worldwide. With the proposal of the "smart city", people's requirements on quality of life keep rising, and the intelligentization of sorting bins has become an inevitable trend. In response to the call for garbage classification, this article designs an intelligent garbage classification system based on an STM32F103 single-chip microcomputer, image processing technology and sensor technology. The article gives a detailed and systematic description of the software framework and hardware design of the entire project. The design mainly uses the SSD algorithm and the OpenMV module to identify and classify the types of garbage. Through the STM32F103 single-chip microcomputer, the system controls the rotation of the steering-gear baffle and of the inner barrel in the garbage bin, thereby performing accurate garbage classification. The intelligent trash can designed in this paper realizes intelligent identification of garbage types and automatically completes their classified processing. As artificial intelligence becomes more widespread, the design has certain innovative, practical and scientific research value.

Keywords: STM32F103 microcontroller · Image processing module · Garbage sorting · Smart trash can · OpenMV
1 Introduction

With the development of urbanization and the continuous innovation of technology, residents' domestic waste is no longer useless refuse; it has huge development value. However, most of the classified garbage bins on the streets of many cities are just ordinary bins with labels, which require manual identification. Since many people do not know the rules of garbage classification and cannot sort correctly, garbage classification does not achieve the desired effect. Therefore, the intelligentization of garbage classification has become an inevitable trend. In recent years, scientific and technical personnel at home and abroad have conducted considerable research on garbage classification. Fan Xiao studied waste recycling and classification with a system that mainly uses metal sensors, infrared tube sensors, etc. to identify and classify garbage, and can basically achieve some simple garbage classification [1]. Wu Bicheng used convolutional neural networks to study garbage classification [2]. LeCun et al. proposed the Convolutional Neural Network (CNN), which reduced the data processing of the model and increased the rate of image processing; this kind of network was quickly applied to recognition tasks such as garbage recognition and face recognition [3]. Aiming at the problems above, this paper designs an OpenMV-based intelligent garbage sorting and processing system [4]. The system uses the SSD algorithm to train the model under the Caffe framework, so as to identify and classify garbage types and accurately sort a single piece of garbage [5]. At the same time, the system can popularize knowledge of garbage classification through voice interaction and promote garbage classification in society, which has certain practical significance and research value.

© The Editor(s) (if applicable) and The Author(s), under exclusive license to Springer Nature Singapore Pte Ltd. 2021 M. Atiquzzaman et al. (Eds.): BDCPS 2020, AISC 1303, pp. 1–8, 2021. https://doi.org/10.1007/978-981-33-4572-0_1
2 Garbage Type Recognition Algorithm

SSD is a popular and powerful target detection network. The network structure includes a basic network, auxiliary convolutional layers and predictive convolutional layers [6]. The basic network extracts low-scale feature maps, the auxiliary convolutional layers extract high-scale feature maps, and the predictive convolutional layers output the location and classification information of the feature maps. The algorithm flow of the SSD network mainly includes a training phase and a prediction phase (Figs. 1 and 2).

Fig. 1. Block diagram of the training phase

Fig. 2. Block diagram of the prediction phase
The prediction layer predicts the rectangular-box information and classification information of each point in the feature map. The loss value at a point is equal to the sum of the rectangular-box position loss and the classification loss. First, the intersection ratio between the prior box and the real box is calculated for each point of the map. If the intersection ratio is greater than the set threshold, the prior box takes the same class as that marked by the real box and is called a positive class; if the intersection ratio is less than the set threshold, the class marked by the prior box is considered to be the background and is called a negative class. Then the prediction layer outputs the prediction box of each point of the map, and the label of the prediction box is the same as the label of the prior box. The loss function of the predicted box and the real box is equal to the sum of the predicted-box position loss and the classification loss.
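The matching rule above can be made concrete with a short sketch. This is an illustrative reimplementation, not the authors' code: the (x1, y1, x2, y2) box format, the `iou`/`match_priors` names and the 0.5 threshold are assumptions (0.5 is the value SSD commonly uses; the paper only says "the set threshold").

```python
def iou(box_a, box_b):
    """Intersection-over-union of two boxes given as (x1, y1, x2, y2)."""
    ix1, iy1 = max(box_a[0], box_b[0]), max(box_a[1], box_b[1])
    ix2, iy2 = min(box_a[2], box_b[2]), min(box_a[3], box_b[3])
    inter = max(0, ix2 - ix1) * max(0, iy2 - iy1)
    area_a = (box_a[2] - box_a[0]) * (box_a[3] - box_a[1])
    area_b = (box_b[2] - box_b[0]) * (box_b[3] - box_b[1])
    union = area_a + area_b - inter
    return inter / union if union else 0.0

def match_priors(priors, gt_box, gt_label, threshold=0.5):
    """Label each prior box positive (ground-truth class) or negative (background)."""
    BACKGROUND = 0
    return [gt_label if iou(p, gt_box) > threshold else BACKGROUND for p in priors]
```

A prior box overlapping a ground-truth box by more than the threshold inherits its class; all others are marked background, exactly the positive/negative split described above.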
3 System Scheme Design

The system used STM32F103RCT6 as the core main-control chip, which has the characteristics of high performance, low cost and low power consumption. For garbage identification, the system used the OpenMV module: by training with the cifar10 neural network on a garbage data set, the garbage was classified and identified. After the type of garbage was identified, the OpenMV module established serial communication with the main-control microcontroller, which received the identification result through the serial port, made the voice chip LD3320 announce the corresponding garbage type, and drove the motor so that the inner barrel rotated accurately to the corresponding position. After the inner barrel had rotated into place, the microcontroller rotated the steering-gear baffle holding the garbage, so that the garbage fell into the corresponding inner barrel and accurate classification was achieved. The system structure design diagram is shown in Fig. 3. Description of the serial numbers in the figure: 1-inner tube, 2-camera, 3-trash identification box, 4-square shaft, 5-solar panel, 6-outer barrel, 7-barrel cover, 8-stepping motor, 9-square-hole bearing, 10-main control box.
4 System Hardware Circuit Design

This system was mainly composed of the minimum-system circuit of the STM32 single-chip microcomputer, the OpenMV garbage recognition circuit, the ultrasonic ranging circuit, the power supply circuit, the voice interaction circuit and the control circuits, as shown in Fig. 4. The minimum system of the STM32F103 microcontroller served as the main processor; it established serial communication with the OpenMV module and controlled the rotation of the motor, the rotation of the steering gear, and the voice interaction.

4.1 Microprocessor Circuit
The main-control chip adopted was the STM32F103RCT6 single-chip microcomputer, chosen for its high performance, low cost and low power consumption. The design used it to receive and process data through the serial port and thereby realize the entire system function.
4.2 OpenMV Garbage Recognition Circuit Design
The OpenMV camera was a small, low-power, low-cost circuit board on which processing algorithms can be implemented in the high-level language Python. On the small hardware module, the core machine vision algorithms were efficiently implemented in C, and a Python programming interface was provided. Python's high-level data structures made it easy to handle the complex outputs of machine vision algorithms, and the user had full control of the OpenMV, including its ten I/O pins.
Fig. 3. System structure design diagram

Fig. 4. Hardware circuit design

4.3 Ultrasonic Distance Measuring Circuit Design
This part of the design was mainly used to judge whether the garbage in the bin was full. The bin designed in this system was 80 cm tall, so the garbage height R can be obtained by subtracting the distance measured by the top-mounted ultrasonic sensor from 80 cm. When R < 25 cm there was little garbage; when 25 cm < R < 65 cm there was more garbage; and when R > 65 cm the bin was full. When a full bin was detected, the single-chip microcomputer lit the red LED to remind people to empty the bin in time.

4.4 Power Circuit Design
The power circuit of the system consisted of two parts: one converted 5 V to 3.3 V for the STM32 single-chip microcomputer, and the other converted the 12 V direct current produced from solar energy to 5 V direct current for the infrared sensor and the ultrasonic module.

4.5 Voice Interactive Circuit Design
This design used the speech recognition chip LD3320, whose ASR technology provides a voice-based user interface (VUI) free of buttons, keyboards, mice, touch screens and other GUI operations, making the user's operation of the system easier, faster and more natural [7, 8]. The keywords to be recognized are simply transferred into the chip as character strings and take effect in the next recognition. For example, when the user asks "What kind of garbage does a mineral water bottle belong to?", the system answers "recyclable garbage"; likewise, when the user puts a mineral water bottle into the bucket, the system announces "recyclable garbage".

4.6 Control Circuit Design
4.6.1 Circuit Design of the Steering Gear
The steering-gear baffle was composed of two steering gears of the same model and a baffle that could rotate on one side. To meet the system requirements, the TD-8120MG steering gear was adopted. This steering gear contains an advanced ASIC to process the single-chip microcomputer's signal, and offers fine resolution and large torque, fully supporting the baffle. When the system was working, the two steering gears were driven by the same single-chip microcomputer signal, so that they stayed synchronized.

4.6.2 Motor Drive Circuit Design
The motor drive circuit adopted the A4988 module. The A4988 contains a converter and a DMOS micro-stepping driver with over-current protection, and can operate bipolar stepping motors in multiple stepping modes. Inputting one pulse on the "STEP" input drives the motor through one microstep; no phase-sequence table, high-frequency control lines or complicated interface programming are needed [9]. The A4988 interface is very suitable for applications where a complex microprocessor is unavailable or overloaded.
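The "one pulse per microstep" behaviour of the STEP input can be sketched as follows. This is a hypothetical illustration, not the system's firmware: the function names are invented, real code would toggle an STM32 GPIO pin wired to the A4988 STEP input, and the 500 µs half-period is an assumed value.

```python
def microstep_pulses(steps, write_pin, delay_us=500, sleep=lambda us: None):
    """Drive an A4988 'STEP' input: each low-to-high edge advances the
    motor by one microstep. `write_pin` receives logic levels 0/1;
    `sleep` is the microsecond-delay hook (a no-op by default)."""
    for _ in range(steps):
        write_pin(1)      # rising edge triggers one microstep
        sleep(delay_us)
        write_pin(0)      # return low before the next pulse
        sleep(delay_us)

# Recording the waveform instead of toggling real hardware:
levels = []
microstep_pulses(3, levels.append)
# levels == [1, 0, 1, 0, 1, 0], i.e. three rising edges = three microsteps
```

Abstracting the pin write behind a callback keeps the pulse logic testable on a PC while the same function body would run unchanged under MicroPython with a real GPIO writer.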
5 System Software Design

5.1 Neural Network Model Training Under Caffe Architecture
The system used the OpenMV module for recognition, so it was necessary to train a Caffe model and then convert it into a network that the OpenMV module could run. Using Caffe and Python, the 64 × 64 px garbage sample pictures were stored in groups according to type, the augment_images.py function was used to expand the garbage samples, and the expanded results were stored in the root directory of the training network, completing the garbage recognition sample data set. The memory-mapped database manager LMDB was then used to make data labels to increase Caffe's reading speed, Caffe's own convert_imageset tool generated the LMDB-format data, and the training of the neural network model was completed for the OpenMV module. After the neural network model was trained, it needed to be reduced to an appropriate size; therefore, a quantization script was used to convert the Caffe model weights and activations from 32-bit floating-point format to 8-bit fixed-point format. Finally, the OpenMV NN converter script was used to convert the model into a binary format; the converter script outputs the code of each layer type, as well as the dimensions and weights of each layer. On the OpenMV Cam, the firmware reads this binary file and builds the network in memory with a linked-list data structure, so the model can be run by the OpenMV Cam.
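Conceptually, the float-to-fixed-point step works as below. This is a simplified sketch under the assumption of a single power-of-two scale per weight set (the Qm.n convention used by CMSIS-NN-style kernels); the actual OpenMV quantization script may choose scales per layer differently.

```python
import math

def quantize_q7(weights):
    """Quantize float weights to signed 8-bit fixed point using a
    power-of-two scale derived from the largest magnitude."""
    max_abs = max(abs(w) for w in weights)
    # number of fractional bits that still fits max_abs into [-128, 127]
    frac_bits = 7 - max(0, math.ceil(math.log2(max_abs))) if max_abs > 0 else 7
    scale = 2 ** frac_bits
    q = [max(-128, min(127, round(w * scale))) for w in weights]
    return q, frac_bits

def dequantize(q, frac_bits):
    """Recover approximate float values from the 8-bit representation."""
    return [v / (2 ** frac_bits) for v in q]
```

With weights like [0.5, -0.25, 0.9] the scale is 2^7 = 128, so each stored byte represents a multiple of 1/128; the rounding error introduced here is the accuracy cost of shrinking the model to fit the microcontroller.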
5.2 OpenMV Garbage Classification Program Design
The system first performed the initialization operation, and then the OpenMV module used the trained model described above to identify the type of garbage. If the system recognized the garbage as hazardous garbage, it quantified the result and assigned the value "3" to the variable a. The MCU and OpenMV then established serial communication [10], and OpenMV sent the value a to the single-chip microcomputer; once the transmission was completed, OpenMV continued to detect the type of garbage (Figs. 5 and 6).

Fig. 5. Caffe model training flowchart

Fig. 6. Flow chart of garbage detection
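The class-to-code mapping and serial hand-off described above can be sketched as follows. The label strings and function names are illustrative assumptions; on the real device `uart_write` would be an OpenMV `UART.write` call, and the codes "1" through "4" are the values assigned to a for recyclable, kitchen, harmful and other waste respectively.

```python
CLASS_CODES = {
    "recyclable": "1",   # recyclable waste
    "kitchen": "2",      # kitchen waste
    "harmful": "3",      # harmful waste (the "3" example in the text)
    "other": "4",        # other waste
}

def send_classification(label, uart_write):
    """Map a recognized garbage label to its single-character code and
    push it over the serial link to the STM32. `uart_write` stands in
    for the OpenMV UART write call."""
    code = CLASS_CODES[label]
    uart_write(code.encode("ascii"))
    return code

sent = []
send_classification("harmful", sent.append)
# sent == [b'3']; on receipt, the MCU rotates the inner barrel
# to the hazardous-waste position
```

Sending a single ASCII byte keeps the protocol trivial for the STM32 side: one serial-receive interrupt, one switch on the byte value.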
6 System Debugging

Power was supplied to the system, and the distance between a human body and the trash can was varied to check whether the system opened the can correctly. Through OpenMV, four groups of garbage samples were trained: waste paper (recyclable garbage), vegetable leaves (kitchen waste), batteries (hazardous garbage) and plastic bags (other garbage). One hundred pictures were collected per group and stored in the garbage sample library. The system then identified each type of garbage 500 times, and the following statistics were obtained (Tables 1 and 2).

Table 1. Identifying garbage statistics

                      Waste paper   Vegetable leaf   Battery   Plastic bag
Number of successes   450           460              477       482
Success rate          90%           92%              95.4%     96.4%
Table 2. Inner barrel rotation statistics

                Waste paper   Vegetable leaf   Battery   Plastic bag
Correct times   499           500              500       500
Success rate    99.8%         100%             100%      100%
As the tables show, the system's identification of single pieces of garbage was stable and reliable, with a recognition success rate greater than 90% for every class. The success rate of correct inner-barrel rotation exceeded 99%, demonstrating reliable and accurate rotation.
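The per-class success rates in Table 1 follow directly from the counts, since each class was tested 500 times; a quick check:

```python
# Success rates from Table 1: each garbage class was identified 500 times.
trials = 500
successes = {"waste paper": 450, "vegetable leaf": 460,
             "battery": 477, "plastic bag": 482}
rates = {name: 100 * n / trials for name, n in successes.items()}
# rates == {"waste paper": 90.0, "vegetable leaf": 92.0,
#           "battery": 95.4, "plastic bag": 96.4}  (percent)
```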
7 Conclusion

The OpenMV-based intelligent garbage sorting and processing system designed in this paper mainly uses OpenMV's cifar10 network for machine learning, with an STM32 single-chip microcomputer as the main-control chip driving the motor and the steering gear, realizing a trash can that identifies garbage intelligently. Repeated tests show that the system can classify and treat single pieces of garbage, and it has certain market research value and application value.

Acknowledgements. Fund project: innovation training project support for college students in Liaoning province (202012026167).
References
1. Xiao, F., Fuyan, G., Rui, G., Heng, Z.: Design of automatic sorting garbage collection bin based on STM32 control. Sci. Technol. Innov. Appl. 25 (2017). (in Chinese)
2. Wu, B., Deng, X., Zhang, Z., Tang, X.: An intelligent garbage classification system based on convolutional neural networks. Phys. Exp. 39(11). (in Chinese)
3. LeCun, Y., Chopra, S., Hadsell, R.: A Tutorial on Energy-Based Learning. MIT Press, Cambridge (2006)
4. Jeeva, B., Sanjay, V., Purohit, V., Tauro, D.O., Vinay, J.: Design and development of automated intelligent robot using OpenCV. In: International Conference on Design Innovations for 3Cs Compute Communicate Control (2018)
5. Luo, Q., Ma, H., Tang, L., Wang, Y., Xiong, R.: 3D-SSD: learning hierarchical features from RGB-D images for amodal 3D object detection. Neurocomputing 378(22), 364–374 (2020)
6. Kelly, J.D., Hedengren, J.D.: A steady-state detection (SSD) algorithm to detect non-stationary drifts in processes. J. Process Control 23(3), 326–333 (2013)
7. Jiang, H., Chen, Z.: Small intelligent home system with speech recognition based on ARM processor. In: Proceedings of the 9th International Workshop on Computer Science and Engineering (WCSE 2019), pp. 540–545. Science and Engineering Institute (SCIEI) (2019)
8. Wang, L.: Design of speech recognition system based on LD3320 chip. In: Proceedings of the 3rd International Conference on Materials Engineering, Manufacturing Technology and Control (ICMEMTC 2016), pp. 205–208. Computer Science and Electronic Technology International Society (2016)
9. Khazaee, A., Zarchi, H.A., Markadeh, G.R.A.: Real-time maximum torque per ampere control of brushless DC motor drive with minimum torque ripple. IEEE Trans. Power Electron. 35(2), 1194–1199 (2020)
10. Junlin, Y., Kai, F., Kaipeng, W.: Intelligent recognition mobile platform based on STM32. In: International Conference on Circuits, Systems and Devices (2019)
Deformation Monitoring Method of Railway Buildings Based on 3D Laser Scanning Technology

Yang Yu and Fengqin Zhang(&)

Department of Urban Rail Transit, Shandong Polytechnic, Jinan 250104, China [email protected]
Abstract. Railway buildings are easily deformed during long-term operation. To ensure their safety, it is necessary to monitor their deformation. A deformation monitoring method for railway buildings based on 3D laser scanning technology is proposed.

Keywords: 3D laser scanning technology · Railway building · Deformation monitoring
1 Introduction

The difficulty and workload of railway deformation monitoring increase gradually, so it is necessary to study an effective railway deformation monitoring technology to ensure the safety of railway operation; studying the deformation monitoring technology of railway buildings is therefore of great significance [1]. Deformation monitoring of railway buildings is carried out on the basis of digital imaging of the buildings. Traditionally, a digital calibration instrument is used to monitor the deformation, and CCD array sensors are used to obtain the image of the coded leveling scale [2]. Image processing technology then yields the reading of the leveling scale, and the display of the scale image and of its processing results is handled by the computer built into the instrument [3]. In reference [4], a local-region energy minimization algorithm based on image segmentation is proposed; it can monitor the deformation points of railway buildings, but it easily falls into a local optimum during image segmentation, so its monitoring accuracy is poor [4]. To solve the above problems, this paper presents a deformation monitoring technology for railway buildings based on 3D laser scanning technology.
© The Editor(s) (if applicable) and The Author(s), under exclusive license to Springer Nature Singapore Pte Ltd. 2021 M. Atiquzzaman et al. (Eds.): BDCPS 2020, AISC 1303, pp. 9–16, 2021. https://doi.org/10.1007/978-981-33-4572-0_2
2 Image Acquisition and Preprocessing

2.1 3D Laser Scanning Imaging Acquisition of Railway Buildings

The 3D laser scanning image processing method is used to monitor the deformation of railway buildings. Firstly, the 3D laser scanning imaging model of railway buildings is constructed, and the 3D laser scanning method is used for the railway buildings' imaging, domain feature segmentation and region information fusion. Assume that the pixel information feature quantity of 3D laser scanning imaging of railway buildings is $\{X(t),\ t = 0, 1, \ldots, L-1\}$. $X(0), X(p), X(2p), \ldots, X((N-1)p)$ are selected as 3D laser scanning points, every $p = L/N$ training vectors are selected as a local feature, and the 3D laser scanning of railway buildings is carried out [5]. The weight coefficients of the scanning are:

$$\omega_j = (\omega_{0j}, \omega_{1j}, \ldots, \omega_{k-1,j})^T \qquad (1)$$

The image is divided into regions, and the pixel-level visual difference of the 3D laser scanning imaging of railway buildings is obtained by the histogram estimation method. In the fitting process of the 3D laser scanning imaging surfaces, several parts are first found; the optimal solution reconstructs the digital scanning image by estimating the regions of the key information points, and a characteristic density model is obtained for the key information points of the railway buildings. The 3D laser scanning imaging of railway buildings is:

$$h(t) = \sum_i a_i(t)\,\delta(t - iT_S) + r \qquad (2)$$

Because of the pixel distortion of the 3D laser scanning imaging template, the sensitivity of monitoring the key information points of railway buildings differs considerably, so difference compensation is carried out in the sensitive domain and the 3D laser scan of railway buildings is compensated [6]. The binary separation result of the 3D laser scanning images is obtained by empirical morphological segmentation with the dissimilar-feature matching method:

$$\mathrm{Data}(x, y, d(x, y)) = |u(x - d(x, y), y) - \tilde{u}(x, y)|^2 \qquad (3)$$

In the formula, $\tilde{u}$ is a reference image of the 3D laser scan of a railway building, and $u$ denotes the 3D laser scanning binary image of the railway building.

2.2 Noise Reduction of Railway Buildings
On the basis of laser image acquisition of the railway building by 3D laser scanning technology, noise reduction and median filtering of the 3D laser scanning image are carried out [7]. The vector quantization information of the 3D laser scanning imaging is expressed as $S_t$ at moment $t$, with data samples $\mathrm{Ima}(x_t) = p(x_t \mid d_{0,\ldots,t})$, and the 3D laser scanning imaging points of the railway building are:

$$S_t = \{ s_t^j (x_t^j, w_t^j),\ j = 1, 2, \ldots, N \} \qquad (4)$$

For $t = 0, 1, \ldots, k$, the overlapping sub-blocks of the 3D laser scanning image data of railway buildings are described by:

$$x_k = f(x_{k-1}, u_{k-1}, w_{k-1}) \qquad (5)$$

In the formula, $u_k$ indicates the state input data of the 3D laser scanning image, which are enhanced by median filtering; the noise is eliminated by the median filter [8]. The component vector of the noise is:

$$z_k = h(x_k, M, v_k) \qquad (6)$$

A regional confidence transfer function model of the 3D laser scanning images of railway buildings is constructed for blind separation of noise points:

$$\mathrm{Ima}(x_t) = p(x_t \mid z_t, u_{t-1}, z_{t-1}, \ldots, u_0, z_0) \qquad (7)$$

The grayscale pixel features extracted from the key information points of railway buildings on the edge of the laser image are:

$$a(c_j) = A_3 e^{-c_j / T_3}, \quad \{ c_j^{(0)} = 0,\ j = 0, 1, \ldots, N-1 \} \qquad (8)$$

According to the direction of the image texture gradient, the key information points of railway buildings are monitored accurately [9]. The distribution interval of the key information points is expressed as:

$$\begin{bmatrix} x \\ y \end{bmatrix} = \begin{bmatrix} \cos\theta & -\sin\theta \\ \sin\theta & \cos\theta \end{bmatrix} \begin{bmatrix} \xi \\ \eta \end{bmatrix} \qquad (9)$$

where

$$\theta = \arctan\!\left( \frac{\partial u / \partial y}{\partial u / \partial x} \right) \qquad (10)$$

The 3D laser scanning image of railway buildings is then denoised with a preset monitoring threshold.
3 Realization of Image Segmentation and Deformation Monitoring

3.1 3D Laser Scanning Image Segmentation of Railway Buildings

On the basis of the above 3D laser scanning image acquisition and median filtering, the deformation monitoring technology is optimized: a deformation monitoring technology based on block segmentation and feature fusion of the 3D laser scanning image is proposed. The filtered 3D laser scanning image is segmented by the subspace decomposition method, with multi-round spatial decomposition and multi-round iteration [10]. The 3D laser scanning images of railway buildings are enhanced as follows.

First-round prediction of the 3D laser scanning image:

$$x(2k+1) = (x(2k) + x(2k+2)) \cdot a + x(2k+1) \qquad (11)$$

First-round iteration:

$$x(2k) = (x(2k-1) + x(2k+1)) \cdot b + x(2k) \qquad (12)$$

Second-round prediction:

$$x(2k+1) = (x(2k) + x(2k+2)) \cdot c + x(2k+1) \qquad (13)$$

Second-round iteration:

$$x(2k) = (x(2k-1) + x(2k+1)) \cdot d + x(2k) \qquad (14)$$

After two rounds of prediction and iterative calculation on the 3D laser scanning image of railway buildings, the holographic texture features are obtained, and the key information points of railway buildings are obtained by dividing the grayscale areas between the two points. The maximum gray value is:

$$\mathrm{pixel}_A = \max\!\left( \sum_{i=1}^{8} (Q - P) \right) \qquad (15)$$

The Yager statistical characteristic function is used to record the key information points in the image of railway buildings. In the constructed $4 \times 4$ sub-region, the difference point of the background area $G$ of the key information points in the image is obtained as $K(x_0, y_0)$. By segmenting the 3D laser scanning image blocks of railway buildings, the difference between the edge gray value and the current gray value at each scale of the image is taken as the statistic for deformation monitoring.
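The two-round prediction/iteration of Eqs. (11)-(14) has the shape of a lifting scheme: odd samples are predicted from their even neighbours, then even samples are updated from the new odd neighbours. The sketch below is an illustrative reading of those equations, not the authors' implementation; the coefficients a-d are left as parameters because the paper does not give their values, and clamping indices at the boundaries is an assumption.

```python
def lift_round(x, pred_coef, upd_coef):
    """One prediction + iteration round, Eqs. (11)-(12): odd samples get
    (x(2k) + x(2k+2)) * a added, then even samples get
    (x(2k-1) + x(2k+1)) * b added. Indices are clamped at the edges."""
    y = list(x)
    n = len(y)
    g = lambda i: y[min(max(i, 0), n - 1)]   # clamped access (assumed)
    for k in range(n // 2):                  # prediction over odd samples
        i = 2 * k + 1
        if i < n:
            y[i] += (g(i - 1) + g(i + 1)) * pred_coef
    for k in range(n // 2 + 1):              # iteration over even samples
        i = 2 * k
        if i < n:
            y[i] += (g(i - 1) + g(i + 1)) * upd_coef
    return y

def enhance(x, a, b, c, d):
    """Two rounds, Eqs. (11)-(14)."""
    return lift_round(lift_round(x, a, b), c, d)
```

With all coefficients zero the signal passes through unchanged, which makes the parameterization easy to sanity-check before choosing enhancement coefficients.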
3.2 Realization of Deformation Monitoring of Railway Buildings
Combined with the key information point detection method, the deformation features are extracted, and the deformation feature points of railway buildings are fitted by the feature fusion method. The grayscale values of the 3D laser scanning images of railway buildings are extracted and expressed as:

$$Econ(v_i) = |EdgeGray(i) - Gray(i)| \tag{16}$$
The block segmentation of the 3D laser scanning image adopts the subspace decomposition method, and the multi-round spatial decomposition method is used to enhance the 3D laser scanning image of the railway building. Assuming that the pixels of the railway building deformation monitoring area are independently distributed in each $3\times 3$ sub-block, the characteristics are defined as:

$$P(x_{w_3}, y_{w_3} \mid H) = \prod_{x_i \in w_3}\prod_{k=1}^{K} a_k\, g(x_{ij}, y_{ij} \mid \mu_k, \sigma_k^2) \tag{17}$$

In the formula, $w_3$ represents a $3\times 3$ pixel block region of the 3D laser scanning image, and $H$ represents the set of key information points of the railway building with unknown distribution parameters:

$$H = \{a_1, a_2, \ldots, a_K;\ \mu_1, \mu_2, \ldots, \mu_K;\ \sigma_1^2, \sigma_2^2, \ldots, \sigma_K^2\} \tag{18}$$
Thus, the energy discriminant function in the subspace of the regional features of the key information points in the 3D laser scanning images of railway buildings is obtained:

$$P(Y) = \frac{\exp\left(-\beta \sum_{c\in C} V_c(Y)\right)}{\sum_{Y}\exp\left(-\beta \sum_{c\in C} V_c(Y)\right)} \tag{19}$$
Finally, the key information points of railway buildings are calibrated as:

$$w(i,j) = \frac{1}{Z(i)}\exp\left(-\frac{d(i,j)}{h^2}\right) \tag{20}$$

where

$$Z(i) = \sum_{j\in \Omega}\exp\left(-\frac{d(i,j)}{h^2}\right) \tag{21}$$
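Equations (20) and (21) define normalized exponential weights over the distances $d(i,j)$. An illustrative Python sketch (the distance function itself is unspecified in the paper, so a precomputed row of distances is assumed):

```python
import math

def calibration_weights(d_row, h):
    """Weights of Eqs. (20)-(21): w(i, j) = exp(-d(i, j)/h^2) / Z(i).

    d_row: distances d(i, j) from point i to each neighbour j;
    h: smoothing bandwidth.
    """
    expo = [math.exp(-d / h**2) for d in d_row]
    z = sum(expo)                    # Z(i), Eq. (21)
    return [e / z for e in expo]    # w(i, j), Eq. (20)
```

By construction the weights for a fixed $i$ sum to one, so equal distances yield equal weights.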
According to the accurate calibration of the deformation point, the accurate monitoring of the deformation position of the railway building is realized.
14
Y. Yu and F. Zhang
4 Simulation Experiment

In order to test the application performance of the proposed method for deformation monitoring of railway buildings, a simulation experiment is carried out. The experiment is designed in Matlab, and the acquisition standard of the 3D laser scanning is IEC/EN 61496. The image size of the railway 3D laser scan is 1000×800 pixels, the gray value of the 3D laser scanning image is 250×250, the total number of gray levels is 24, and the resolution of railway deformation monitoring is 4 m. According to the above simulation environment, the 3D laser scanning image is obtained by simulation of railway deformation monitoring and parameter setting, as shown in Fig. 1.
Fig. 1. Original railway building image
The 3D laser scanning imaging of railway buildings shown in Fig. 1 is taken as the research sample, the image processing is carried out, and the image denoising results are obtained as shown in Fig. 2.
Fig. 2. Image denoising results
Feature fusion method is used to locate the deformation feature points of railway buildings, and the image output of deformation monitoring is shown in Fig. 3.
Fig. 3. Deformation monitoring output image
The analysis of Fig. 3 shows that the deformation of railway buildings can be detected by the proposed method, the deformation location can be detected accurately, and the accuracy of deformation monitoring of railway buildings is improved. In order to compare the monitoring performance, the positioning accuracy of the deformation part is measured as the test index, and the comparison results are shown in Fig. 4.
Fig. 4. Comparison of deformation positioning accuracy of the railway (accurate probability of locating the deformation site versus iterative step number, for the new method and the traditional method)
Figure 4 shows that the proposed method achieves higher positioning accuracy and lower error in deformation monitoring of railway buildings than the traditional method.
5 Conclusions

In this paper, a deformation monitoring method of railway buildings based on 3D laser scanning technology is proposed. The laser image of the railway building is collected by the 3D laser scanning technology, the 3D laser railway building image is denoised,
and the deformation features are extracted by the key information point detection method. The feature fusion method is used to carry out the 3D fitting processing of the deformation feature points of the railway building, the multi-round spatial decomposition method is used to enhance the 3D laser scanning image of the railway building, and the deformation monitoring of the railway is realized. The simulation results show that the positioning accuracy of the proposed method for deformation monitoring of railway buildings is high, the error is small, the accuracy of deformation location is better, and the method has good application value in deformation monitoring of railways.
References

1. Hongdao, F., Yingyue, Z., Maosong, L.: Speckle suppression algorithm for ultrasound image based on Bayesian nonlocal means filtering. J. Comput. Appl. 38(3), 848–853 (2018)
2. Ramos-Llorden, G., Vegas-Sanchez-Ferrero, G., Martin-Fernandez, M., et al.: Anisotropic diffusion filter with memory based on speckle statistics for ultrasound images. IEEE Trans. Image Process. 24(1), 345–358 (2015)
3. Zhou, Y.Y., Zang, H.B., Zhao, J.K., et al.: Image recovering algorithm for impulse noise based on nonlocal means filter. Appl. Res. Comput. 33(11), 3489–3494 (2016)
4. Sudeep, P.V., Palanisamy, P., Rajan, J., et al.: Speckle reduction in medical ultrasound images using an unbiased non-local means method. Biomed. Signal Process. Control 28(6), 1–8 (2016)
5. Zhan, S., Xin, Yu., Yu'e, S., He, H.: Task allocation mechanism for crowdsourcing system based on reliability of users. J. Comput. Appl. 37(9), 2449–2453 (2017)
6. Cheung, M.H., Southwell, R., Hou, F., et al.: Distributed time-sensitive task selection in mobile crowdsensing. In: Proceedings of the 16th ACM International Symposium on Mobile Ad Hoc Networking and Computing, pp. 157–166. ACM, New York (2015)
7. Rui, L.L., Zhang, P., Huang, H.Q., et al.: Reputation-based incentive mechanisms in crowdsourcing. J. Electron. Inf. Technol. 38(7), 1808–1815 (2016)
8. Zhang, Y., Jiang, C., Song, L., et al.: Incentive mechanism for mobile crowdsourcing using an optimized tournament model. IEEE J. Sel. Areas Commun. 35(4), 880–892 (2017)
9. Ono, T., Kimura, A., Ushiba, J.: Daily training with realistic visual feedback improves reproducibility of event-related desynchronisation following hand motor imagery. Clin. Neurophysiol. 124(9), 1779–1786 (2013)
10. Shuai, R., Zhang Tao, X., Zhenchao, W.Z., Yuan, H., Yunong, L.: Information hiding algorithm for 3D models based on feature point labeling and clustering. J. Comput. Appl. 38(4), 1017–1022 (2018)
Research on Real-Time Compression and Transmission Method of Motion Video Data Under Internet of Things Liang Hua(&), Jiayu Wang, and Xiao Hu State Grid Jiangsu Electric Power Co., Ltd., Wuxi Power Supply Branch, Wuxi 214000, China [email protected]
Abstract. In order to improve the real-time transmission ability of motion video data, it is necessary to compress the video data. A real-time compression method for motion video data based on wavelet analysis and vector quantization is proposed. The two-dimensional wavelet transform is used to decompose the motion video and transform it between the time and frequency domains, and the quantization error is used to compensate the video data. According to the method, the motion video data under the Internet of things are processed by LBG vector quantization, and the error compensation coding method is used to smooth the noise of the motion video data under the Internet of things. The motion video of the N-level codebook is coded, and combined with the multi-layer wavelet scale decomposition method, real-time compression of the motion video data under the Internet of things is realized. Simulation results show that the proposed method can achieve better real-time compression and transmission of motion video data, with a lower output error rate.

Keywords: Internet of Things · Motion video · Compression coding
1 Introduction

With the rapid development of communication technology, multimedia has been integrated into people's life and work. Given the present situation of multimedia communication and the trend of future development, the storage and transmission of digital video information in compressed form will remain the only practical way for a long time to come. The basic idea of data compression coding is to reduce the correlation of video data as much as possible under the premise of ensuring the visual effect, that is, to remove redundant information [1]. The so-called redundant information of video data mainly refers to spatial, temporal and visual redundancy. In essence, video compression reduces the amount of redundancy so as to carry the most information with the fewest symbols. Through transformation, quantization and entropy coding of the original data, the redundancy of the video data is eliminated to achieve compression [2]. The compression and processing of motion video data in the Internet of things is affected by redundant information interference and other factors, which lead to poor

© The Editor(s) (if applicable) and The Author(s), under exclusive license to Springer Nature Singapore Pte Ltd. 2021
M. Atiquzzaman et al. (Eds.): BDCPS 2020, AISC 1303, pp. 17–24, 2021. https://doi.org/10.1007/978-981-33-4572-0_3
18
L. Hua et al.
real-time performance of compression coding and transmission in bandwidth constrained channels. In this paper, an improved method of real-time video data compression and transmission under the Internet of things is proposed [3]. The feature decomposition and time-frequency domain conversion of motion video are carried out by using two-dimensional wavelet transform method.
2 Preprocessing of Motion Video Data Compression and LBG Vector Quantization in the Internet of Things

2.1 Motion Video Data Compression Process and Coding Ideas in the Internet of Things
This paper studies the compression method of motion video data based on improved wavelet transform and vector quantization analysis in the Internet of things. Firstly, the compression theory and mathematical model of motion video data under the Internet of things are given. The essence of motion video compression is vector quantization coding of the motion video. The coding idea is as follows: supposing that the motion video data under the Internet of things is divided into $M$ vectors of length $L$ [4, 5], the motion video is regarded as a series of data segments, each with $L$ data points. The $M$ vectors of the low-frequency part of the motion video data under the Internet of things are divided into $N$ groups with codebook size $N$, $W_j$ $(j = 0, 1, 2, \ldots, N)$, represented by $Y_j$ in group $j$, $j = 0, 1, \ldots, N-1$, which compresses and encodes the motion video. Without changing the space partition, the $M$ data become the centers of the $M$ corrected vector groups, and the number of bits required for the new pixels of motion video data under the Internet of things is expressed as:

$$b_x(P(\hat{A}_n)) = \{b_x(s_j)\},\quad j = 1, 2, \ldots, N \tag{1}$$

The total error of the new codebook for the current vector space partition is minimized, that is:

$$b_x(s_j) = \frac{1}{|s_j|}\sum_{x_i \in s_j} x_i \tag{2}$$
The quantization error compensation method is used to improve the LBG vector quantization algorithm [6], and the structure block diagram of motion video data compression under the Internet of things is shown in Fig. 1.

2.2 LBG Vector Quantization Processing of Motion Video Data
Combined with the above motion video compression process and coding idea, the LBG vector quantization method is used to preprocess the motion video data under the Internet of things. For a group of motion video data sequences under the Internet of things, the elements of adjacent pixels in $\hat{A}_0$ are grouped, and the vector symbols are
Fig. 1. Structure block diagram of motion video data compression in the Internet of things
modified by the center vector matching iteration method [7]. The steps are described as follows:

Step 1: Given the quantization coding level $N$ of motion video data in the Internet of things, the independent threshold value $\varepsilon$ is designed, and the training samples of motion video data under the Internet of things are $\{x_j\}$, $j = 0, 1, \ldots, m-1$. The initial codebook of the $N$-level training samples is built by dividing the motion video into approximate and detailed parts:

$$\hat{A}_0 = \{y_i\},\quad i = 1, 2, \ldots, N \tag{3}$$

where $n = 0$ and $D_{-1} = \infty$.

Step 2: Given $\hat{A}_n = \{y_i\}$, $i = 1, 2, \ldots, N$, wavelet decomposition is used to remove the high-frequency part of the motion video data $\{x_j\}$, $j = 0, 1, \ldots, m-1$, from the Internet of things, and the quantization partition of the lossless motion video on $\hat{A}_n$ is obtained as:

$$P(\hat{A}_n) = \{s_i\},\quad i = 1, 2, \ldots, N \tag{4}$$

where $s_i = \{x_j : d(x_j, y_i) \le d(x_j, y_l)\}$ represents the corresponding cell in the codebook, for all $l = 0, 1, \ldots, N$. For the noisy motion video data in the Internet of things, the total average distortion is calculated as:

$$D_n = D(\{P(\hat{A}_n), \hat{A}_n\}) = \frac{1}{m}\sum_{j=0}^{m-1}\min_{y\in \hat{A}_n} d(x_j, y) \tag{5}$$

Step 3: If $(D_{n-1} - D_n)/D_n \le \varepsilon$, stop and output the final codebook for motion video data vector quantization under the Internet of things; otherwise, continue.
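Steps 1–3 are the classic LBG iteration: partition the samples against the current codebook, measure the average distortion, and recompute each codeword as its cell centroid until the relative distortion drop falls below the threshold. A scalar Python sketch follows; the random initial codebook and the squared-error distance are assumptions, since the paper initializes from the approximate/detail parts of the video instead.

```python
import random

def lbg(samples, N, eps=1e-3, max_iter=100):
    """LBG codebook training following Steps 1-3 (scalar case for brevity).

    samples: training data x_j; N: codebook size; eps: stop threshold.
    Returns the trained codebook {y_i}.
    """
    codebook = random.sample(samples, N)          # initial codebook, Eq. (3)
    prev_dist = float("inf")                      # D_{-1} = infinity
    for _ in range(max_iter):
        # partition into nearest-neighbour cells s_i, Eq. (4)
        cells = [[] for _ in range(N)]
        total = 0.0
        for x in samples:
            i = min(range(N), key=lambda i: (x - codebook[i]) ** 2)
            cells[i].append(x)
            total += (x - codebook[i]) ** 2
        dist = total / len(samples)               # average distortion D_n, Eq. (5)
        if dist == 0 or (prev_dist - dist) / dist <= eps:
            break                                 # Step 3 stop rule
        prev_dist = dist
        # centroid update of each non-empty cell, Eq. (2)
        codebook = [sum(c) / len(c) if c else codebook[i]
                    for i, c in enumerate(cells)]
    return codebook
```

On two well-separated clusters the codewords converge to the cluster means, which is the behaviour the quantization error compensation then refines.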
3 Optimization of Real-Time Compression and Transmission of Motion Video Data

In this paper, a real-time compression method for motion video data based on wavelet analysis and vector quantization is proposed. The error compensation coding is
applied to the sub-videos $S_k$ $(k = 1, 2, \ldots, M)$; two-dimensional wavelet decomposition is adopted to obtain a series of high-resolution sub-motion videos, which are used as the subspace of the wavelet domain [8]. The wcodemat wavelet function is selected as the two-dimensional wavelet function, with the expression:

$$p_{i,j}(A) = \begin{cases} \dfrac{w_{i,j}}{w_i}, & i \ne j \text{ and } e_{i,j} \in A \\ 0, & i \ne j \text{ and } e_{i,j} \notin A \\ 1 - \sum\limits_{j:\, e_{i,j} \in A} \dfrac{w_{i,j}}{w_i}, & i = j \end{cases} \tag{6}$$
The subvector details of the wavelet domain are smoothed by error compensation coding, and the low-frequency coefficients of the motion video are extracted by the appcoef2 function:

$$\theta = \arccos\left(\max\left[\frac{\overrightarrow{W_i S_k} \cdot \overrightarrow{W_i W_j}}{\left\|\overrightarrow{W_i S_k}\right\| \left\|\overrightarrow{W_i W_j}\right\|}\right]\right) \tag{7}$$
where $k = 1, \ldots, M$; $i, j \in \{1, \ldots, N\}$; $i < j$, and the low-frequency coefficients are extracted from the $N$-level codebook by two-stage wavelet transform. The subvector is decomposed in the wavelet domain according to the obtained $S$, the low-frequency part is selected as the subvector in the two-dimensional wavelet domain, and the detail part of the subvector in the wavelet domain is taken [9]. The noise of the motion video data under the Internet of things is smoothed by the error compensation coding method, and the low-frequency coefficients are extracted by two-stage wavelet transform for the $N$-level codebook:

$$I = \frac{\eta_{20}\,\eta_{02} - \eta_{11}^2}{\eta_{00}^4} \tag{8}$$
The feature decomposition and time-frequency domain transformation of the motion video are carried out by using the two-dimensional wavelet transform method [10]. The subvectors of the wavelet domain are obtained as follows:

$$D(C_1, C_2) = \begin{cases} \text{true}, & \text{if } Dif(C_1, C_2) > MInt(C_1, C_2) \\ \text{false}, & \text{otherwise} \end{cases} \tag{9}$$

$$MInt(C_1, C_2) = \min\big(Int(C_1) + \tau(C_1),\ Int(C_2) + \tau(C_2)\big) \tag{10}$$
For the removal of various information redundancy, the noise of motion video data under the Internet of things is smoothed effectively, and the quantized index is processed. The codebook and residuals are coded numerically, and the spatial information, visual redundancy and structural redundancy of motion video are removed according to
the improved wavelet-domain subvector features, so that the removed spatial information, visual information, and structural information of the motion video are:

$$m_{pq} = \sum_{m=1}^{M}\sum_{n=1}^{N} x^p\, y^q\, f(x, y) \tag{11}$$

$$\mu_{pq} = \sum_{m=1}^{M}\sum_{n=1}^{N} (x - \bar{x})^p\, (y - \bar{y})^q\, f(x, y) \tag{12}$$
The feature compression of the motion video data under the Internet of things is carried out, and the gray pixel feature $c$ of the motion video data under the Internet of things is obtained from:

$$c = \sum_{j}^{m} P\big(z(k) \mid m_j(k), z^{k-1}\big)\, P\big(m_j(k) \mid z^{k-1}\big) = \sum_{j}^{m} K_j(k)\, c_j \tag{13}$$
The wavelet threshold compression of each layer is separated, and the feature decomposition and time-frequency domain conversion of the motion video are carried out by using the two-dimensional wavelet transform method. The original wavelet threshold compression library is formed as:

$$v_i = \frac{\sum_{k=1}^{n}\big(1 - (1 - u_{ik}^{\alpha})^{1/\alpha}\big)^m\, (x_k + b x_k)}{(1 + b)\sum_{k=1}^{n}\big(1 - (1 - u_{ik}^{\alpha})^{1/\alpha}\big)^m} \tag{14}$$
For a pair of motion video data $g(x, y)$ under the Internet of things, $M-1$ transfer iterations are carried out, and the iterative recursive formula is:

$$d_{i+1} = 2F\left(x_{i+1} + \frac{1}{2},\ y_i + 2\right) \tag{15}$$
The motion video of the $N$-level codebook is coded, the feature intensity is improved by the autocorrelation accumulation feature extraction method, and the highlight factor is normalized:

$$w^*(k) = w(k) / \|w(k)\| \tag{16}$$
The wavelet threshold compression method is used to calculate the squared difference of the change in gray value before and after the translation of the motion video data in each sub-region:

$$\begin{cases} V_i^d(t+1) = W\, V_i^d(t) + C_1 R_1 \big(P_{dbest}(t) - P_i^d(t)\big) \\ P_i^d(t+1) = P_i^d(t) + V_i^d(t+1) \end{cases} \tag{17}$$
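Equation (17) has the form of a particle-swarm-style velocity/position update with a single attraction term. A minimal Python sketch follows; the inertia `W`, the coefficient `C1`, and the per-dimension random factor `R1` drawn uniformly are assumptions, since the paper does not give their values.

```python
import random

def pso_step(pos, vel, best, W=0.7, C1=1.5):
    """One update of Eq. (17) applied to each dimension d.

    pos, vel, best: current position, velocity, and best-known position
    (lists of equal length). Returns the new position and velocity.
    """
    new_pos, new_vel = [], []
    for p, v, b in zip(pos, vel, best):
        r1 = random.random()                 # R1, fresh per dimension
        nv = W * v + C1 * r1 * (b - p)       # velocity update
        new_vel.append(nv)
        new_pos.append(p + nv)               # position update
    return new_pos, new_vel
```

A particle already at rest on the best-known position stays there, since both update terms vanish.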
Arithmetic coding of N codebook’s motion video is carried out. Combined with multi-scale wavelet scale decomposition method, real-time compression of motion video data is realized.
4 Simulation Experiment and Result Analysis

In order to verify the performance of this method in real-time compression of motion video data, simulation experiments are carried out. The experiment adopts Matlab 7; the characteristic sampling time of motion video data is $T = 0.04$, and the wavelet threshold parameters are R = 0.17827, G = 0.0207, and B = 0.5148. A block model structure of motion video data under the Internet of things is intercepted as the test motion video data. In order to evaluate the compression and recovery quality of the motion video, the peak signal-to-noise ratio (PSNR) and the mean square error (MSE) of the pixel values are used as the evaluation indices. The original motion video is shown in Fig. 2.
Fig. 2. Motion video
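The PSNR and MSE evaluation indices mentioned above can be computed as in the following illustrative Python sketch; the peak value of 255 for 8-bit pixels is an assumption, since the paper does not state the bit depth used.

```python
import math

def mse_psnr(orig, recon, peak=255.0):
    """MSE and PSNR between an original and a reconstructed frame.

    orig, recon: flat pixel sequences of equal length;
    peak: maximum possible pixel value (255 assumed for 8-bit images).
    """
    mse = sum((a - b) ** 2 for a, b in zip(orig, recon)) / len(orig)
    # PSNR = 10 log10(peak^2 / MSE); identical frames give infinite PSNR
    psnr = float("inf") if mse == 0 else 10 * math.log10(peak ** 2 / mse)
    return mse, psnr
```

Lower MSE (and hence higher PSNR) indicates better recovery quality of the compressed video.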
This method is used to compress the motion video data in real time, and the compressed video data is shown in Fig. 3.
Fig. 3. Video data compression output
Figure 3 shows that when this method is used to compress the motion video data, the noise smoothing performance of the motion video data is better. The output bit error rate (BER) of different methods for motion video data compression is tested, and the comparison results are shown in Fig. 4. The results in Fig. 4 show that when the proposed method is used to compress motion video data, it can improve the output SNR and reduce the BER.
Fig. 4. Output bit error rate comparison (BER versus SNR in dB, for the proposed method and the methods in references [5] and [8])
5 Conclusions

A real-time compression method for motion video data based on wavelet analysis and vector quantization is proposed in this paper. The two-dimensional wavelet transform is used to decompose the motion video and transform it between the time and frequency domains, and the quantization error is used to compensate the video data. Simulation results show that the proposed method achieves better real-time motion video data compression and transmission, with an output error rate lower than that of the traditional method.
References

1. Shuai, R., Zhang Tao, X., Zhenchao, W.Z., Yuan, H., Yunong, L.: Information hiding algorithm for 3D models based on feature point labeling and clustering. J. Comput. Appl. 38(4), 1017–1022 (2018). (in Chinese)
2. Wen, X., Ling, Z., Yunhua, C., Qiumin, J.: Single image super-resolution combining with structural self-similarity and convolution networks. J. Comput. Appl. 38(3), 854–858 (2018). (in Chinese)
3. Weisheng, D., Lei, Z., Guangming, S.: Nonlocally centralized sparse representation for image restoration. IEEE Trans. Image Process. 22(4), 1620–1630 (2013)
4. Tomer, P., Michael, E.: A statistical prediction model based on sparse representations for single image super-resolution. IEEE Trans. Image Process. 23(6), 2569–2582 (2018)
5. Tsai, Y.-Y.: An efficient 3D information hiding algorithm based on sampling concepts. Multimedia Tools Appl. 75(13), 7891–7907 (2015). https://doi.org/10.1007/s11042-015-2707-1
6. Ke, Q., Dafang, Z., Dongqing, X.: Steganography for 3D model based on frame transform and HMM model in wavelet domain. J. Comput. Aided Des. Comput. Graph. 22(8), 1406–1411 (2010). (in Chinese)
7. Shungang, H., Qing, Z., Shaoshuai, L.: 3D shape deformation based on edge collapse mesh simplification. J. Dalian Univ. Technol. 51(3), 363–367 (2011). (in Chinese)
8. Mingai, L., Yan, C., Jinfu, Y., Dongmei, H.: An adaptive multi-domain fusion feature extraction method with HHT and CSSD. Acta Electronica Sinica 41(12), 2479–2486 (2013). (in Chinese)
9. Florian, G., Georgios, N., Gharabaghi, A.: Closed-loop task difficulty adaptation during virtual reality reach-to-grasp training assisted with an exoskeleton for stroke rehabilitation. Front. Neurosci. 10, 518 (2016)
10. Masahiko, M., Takashi, O., Keiichiro, S., Toshiyuki, F., Tetsuo, O., Akio, K., Meigen, L., Junichi, U.: Efficacy of brain-computer interface-driven neuromuscular electrical stimulation for chronic paresis after stroke. J. Rehabil. Med. 46(4), 378–382 (2014)
Design of Digital-Analog Control Algorithm for Flash Smelting Metallurgy Feng Guo(&), Qin Mei, and Da Li State Grid Jiangsu Electric Power Co., Ltd., Wuxi Power Supply Branch, Wuxi 214000, China [email protected]
Abstract. To improve the optimal access and transmission capacity of stored data in a cloud storage environment, dynamic task load balancing scheduling is needed. A dynamic task load balancing scheduling method based on adaptive Potter interval equilibrium control is proposed. The dynamic task transmission channel model under cloud storage is constructed. The decision multimode blind equalization method is adopted to estimate the impulse response of the dynamic task transmission channel, combined with the adaptive path forwarding control method. Multipath suppression of the dynamic task information transmission is carried out, the phase shift deviation is removed by the phase weighting method, the Doppler shift characteristic of dynamic task transmission under the cloud storage environment is calculated, and the dynamic task load balancing scheduling is implemented by the Potter interval equilibrium method. The resulting task load balancing scheduling has excellent adaptability and strong resistance to interference, which can reduce the bit error rate of dynamic task transmission.

Keywords: Cloud storage · Dynamic tasks · Potter interval balancing · Load balancing scheduling · Bit error rate
1 Introduction

Cloud storage has been increasingly used for dynamic task information processing to improve the processing speed and capacity of data resources. Load balancing scheduling for dynamic tasks in the cloud storage environment is of great significance in balancing network load and enhancing resource utilization. Therefore, the load-balancing scheduling model of dynamic tasks in the cloud storage environment is studied to enable network users to share data and resources in cyberspace more efficiently. It has a broad development prospect in promoting resource sharing and improving resource utilization [1]. In earlier work, the single-mode decision neural fuzzy system is used to deal with the blind equalization of the channel, which improves the equalization of dynamic task scheduling. However, with the increase of the number of channels, multipath interference and phase shift are easy to occur [3], which results in poor load balancing and scheduling of dynamic tasks. In this work, a dynamic task load balancing scheduling method based on adaptive Potter interval equalization control in the cloud storage environment is proposed, which constructs the dynamic task transmission channel model under cloud storage,

© The Editor(s) (if applicable) and The Author(s), under exclusive license to Springer Nature Singapore Pte Ltd. 2021
M. Atiquzzaman et al. (Eds.): BDCPS 2020, AISC 1303, pp. 25–30, 2021. https://doi.org/10.1007/978-981-33-4572-0_4
26
F. Guo et al.
and designs the channel equalization. Combined with feature extraction and channel forwarding control strategy, dynamic task load balancing scheduling in cloud storage environment is realized. Finally, the feasibility and advantages of the proposed method are verified by simulation experiments [2].
2 Dynamic Task Transmission Channel Model and Channel Equalization in Cloud Storage Environment

2.1 Dynamic Task Transmission Channel Model
The equalization and scheduling of the dynamic task transmission channel in the cloud storage environment is the process of optimizing communication quality through the dynamic task transmission channel. The finite impulse response model of the multipath channel for dynamic task transmission in the cloud storage environment is established [4], and the impulse response of the channel is simulated using complex coefficients. Firstly, the channel impulse response expression of dynamic task transmission in the cloud storage environment is given as:

$$h(t) = \sum_{i} a_i(t)\, e^{j\theta_i(t)}\, \delta(t - iT_S) \tag{1}$$
where $\theta_i(t)$ expresses the path phase shift angle of the dynamic task transmission of the source data in the multimode channel under the cloud storage environment. Using the phase weighting method to remove the path phase shift deviation, the expression of the impulse response of dynamic task transmission in the cloud storage environment is described as:

$$h(t) = \sum_{i} a_i(t)\, \delta(t - iT_S) \tag{2}$$
In the cloud storage environment, the Doppler power and Doppler frequency shift spectrum of dynamic task transmission are independent of each other, and the time delay of each multipath is an integer multiple of $T_S$; the impulse response of dynamic task transmission in the cloud storage environment is obtained as:

$$h(\tau_i, t) = \sum_{i=1}^{N_m} a_i(t)\, \delta\big(t - \tau_i(t)\big) \tag{3}$$
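The impulse response of Eq. (3) describes the channel output as a superposition of delayed, scaled copies of the input. A discrete-time Python sketch follows; the per-path gains and sample delays are placeholders, not values from the paper.

```python
def multipath_output(x, paths):
    """Apply a discrete-time multipath channel in the spirit of Eq. (3).

    x: input sample sequence; paths: list of (gain a_i, delay in samples)
    pairs (placeholder values). Returns the superposed channel output.
    """
    if not paths:
        return []
    length = len(x) + max(d for _, d in paths)
    y = [0.0] * length
    for gain, delay in paths:
        # each path contributes a delayed, scaled copy of the input
        for n, s in enumerate(x):
            y[n + delay] += gain * s
    return y
```

With a single unit-gain, zero-delay path the channel is the identity; adding a second delayed path produces the inter-symbol smearing that the equalizer in Sect. 2.2 must undo.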
The dynamic task transmission channel model in cloud storage environment is constructed. The dynamic task load balancing scheduling is implemented using the channel equalization control method.
2.2 Interference Filtering Suppression in Task Scheduling
In the process of communication transmission, inter-symbol interference (ISI) and signal distortion at the receiving end are produced by the coded transmission of dynamic tasks in the cloud storage environment, which seriously affect the quality of dynamic task transmission [5], so it is necessary to suppress inter-symbol interference. In the cloud storage environment, the dynamic task transmission channel has a serious effect on signal distortion, so it is necessary to measure the multipath characteristics and adopt an adaptive equalization filter to modulate and demodulate the channel. After the IFFT, the multipath signals of the dynamic task transmission channel in the cloud storage environment are obtained as:

$$x_k = \sum_{n=0}^{N-1} C_n\, e^{j2\pi kn/N},\quad k = 0, 1, \ldots, N-1 \tag{4}$$
According to the peak position and amplitude of dynamic task transmission in the cloud storage environment, the propagation time and propagation loss of the corresponding path can be obtained as:

$$R_{cc}(\tau_1, \tau_2, a) = R_{cc}(\tau_1, a)\, \delta(\tau_1 - \tau_2) \tag{5}$$
In the discrete multipath channel, the output of the side-lobe beam is:

$$R_{cc}(\tau, a) = E\big[c^*(\tau, t)\, c(\tau, t + a)\big] \tag{6}$$
Combined with the adaptive path forwarding control method, multipath suppression of dynamic task information transmission is carried out. The link equalization control function in the dynamic task transmission channel is obtained by using the phase re-weighting method to remove the path phase shift deviation:

$$|s(f)| = A\sqrt{\frac{1}{2k}\Big\{\big[c(v_1) + c(v_2)\big]^2 + \big[s(v_1) + s(v_2)\big]^2\Big\}} \tag{7}$$
Therefore, intersymbol interference suppression and channel equalization control for dynamic task transmission in cloud storage environment are realized.
3 Load Balancing Scheduling Model Optimization

This paper proposes a dynamic task load balancing scheduling method in the cloud storage environment based on adaptive Potter interval equilibrium control. Under the cloud storage environment, $R$ contains the trust relationship of the four-tuple $(E_i, E_j, d, t)$ in the dynamic task of cloud storage; a dynamic task weight allocation mechanism is constructed, the orthogonal weighted constrained equilibrium ratio is calculated, and the attribute weights are classified. The efficiency function of a given cloud storage task is:

$$E(i, j) = \begin{cases} \dfrac{e_{ij} - e(i, j)}{e_{\max} - e(i, j)}, & e(i, j) < e_{ij} \\[2mm] \dfrac{e_{ij} - e(i, j)}{e(i, j) - e_{\min}}, & e(i, j) \ge e_{ij} \end{cases} \tag{8}$$
By extracting the temporal scale feature of the cloud storage dynamic task information flow, its frequency modulation law is obtained:

$$f_i(t) = K(t_0 - t),\quad |t| \le \frac{T}{2} \tag{9}$$
The tracking speed and steady-state error of the time-varying system with dynamic task transmission in the cloud storage environment are controlled by fixed compensation shrinkage control, and multipath suppression of the dynamic task information transmission is carried out by combining the adaptive path forwarding control method. The adjustment function of channel equalization control is obtained as:

$$J_{MDMMA} = q \cdot E\Big[\big(|z(k)|^2 - R_{MDMMA}(k)\big)^2\Big] \tag{10}$$
To reduce the steady-state error, an adaptive iterative model is used to carry out dynamic task load balancing scheduling. The optimized iterative function is obtained as:

$$f(k+1) = f(k) - \mu\, q\, e_{MDMMA}(k)\, y^*(k) \tag{11}$$

where

$$e_{MDMMA}(k) = z(k)\big[|z(k)|^2 - R_{MDMMA}(k)\big] \tag{12}$$
Since the value of the iterative step size or adaptive adjustment factor has a great influence on the equalization algorithm, the adaptive adjustment factor is processed by the variable-step-size LMS adaptive filtering algorithm:

$$q = \begin{cases} 1, & \operatorname{sgn}\big(|z(k)|^2 - R_{MDMMA}(k)\big) = \operatorname{sgn}\big(|z(k)|^2 - R\big) \\ 0, & \operatorname{sgn}\big(|z(k)|^2 - R_{MDMMA}(k)\big) \ne \operatorname{sgn}\big(|z(k)|^2 - R\big) \end{cases} \tag{13}$$
By using the Potter interval equalization method, dynamic task load balancing scheduling is carried out. The global iterative function of dynamic task load balancing scheduling in the cloud storage environment is obtained as:

$$J_{MMDMMA,R} = E\Big[\big(z_R^2(k) - R_{MDMMA,R}(k)\big)^2\Big]\, q(k) + \big[1 - q(k)\big]\, E\Big[\big(z_R^2(k) - R_R\big)^2\Big] \tag{14}$$
Based on the above analysis, the dynamic task load balancing scheduling is realized, and the structure block diagram of the optimal scheduling model is obtained as shown in Fig. 1.
Fig. 1. System structure block diagram of dynamic task load balancing scheduling
4 Simulation

Simulation is carried out to validate the feasibility of the proposed method. The number of dynamic task distribution source nodes is 100, and the data source is generated by a Bernoulli binary input modulated by BPSK. The sampling frequency of the cloud storage dynamic task is 1200 kHz, the data length is 1024, and the symbol rate is 1 kBaud. The carrier frequency of cloud storage dynamic task transmission is 10 kHz. The dynamic task transmission and load balancing scheduling in the cloud storage environment are simulated, and the original dynamic task distribution is obtained as shown in Fig. 2.

Fig. 2. Original cloud storage dynamic task distribution
Dynamic task scheduling is implemented based on the proposed method and the traditional method, respectively. The result is shown in Fig. 3. Figure 3 shows that the dynamic task scheduling based on the proposed method has excellent balance and strong resistance to interference, with a bit error rate of 0.0012, which is 13.5% lower than that of the traditional method, indicating that the accurate transmission ability of dynamic tasks is improved significantly.
Fig. 3. Dynamic task scheduling output: (a) traditional method; (b) proposed method
5 Conclusions

A dynamic task load balancing scheduling method based on adaptive Potter interval equilibrium control is proposed in this work. The dynamic task transmission channel model under cloud storage is constructed. The decision multimode blind equalization method is adopted to estimate the impulse response of the dynamic task transmission channel, combined with the adaptive path forwarding control method. Multipath suppression of the dynamic task information transmission is carried out, the phase shift deviation is removed by the phase weighting method, and the Doppler shift characteristic of dynamic task transmission under the cloud storage environment is calculated. The task load balancing scheduling based on the proposed method has excellent adaptability and strong resistance to interference, with the bit error rate of dynamic task transmission being reduced.
Simulation and Prediction of 3E System in Shandong Province Based on System Dynamics

Yanan Wang, Xinyu Liu, Wene Chang, and Miaojing Ying
Department of Economics and Management, Lanzhou University of Technology, Lanzhou, Gansu, China
[email protected]
Abstract. After more than a decade of rapid growth in the industrial economy, the energy and environmental systems are increasingly hindering the long-term stable development of the economic system, which forces local governments to optimize policy design and promote more coordinated development of the economy-energy-environment system. Based on the theory of system dynamics, this paper constructs a system dynamics model of Shandong Province's economy-energy-environment system and discusses the optimal path for the coordinated development of Shandong Province's 3E system by means of Vensim software simulation.

Keywords: 3E system · Development prediction · Shandong Province · System dynamics · Simulation
© The Editor(s) (if applicable) and The Author(s), under exclusive license to Springer Nature Singapore Pte Ltd. 2021. M. Atiquzzaman et al. (Eds.): BDCPS 2020, AISC 1303, pp. 31–37, 2021. https://doi.org/10.1007/978-981-33-4572-0_5

1 Introduction

Since the reform and opening up, China's economy and society have achieved rapid development, and the gross domestic product and residents' income levels have greatly increased. However, the energy and environmental problems brought about by economic development have gradually emerged, such as smog, groundwater pollution, and air quality degradation that directly affect the public, and the energy shortages and environmental pollution that accompany economic development have already become a serious hindrance to sustainable economic growth [1]. Therefore, to correct the current imbalance of development, the key is to continue optimizing the specific implementation path according to local conditions, under central policy, so as to promote the coordinated development of the regional economy-energy-environment system. Shandong is a major economic province. Its economic aggregate ranks third in the country, second only to Guangdong and Jiangsu, but its GDP "gold content" ranks only 26th in the country [3]. There is a huge gap between energy production and consumption, and the environmental situation is severe. Given the rough development mode of high energy consumption, high emissions and high output value [4], how to shift the economic development of Shandong Province from pursuing "weight" to pursuing "quality" is a key issue facing the province's economic and social development. To solve this
problem, the first task is to promote the coordinated and sustainable development of Shandong's economy-energy-environment system. Based on this, this article takes Shandong Province as an example, uses the system dynamics model to construct five scheme models, analyzes the coordinated development of the economy-energy-environment system of Shandong Province, and provides policy design references for the efficient and healthy development of the province's economy and society.
2 Introduction to the System Dynamics Modeling Method

System Dynamics (SD) was created by Professor Jay W. Forrester of the Massachusetts Institute of Technology in the 1960s [5]. Through qualitative and quantitative analysis of the causal relationships among indicators in a system, a system flow diagram and system dynamics equations are established by combining the causal relationships of all elements with historical data, finally yielding a simulation method for dynamic modeling of the system. After simulation, the optimal policy plan can be obtained by adjusting parameters and comparing the simulation results under different parameter settings, so that the system can be optimized and controlled in a targeted manner [6, 7].

The regional economy-energy-environment (3E) system is a multi-dimensional complex system, which gives system dynamics simulation clear advantages for analyzing its coordinated development. First, the 3E system is composed of three subsystems: economy, energy, and environment. Within this composite system there are linear and non-linear relationships, and single or multiple feedback loops, between subsystems and between their elements, matching the characteristics of a composite system dynamics model. Second, the three subsystems are constantly evolving: the direction of evolution may be progress or degeneration, and the three may develop in coordination or fall out of balance; a system dynamics model can simulate well the long-term development trend implied by the internal structure of the entire system. Finally, the ultimate goal of 3E coordination analysis is to identify which subsystems and which elements lag behind and which lead, so that targeted adjustments can improve the coordination of 3E system development. Therefore, this paper chooses the system dynamics model to study how to improve the coordination degree of the economy-energy-environment system of Shandong Province.
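As a minimal illustration of the SD mechanics described above, the sketch below integrates a toy stock-flow model with Euler stepping, the default numerical scheme in Vensim. Every coefficient here is an invented placeholder for illustration, not a calibrated value from the Shandong model.

```python
# Toy stock-flow model: a GDP stock drives energy use (an auxiliary),
# which feeds a pollution stock; treatment drains the pollution stock.
# All coefficients are made-up placeholders, NOT calibrated values.
def simulate(years=10, dt=1.0):
    gdp = 74005.0             # stock: GDP (100 million yuan, 2017 level)
    waste = 70.0              # stock: solid-waste pollution index
    growth_rate = 0.055       # auxiliary: annual GDP growth fraction
    intensity = 0.53          # energy use per unit GDP (hypothetical)
    emit_per_energy = 0.0001  # pollution produced per unit energy
    treat_rate = 0.03         # fraction of pollution treated per year
    history = []
    for _ in range(int(years / dt)):
        energy = gdp * intensity             # auxiliary variable
        inflow = gdp * growth_rate           # rate: GDP growth
        waste_in = energy * emit_per_energy  # rate: pollution produced
        waste_out = waste * treat_rate       # rate: pollution treated
        gdp += inflow * dt                   # Euler update of the stocks
        waste += (waste_in - waste_out) * dt
        history.append((gdp, energy, waste))
    return history
```

Policy scenarios like those in Sect. 4 amount to re-running such a loop with different parameter values (industrial structure, energy intensity, treatment shares) and comparing the trajectories.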
3 Model Building

3.1 Modeling Ideas
First, starting from the central government's and Shandong Province's policy documents and planning outlines for economy, energy, and environment, select the elements that best represent the economic subsystem, energy subsystem, and environmental subsystem,
and analyze the causal and influence relationships among all the elements of the three subsystems. Second, draw causal relationship diagrams and system flow diagrams according to these element relationships, analyze the feedback paths from a qualitative perspective, and then estimate the equations between the elements based on historical data. Third, on the basis of the parameter settings, perform simulation predictions with the help of Vensim software. Finally, after the model passes testing, determine the best plan for coordinated 3E system development by setting different simulation schemes and comparing their results, and propose countermeasures accordingly.

3.2 3E System Flow Diagram
The stock-flow diagram is based on the causality diagram and uses more variables to describe the specific logical relationships between system elements in detail, laying the foundation for subsequent simulations. Drawing on existing research [8–10], the 3E system stock-flow diagram is constructed as shown in Fig. 1.
Fig. 1. 3E system flow diagram of Shandong Province (variables include GDP and the added value and proportion of the primary, secondary and tertiary industries; total energy consumption and energy consumption per unit of GDP; environmental treatment investment amounts and ratios for waste gas, wastewater and solid waste; the three-waste pollution indices; and total population with its growth rates)
3.3 System Dynamics Equation Setting
Combined with the 3E system flow diagram in Fig. 1, simulation is performed with the Vensim system dynamics modeling software. Before simulation, it is necessary to establish the precise measurement relationships between variables, that is, to estimate the measurement equations between variables from the data of each variable in Shandong Province from 2005 to 2017. The data come from the 2018 Shandong Statistical Yearbook.
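Estimating such a measurement equation typically reduces to a least-squares fit on the historical series. The sketch below fits a linear relation of industrial added value to GDP; the data points are synthetic placeholders for illustration, not the actual 2005–2017 Shandong series.

```python
import numpy as np

# Illustrative estimation of one measurement equation of the kind used
# in the model: regressing industrial added value on GDP.
# Both series below are synthetic placeholders (100 million yuan).
gdp = np.array([21847.0, 25966.0, 31081.0, 37809.0, 42063.0])
industry = np.array([8500.0, 10100.0, 12150.0, 14700.0, 16400.0])

# Ordinary least squares via a degree-1 polynomial fit
slope, intercept = np.polyfit(gdp, industry, 1)

def predict(g):
    """Measurement equation: industrial added value as a function of GDP."""
    return slope * g + intercept
```

In practice each arrow in the stock-flow diagram gets an equation of this kind (or a lookup table), and the fitted forms are entered directly into Vensim.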
4 Model Testing and Prediction Result Analysis

In order to promote more coordinated development of the economy-energy-environment system, the government may introduce economic and industrial policies, energy policies and environmental policies. To compare which policies are more conducive to the coordinated development of the 3E system in Shandong Province, this section uses industrial structure, energy efficiency, and environmental protection investment structure as the control variables of the scheme design, and simulates and predicts the development and evolution of the 3E system in Shandong Province under five different schemes.

4.1 Maintain the Status Quo Model
Keeping the basic value of each parameter unchanged, simulation is carried out according to the existing development scale and trend of Shandong Province. The prediction results are shown in Table 1. The prediction for Scheme 1 shows that in 2025, Shandong Province's regional GDP will reach 11,368.2 billion yuan, industrial added value in the same period will be 4,282.98 billion yuan, and energy consumption will be 397.887 million tons of standard coal. Among the three-waste pollution indices, the exhaust gas pollution index will be 55.6, the wastewater pollution index 139.8, and the solid waste pollution index 101.7. Whether in terms of economic output, energy consumption or environmental pollution, this model needs further improvement.

4.2 Economic Structure Adjustment Model
This scheme adjusts the ratio of the primary, secondary and tertiary industry structures to 1:42:57 (versus 5:55:40 in Scheme 1) and keeps the basic values of the other system parameters unchanged, so that the tertiary industry surpasses the secondary industry and the secondary and tertiary industries together dominate the scale of economic development. The simulation shows that Scheme 2 is better than Scheme 1 in economic development, but energy and environmental development regress, so Scheme 2 still leaves room for improvement.
Table 1. Simulation prediction results of Scheme 1

Years | GDP (100 million yuan) | Energy consumption (10,000 tons of standard coal) | Industrial added value (100 million yuan) | Exhaust gas pollution index | Wastewater pollution index | Solid waste pollution index
2017 | 74005 | 39444.6 | 28862.4 | 56.584 | 131.525 | 70.449
2018 | 78382 | 39191.0 | 30403.3 | 56.393 | 132.756 | 80.631
2019 | 83079 | 39047.1 | 32056.8 | 56.222 | 133.914 | 86.105
2020 | 87944 | 39574.8 | 33769.3 | 56.249 | 135.010 | 90.007
2021 | 92933 | 39961.2 | 35525.8 | 56.229 | 136.051 | 93.098
2022 | 98018 | 40187.4 | 37315.7 | 56.159 | 137.044 | 95.686
2023 | 103180 | 40240.2 | 39132.9 | 56.032 | 137.991 | 97.928
2024 | 108405 | 40109.8 | 40972.1 | 55.844 | 138.898 | 99.913
2025 | 113682 | 39788.7 | 42829.8 | 55.589 | 139.768 | 101.700

4.3 Structural Adjustment Model of Environmental Protection Investment
Under the premise of keeping the economic and industrial structure unchanged, this scheme changes the structure of environmental protection investment so that the proportions of waste gas treatment, wastewater treatment and solid waste treatment investment are adjusted to 55:39:6 (versus 65:30:5 in Scheme 1), following a comprehensive-treatment environmental protection investment model for waste gas, wastewater and solid waste. The economic level of this scheme is higher than those of Schemes 1 and 2, but the energy level is lower, and the environmental indicators are mixed, with some rising and some falling. Therefore, Scheme 3 also needs improvement.

4.4 Simultaneous Adjustment Model of Economic Structure and Environmental Investment
This scheme combines Schemes 2 and 3: keeping the energy system variables unchanged, the economic industrial structure and the environmental investment structure are adjusted at the same time (the ratio of the primary, secondary and tertiary industrial structures is 1:42:57, and the investment ratio for waste gas, wastewater and solid waste treatment is 55:39:6). Overall, Scheme 4 improves on Schemes 2 and 3, but still leaves room for improvement.

4.5 Economic-Energy-Environment Coordinated Development Model
This scheme builds on Schemes 2 and 3: the proportions of the added value of the primary, secondary and tertiary industries are set to 1:42:57, and the shares of waste gas treatment, wastewater treatment and solid waste treatment in total environmental protection investment are set to 55:39:6. At the same time, energy consumption per unit of GDP is adjusted downward year by year from 2020 [energy consumption per unit of GDP = (2020, 0.440), (2021, 0.410), (2022, 0.380), (2023, 0.350), (2024, 0.320), (2025, 0.300)]. The system simulation results in this mode are shown in Table 2. A comprehensive judgment, considering the evolution direction of the three subsystems of economy, energy and environment, shows that Shandong's economy-energy-environment system achieves more coordinated development under this scheme.

Table 2. Simulation prediction results of Scheme 5

Years | GDP (100 million yuan) | Energy consumption (10,000 tons of standard coal) | Industrial added value (100 million yuan) | Exhaust gas pollution index | Wastewater pollution index | Solid waste pollution index
2017 | 76722 | 40892.8 | 30845.9 | 57.170 | 121.543 | 88.497
2018 | 81762 | 40881.0 | 32708.9 | 57.060 | 122.165 | 87.362
2019 | 86756 | 40775.3 | 34555.1 | 56.923 | 122.747 | 85.685
2020 | 91687 | 40342.3 | 36377.6 | 56.691 | 123.292 | 83.163
2021 | 96522 | 39574.0 | 38164.9 | 56.356 | 123.803 | 79.043
2022 | 101204 | 38457.5 | 39895.9 | 55.904 | 124.284 | 70.247
2023 | 105575 | 36951.2 | 41511.5 | 55.310 | 124.737 | 64.907
2024 | 109741 | 35117.1 | 43051.5 | 54.567 | 125.167 | 78.736
2025 | 114324 | 34297.2 | 44745.5 | 54.138 | 125.576 | 84.703
5 Conclusion

On the basis of fully verifying the stability and effectiveness of the model, comparison of the five schemes shows the following. First, if the current policy environment is left to develop on its own, Shandong Province's 3E system will achieve economic growth, but energy consumption and environmental pollution will further intensify, and development coordination will not be effectively improved; the government therefore needs to intervene with policy. Second, comprehensive policies should be applied to the economic and industrial structure, the direction of environmental investment, and energy use efficiency, rather than unilateral adjustments. Specifically, it is necessary to accelerate the development of the tertiary industry and increase its share of the economy, improve energy efficiency, and increase investment in the treatment of solid waste and exhaust gas, promoting the coordinated development of the economy-energy-environment system through this three-pronged approach.
References

1. Li, L., Hong, X., Wang, J., Xie, X.: Research on the coordinated development of the economic-energy-environment system coupling based on PLS and ESDA. Soft Sci. 32(11), 44–48 (2018) (in Chinese)
2. Su, J., Hu, Z., Tang, L.: The geographical spatial distribution and dynamic evolution of the coordination degree of China's energy-economy-environment (3E) system. Econ. Geogr. 33(9), 19–25 (2013) (in Chinese)
3. Zhang, H.: Research on SD modeling and simulation of the coordinated development of energy-economy-environment in Shandong Province. J. Chin. Univ. Petrol. 29(2), 5–9 (2013) (in Chinese)
4. Lu, J., Chang, H., Zhao, S., Xu, C.: The evolutionary characteristics of the coupling relationship between energy, economy and environment in Shandong Province. Econ. Geogr. 36(9), 42–48 (2016) (in Chinese)
5. Bernhard, J.A., Marios, C.A.: System dynamics modeling in supply chain management: research review. In: Proceedings of the 2000 Winter Simulation Conference, pp. 342–350 (2000)
6. Qin, H.: Research on China's "energy-environment-economy" system based on system dynamics modeling. Harbin Institute of Technology (2015) (in Chinese)
7. Du, J.: Modeling and simulation of Chengdu energy-environment-economy 3E system based on system dynamics method. Chengdu Univ. Technol. (2016) (in Chinese)
8. Zhou, L., Guan, D., Yang, H., Su, W.: System dynamics analysis of Chongqing's economic-resource-environmental development and simulation of different scenarios. J. Chongqing Normal Univ. 3, 59–67 (2015) (in Chinese)
9. Wang, F.: The temporal and spatial differences and trend prediction of the coordination degree of provincial energy-economy-environment (3E) system. J. Shanxi Univ. Finan. Econ. 38(6), 15–27 (2016) (in Chinese)
10. Hongwei, S., Pang, D.: Geographical spatial distribution and dynamic evolution of China's economic-energy-environment system coordination level. Explor. Econ. Iss. 3, 1–9 (2017) (in Chinese)
Application of UAV 3D HD Photographic Model in High Slope (Highway)

Kaiqiang Zhang, Zhiguang Qin, Guocai Zhang, and Ying Sun
CCCC Fourth Harbor Engineering Institute Co., Ltd., Guangzhou 510230, China
CCCC Key Laboratory of Environment Protection and Safety in Foundation Engineering of Transportation, Guangzhou 510230, China
[email protected]
Abstract. With the rapid development of science and technology, the new generation of aerial photography represented by the UAV has been widely used in all walks of life. As an important part of expressway maintenance, slope inspection has always been the key and difficult point of road inspection. UAV aerial survey technology has the clear advantages of high geometric precision and high resolution, and its introduction solves the difficult problems faced by traditional highway survey. In this study, the UAV is used for slope maintenance inspection, and the characteristics of the UAV, the environment of the expressway slope and the problems existing in inspection are analyzed. Based on the current working mode of manual image analysis, the quality of the inspection task is controlled from the two aspects of shooting and flight, so as to allow the inspectors to interpret the slope state. The research also considers the characteristics of the UAV and of different types of slope, and selects an algorithm suitable for planning the secondary inspection path of the slope. The results confirm that the UAV can effectively solve the problems of image resolution, geometric accuracy of ground objects and flight cost found in large-aircraft aerial photogrammetry, and greatly improve survey efficiency.

Keywords: Highway survey · Aerial photography · High resolution · Slope maintenance inspection
1 Introduction Driven by the national economic development situation, the total length of China’s expressways is also increasing, and the number of road slopes of various sizes is also gradually increasing [1]. It has always been a major problem to maintain roads in some environments with relatively harsh terrain. A slight mistake may lead to natural disasters such as landslides and falling rocks, which may threaten the safety of people’s lives and property [2]. Therefore, as an important part of expressways in some special terrain areas, slope maintenance inspection has always been the focus and difficulty of road inspection [3].
© The Editor(s) (if applicable) and The Author(s), under exclusive license to Springer Nature Singapore Pte Ltd. 2021 M. Atiquzzaman et al. (Eds.): BDCPS 2020, AISC 1303, pp. 38–44, 2021. https://doi.org/10.1007/978-981-33-4572-0_6
Traditional manual highway inspection in a harsh environment not only entails great safety risks and operational difficulties, but also has unavoidable disadvantages such as high cost, delayed inspection feedback and inaccurate information [4]. In the context of the rapid development of the aviation industry, the function and performance of UAVs have steadily improved. In existing practice, UAVs are widely used in the communications, oil, electricity and other industries, greatly enhancing their efficiency and development speed [5]. UAV low-altitude aerial survey can effectively make up for the significant deficiencies of traditional aerial photogrammetry, and its application in highway survey can effectively improve survey efficiency [6]. In this study, combined with engineering examples of highway survey, the content and requirements of highway slope maintenance inspection are clarified, and the constraints on UAV slope inspection are determined from the UAV's own performance, the characteristics of the external environment, and the inspection task requirements [7, 8]. Taking a slope section in our province as the object, slopes were divided into step-like and non-step-like according to inspection needs, their characteristics were analyzed, different aerial photography methods were adopted to formulate the UAV slope maintenance and inspection scheme, and an algorithm suitable for planning the secondary inspection path of the slope was selected [9]. The results show that the scheme is feasible and improves work efficiency. Therefore, the application of UAVs in slope maintenance inspection has practical significance and reference value [10].
2 Research Overview

2.1 Overview of UAV Technology and Its Applications
With the development of photography technology, 5G, the Beidou satellite system, the Internet of Things and other emerging technologies, the "eagle eye" UAV is becoming the "fourth tool" alongside the geologist's traditional "big three" of geological compass, geological hammer and magnifying glass. The unmanned aerial vehicle (UAV) is a flexible, light remote sensing data acquisition platform unconstrained by topography, and has been applied to geological disaster investigation, safety monitoring and other work. UAVs can make up for the defects of manual field investigation, such as inaccessible geological hazard areas, incomplete field observation, low resolution of satellite image data, and the long time needed for disaster judgment based on comprehensive analysis. Equipped with a satellite positioning system and a high-definition camera, the UAV quickly acquires omni-directional image data of ground objects in the target area along an agreed trajectory. Intelligent data processing software is then used to quickly splice the images and perform aerial triangulation, finally building the digital orthophoto and three-dimensional model of the target area. With the development of UAV imaging and modeling technology, the UAV has become one of the important tools in the geological industry.
2.2 Scheme Design Principles of UAV for Slope Inspection
In formulating the UAV slope maintenance and inspection scheme, the principles of practicality, economy, feasibility and safety should be met. Practicality means that the designed inspection scheme can help inspectors complete the daily maintenance inspection of the expressway slope more efficiently, conveniently and safely, and thus has practical application value. Economy mainly refers to whether resources are used reasonably during input and use, so as to minimize the resources consumed in obtaining results of a given quantity and quality. Feasibility means that the designed inspection scheme can be implemented smoothly, that is, it is operable: based on reality, the capabilities of the UAV at the current stage should be fully considered, combined with the environment of the highway slope and the content and methods of slope inspection, to design a scheme in line with the actual situation. The safety of UAV slope inspection has two aspects: first, the safe operation of the expressway, ensuring that UAV slope inspection does not affect the safety of vehicles on the road; second, the safety of the UAV itself, so that it can complete its mission according to the designed scheme without failing to return because of power depletion or damage from obstacles.
3 Work Flow and Example Verification of UAV Aerial Survey

3.1 Working Process of UAV Aerial Measurement
3.1.1 Judge the Weather Conditions
The meteorological requirements for UAV flight are very strict. It is necessary to observe light, cloud thickness, visibility and other indicators to determine whether the conditions for aerial photography are met. If the standard for aerial photography is not reached, do not rush to take off.

3.1.2 Make Preparations
When the weather meets the flight requirements, bring the relevant equipment to the scheduled departure point to prepare for aerial photography. The departure site must be flat and free from radio or other signal interference. It is also necessary to check whether all parts of the UAV are connected in good condition, whether the power is sufficient, and whether the signal link can be established, to ensure a smooth flight.

3.1.3 Record the Aerial Survey Log
Landing coordinates, weather, wind speed and other information during the aerial survey should be fully recorded for later data analysis.
3.1.4 Adjust the UAV Angle as Needed
When more than 30 km from the last take-off point, the angle and attitude of the UAV need to be adjusted before the aerial survey to ensure stable communication.

3.1.5 Manual Remote Control Test
Before take-off, the UAV also needs a manual remote control test to check whether the tail fin, fuselage and nose respond normally to commands; remote control mode can also be used to control the take-off and landing of the UAV in special circumstances.

3.2 Case Verification
The feasibility of the UAV inspection scheme is verified with a high-speed slope section in our province as the experimental object. The aerial experiment was carried out in cloudy weather at 33° latitude and 400 m above sea level. Because of the relationship between focal length and angle of view, the angle of view narrows as the focal length lengthens and widens as it shortens; a focal length of 14 mm was used in this experiment. According to the relationship between ground resolution and shooting height, the GSD settings corresponding to the camera parameters and shooting heights are shown in Table 1.

Table 1. GSD setting corresponding to camera parameter setting and shooting height

Operating period | Aerial object | Light projection direction | Camera parameters
14:00–16:00 | Intercepting ditch | Side down light | f/8, shutter speed 1/125
09:00–11:00 | Slope | Front lighting | f/7.1, shutter speed 1/125

Aerial height (m) | 42.6 | 32.7 | 20.3 | 12.5 | 6.4
GSD (cm) | 1 | 0.78 | 0.52 | 0.29 | 0.16

According to Table 1, theoretically the camera mounted on the UAV can distinguish objects 1 cm wide or more at a distance of 42.6 m, so at a height of 6.4 m it can distinguish objects roughly 1.6 mm wide or more.
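The height-to-GSD relationship in Table 1 is the standard pinhole relation GSD = H × p / f, where H is the flying height, f the focal length (14 mm, from the text) and p the sensor pixel pitch. The pixel pitch is not given in the paper; the value of about 3.3 µm below is an assumption chosen so the formula roughly reproduces the GSD column of Table 1.

```python
def gsd_cm(height_m, focal_mm=14.0, pixel_um=3.3):
    """Ground sampling distance in cm per pixel: GSD = H * p / f.

    focal_mm comes from the text; pixel_um is an assumed pixel pitch.
    """
    return height_m * (pixel_um * 1e-6) / (focal_mm * 1e-3) * 100

for h in (42.6, 32.7, 20.3, 12.5, 6.4):
    print(f"{h:5.1f} m -> {gsd_cm(h):.2f} cm/pixel")
```

Since GSD scales linearly with height, halving the flying height halves the smallest resolvable feature, which is why the 6.4 m pass resolves millimetre-scale cracks.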
4 Effect Analysis of Aerial Photography Experiment

4.1 Aerial Photography Effect
In this experiment, the emergency lane of the expressway was used as the take-off and landing platform for the UAV. As the slope is step-shaped and has a platform gully, the intercepting ditch and the slope surface of the middle step were photographed separately. When collecting the slope surface, the UAV's camera head was adjusted to −45°; when collecting the intercepting ditch, vertical downward aerial photography was adopted; overall aerial photography of the slope was then carried out. The number of images collected for each route is shown in Table 2 and Fig. 1.

Table 2. Statistics of the number of images collected in the experiment

Side slope length (m) | Number of images (sheets) | Intercepting ditch length (m) | Number of images (sheets)
263 | 13 | 237 | 15
248 | 11 | 203 | 13
221 | 11 | 183 | 12
173 | 8 | 134 | 8
118 | 5 | 106 | 6
Fig. 1. Statistics of the number of images collected in the experiment
Aerial photography of the whole slope took 18 min, and 85 photos were collected. By contrast, it takes the inspectors 38 min to inspect the slope on foot and 4 min to fill in the slope observation record form, 42 min in total. The collected images are copied to a computer and interpreted by the inspectors; the three-dimensional presentation is clearly of good quality, meeting their requirements for interpreting the slope and mastering the slope state. According to the experimental results, the inspection scheme proposed in this paper can assist the inspectors in slope inspection; compared with manual inspection, work efficiency is improved by more than 40%. At the
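The efficiency gain quoted above follows from the timings reported in the experiment:

```python
# Time accounting from the experiment: UAV flight vs. on-foot inspection.
uav_min = 18            # aerial collection of the whole slope
manual_min = 38 + 4     # walking inspection + filling the record form
saving = 1 - uav_min / manual_min
print(f"inspection time cut by {saving:.0%}")   # prints: inspection time cut by 57%
```

The 18-minute flight replaces 42 minutes of field work, a time saving of about 57%, comfortably above the "more than 40%" improvement claimed.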
same time, inspectors reported that the intensity of their uphill work was significantly reduced. Matching the pile number and the longitude and latitude of each slope section one to one avoids confusion or omission; not only can subsequent daily slope inspection records be moved from outdoors to indoors, but this also facilitates the filing and sharing of slope inspection records.

4.2 Applying the Optimization Strategy
For highway slope in daily operation and maintenance easily affected by many unstable factors, traffic safety and the stability of the road itself is affected by the unfavorable factors, so in addition to the necessary terrain protection design measures, also need daily maintenance were inspected according to the slope found real-time protection to highway slope engineering. First is to strengthen the inspection and protection of the branch structure, through the slope reinforcement to consolidate its stability. Because this scheme is used to improve the slope rate for unstable slope, it is necessary to carry out key inspections for gravity type, cantilever type and anchor type structures. The second is the inspection of drainage facilities. Because drainage facilities are usually set in the middle of the slope, at the foot of the slope or on the platform at the top of the slope, and drainage ditches and intercepting ditches are usually adopted on the slope surface, etc., the inspection should focus on the observation of whether the drainage facilities are damaged and whether the culvert connection is unimpeded. The third is the inspection of slope protection, which mainly includes three types of protective structures, such as masonry, anchor spray and flexible. Because these structures are usually used in slopes with high stability, local deformation due to weathering must be prevented after slope excavation. Expressway slope is mainly divided into artificial slope and natural slope two kinds, but because of the relatively large range of human engineering activities, and these slopes have the characteristics of sudden relatively strong, so the daily maintenance of the slope inspection has very high requirements. First, different inspection periods should be adopted according to different types of slope diseases or different harm degrees. 
For example, in the plum-rain season the frequency of inspection should be increased beyond the daily routine, reflecting the classification of inspections according to the characteristics of highway slopes in different regions. Second, regarding the inspection scope, the patrol range usually extends not less than 20 m beyond the cutting drainage ditch, and up to the slope top on both sides of the intercepting trench.
5 Conclusion
The total mileage of China's expressways tracks the country's economic aggregate, and both show a significant increasing trend. At the same time, the number of motor vehicles in China has grown substantially, and traffic flow on expressways keeps rising, bringing more uncertain factors to the daily maintenance and safety patrol of expressways. The unmanned aerial vehicle (UAV),
44
K. Zhang et al.
as a new type of high-definition photography tool with low cost and high measuring precision that is not restricted by terrain, meteorological or geographical conditions, can provide information quickly, meet the requirements of daily maintenance inspections of highway slopes, and reduce the risk and intensity of manual patrolling; it therefore has very broad market prospects. In this study, an exposure estimation method was used to adjust the camera according to changes in light intensity during slope inspection. According to inspection needs, slopes were divided into stepped and non-stepped types, and different aerial photography methods were adopted to suit their respective characteristics. The experiments show that the UAV can fully meet the requirements of daily maintenance and inspection of highway slopes, improve work efficiency, and reduce the intensity and danger of climbing slopes. As aerial survey technology matures, UAVs are expected to play an ever greater role in highway surveying.

Acknowledgements. This research was funded by the Science and Technology Project of CCCC Fourth Harbor Engineering Co., Ltd (No. 2019-A-06-I-11).
References
1. Greene, D.: Drone vision. Surveill. Soc. 13(2), 233–249 (2016)
2. Dering, G.M., Micklethwaite, S., Thiele, S.T., et al.: Review of drones, photogrammetry and emerging sensor technology for the study of dykes: best practises and future potential. J. Volcanol. Geoth. Res. 373, 148–166 (2019)
3. Baek, S., Hong, W., Choi, Y.: A study on construction of analysis model for building view environment using UAV. J. Theoret. Appl. Inf. Technol. 96(3), 712–721 (2018)
4. Nelson, J.R., Grubesic, T.H., Wallace, D., et al.: The view from above: a survey of the public's perception of unmanned aerial vehicles and privacy. J. Urban Technol. 26(1), 83–105 (2019)
5. Yijun, F., Qichao, Z.: Research on the key technology of survey measurement image based on UAV. Electron. Meas. Technol. 3(2), 99–103 (2018)
6. Erdelj, M., Natalizio, E., Chowdhury, K.R., et al.: Help from the sky: leveraging UAVs for disaster management. IEEE Pervasive Comput. 16(1), 24–32 (2017)
7. Telmo, A., Joná, H., Luís, P., et al.: Hyperspectral imaging: a review on UAV-based sensors, data processing and applications for agriculture and forestry. Remote Sens. 9(11), 1110 (2017)
8. Khuwaja, A.A., Chen, Y., Zhao, N., et al.: A survey of channel modeling for UAV communications. IEEE Commun. Surv. Tutorials PP(4), 2804–2821 (2018)
9. Luo, Q., Hu, M., Zhao, Z., et al.: Design and experiments of X-type artificial control targets for a UAV-LiDAR system. Int. J. Remote Sens. 41(9), 3307–3321 (2020)
10. Shakeri, R., Al-Garadi, M.A., Badawy, A., et al.: Design challenges of multi-UAV systems in cyber-physical applications: a comprehensive survey and future directions. IEEE Commun. Surv. Tutorials 21(4), 3340–3385 (2019)
Design of Foot Cam Vibration Damping System for Forest Walking Robot
Jing Yin
East University of Heilongjiang, Harbin 150066, Heilongjiang, China
[email protected]
Abstract. With rapid economic development, the degree of automation of processing machinery keeps rising, and attention has turned to the mechanization and automation of forestry operations as a way to improve their efficiency. However, China's forest terrain and operating environment are complex, and the wheeled or tracked walking robots developed so far have many drawbacks: although tracked machines adapt well to terrain, their obstacle-crossing ability is poor. There is therefore an urgent need for a new type of walking robot that can adapt to the complex ground environment of the forest. This article first combines inertial positioning and visual positioning to reduce the impact of carrier jitter on positioning. It then constructs a three-dimensional grid map using an octree model, which effectively reduces storage consumption. Finally, a path planning method based on an improved A* algorithm is studied, including heuristic function design, a dynamically weighted evaluation function, and path smoothing. The paper presents the overall design of the mobile robot and a principle analysis of the cam damping mechanism, together with an ANSYS finite element analysis and nonlinear calculation of the cam during motion, yielding the dangerous area and the deformation displacement of the cam mechanism and laying a theoretical foundation for the walking machine's stable movement in the forest. The research shows that when the forest map is large, the proposed algorithm increases path length and number of turns, but its search time increases by only 5–11 ms compared with the A* algorithm; the algorithm can therefore balance efficiency and path length.
Keywords: Forest walking robot · Shock absorption mechanism · Cam vibration reduction · A* algorithm
1 Introduction
The walking machine is a multi-disciplinary research topic involving mechanics, kinematics, dynamics, environment recognition and harmonic control, and it is an important research direction in modern robotics. The footholds of a walking robot are discrete, so optimal support points can be selected on the reachable ground; even when the surface is extremely irregular, the robot can walk freely by carefully selecting the footholds. In the complex ground environment
© The Editor(s) (if applicable) and The Author(s), under exclusive license to Springer Nature Singapore Pte Ltd. 2021
M. Atiquzzaman et al. (Eds.): BDCPS 2020, AISC 1303, pp. 45–51, 2021. https://doi.org/10.1007/978-981-33-4572-0_7
between forests, legged walking robots can complete work with advantages unmatched by wheeled or tracked locomotion. The uneven forest floor leaves much room for development in the design of the walking robot's shock absorption system and in its energy efficiency. The walking mechanism of future forestry robots should draw on bionics to develop legged, insect-like or compound automatically walking mechanisms. A three-dimensional vision sensor can grasp the shape of objects while predicting, scanning, planting, cutting and performing other operations. At the same time, to track the position and attitude of a robot group, satellite navigation can be used for centralized computer control with instructions issued away from the job site, and the operation status can also be monitored from these data. The future forestry walking robot, operating in groups under unified control, will matter for the transformation and development of society as a whole. This paper considers the working environment of the walking machine, analyzes its design requirements and overall production process, determines the design plan, and carries out the overall design in five parts. Drawing on domestic and foreign examples of applying wooden-shell technology to machine housings, the wooden shell is applied to the walker's housing, and its strength is analyzed with mechanical theory to provide an optimization basis for the prototype shell. The paper also improves the planning efficiency of the A* algorithm; on this basis, a dynamically weighted evaluation function allows the algorithm to balance speed and accuracy.
At the same time, the path is smoothed by removing redundant points, ensuring that the planned path meets the motion control requirements of the two-wheeled self-balancing robot. The experimental results verify the effectiveness of the improved A* algorithm designed in this paper.
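The redundant-point removal mentioned above can be sketched as follows. This is a minimal illustration, not the paper's actual implementation; it assumes the path is a list of integer grid coordinates and keeps only the turning points of the polyline.

```python
def smooth_path(path):
    """Remove intermediate waypoints that lie on a straight segment
    between their neighbours, keeping only start, turns, and goal."""
    if len(path) <= 2:
        return list(path)
    smoothed = [path[0]]
    for prev, cur, nxt in zip(path, path[1:], path[2:]):
        # 2D cross product of the two segment vectors; zero means collinear,
        # so `cur` is redundant and can be dropped.
        cross = (cur[0] - prev[0]) * (nxt[1] - cur[1]) \
              - (cur[1] - prev[1]) * (nxt[0] - cur[0])
        if cross != 0:
            smoothed.append(cur)
    smoothed.append(path[-1])
    return smoothed

# Example: a straight run followed by a diagonal run collapses to 3 points.
print(smooth_path([(0, 0), (0, 1), (0, 2), (1, 3), (2, 4)]))
# → [(0, 0), (0, 2), (2, 4)]
```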
2 Design of Foot Cam Vibration Damping System for Forest Walking Robot
2.1 Structure of the Forest Walking Robot
(1) Power mechanism
The power mechanism mainly consists of a 5 kW DC48-5-15 DC servo motor and a battery, which provide power for the walking machine. The motor's power is transmitted to the main shaft through a chain drive; the main shaft drives the crank through a key connection, and the rear crank is linked in parallel with the front crank through a parallelogram mechanism, so that the front and rear cranks drive the legs of the walking machine [1, 2]. The reducer on the motor uses chain transmission with a reduction ratio of 9, and the gear positions are 0, 1 and −1. The operator starts the machine manually; power flows from the battery box through the motor to the small and large sprockets and then into the crankshaft. The front and rear crankshafts are
connected in parallel through a parallelogram mechanism. The crankshaft drives the legs of the walking machine in circular movements, and walking occurs during this motion: driving friction between the machine's feet and the ground moves the walking robot forward [3, 4].
(2) Shock absorption mechanism
The damping mechanism mainly consists of a cam and a guide-bar system. There are four groups each of cams and rollers, symmetrically distributed on both sides of the machine. The cam and roller cooperate, relying on the cam's own profile curve to compensate for the vertical displacement of the fuselage, keeping the driving plane at a constant distance from the ground and maintaining the stability of the whole machine during walking. The distance between the camshaft center and the crankshaft center is 100 mm, and the distance between the centers of the two cams on the same side is 300 mm. Every time the crankshaft drives the cam through one revolution, the machine walks three steps. Throughout the process, the cam compensates the vertical displacement of the walking mechanism: the compensating displacement is exactly opposite to the displacement produced by the support frame during walking, ensuring the stability of the driving plane [5, 6]. The guide bar, guide-bar sleeve and positioning pin form vertical guide rails located at the four corner points of the frame; the guide rail and positioning pin fix the position of the driving plane and compensate the displacement in the vertical direction.
2.2 Path Planning Algorithm Design of the Two-Wheeled Self-Balancing Robot
(1) A* algorithm
In the A* algorithm, the evaluation function plays a vital role. The common evaluation function is:

f(n) = g(n) + h(n)   (1)

In formula (1), n is the current node; g(n) is the actual cost from the starting node to node n; and h(n) is the estimated cost from node n to the target node, i.e. the heuristic function, which plays the leading role in the evaluation function [7, 8].
(2) Design of the weighted evaluation function
The search speed of the A* algorithm is improved by weighting its evaluation function. Since a fixed weighting cannot guarantee both path length and efficiency, a dynamically weighted evaluation function is designed in this section to balance the two, as shown in Eq. (2).
f(n) = (1 − α)g(n) + αh(n)   (2)

The dynamically weighted evaluation function adjusts the weight according to the distance to the target point: based on the environmental information of different grids in the map, the weighting factor α adjusts the ratio of h(n) to g(n) in the evaluation function, generating different heuristic guidance for the search. The behavior of the heuristic search is tuned by changing the weight α. When α is larger, the heuristic function h(n) occupies a larger proportion of the evaluation function and the algorithm leans toward a depth-first (greedy) search: the search space is greatly reduced and efficiency is high, but the path obtained may not be optimal. When α is smaller, g(n) dominates and the algorithm leans toward a breadth-first search: the search space is larger and efficiency is lower, but the optimal path can be obtained [9, 10]. α is defined as:

α = d_n / d,  α ∈ (0, 1)   (3)

In Eq. (3), d_n is the Euclidean distance between the current point and the target point, and d is the estimated path length between the starting point and the target point.
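As an illustration of Eqs. (1)–(3), the following is a minimal sketch of A* with the dynamically weighted evaluation function on a 4-connected grid. The grid representation, neighbor order, and the clamping of α below 1 are assumptions for this sketch, not the paper's implementation.

```python
import heapq
import math

def dynamic_weighted_astar(grid, start, goal):
    """A* on a 4-connected grid using f(n) = (1 - a)*g(n) + a*h(n),
    with the dynamic weight a = d_n / d of Eq. (3): greedy far from
    the goal, cautious near it. grid[r][c] == 1 marks an obstacle.
    Returns the path as a list of cells, or None if unreachable."""
    rows, cols = len(grid), len(grid[0])
    h = lambda n: math.dist(n, goal)          # Euclidean heuristic h(n)
    d = h(start) or 1e-9                      # estimated start-to-goal distance
    g = {start: 0.0}                          # actual cost g(n)
    parent = {start: None}
    open_heap = [(h(start), start)]
    while open_heap:
        _, cur = heapq.heappop(open_heap)
        if cur == goal:                       # reconstruct path backwards
            path = []
            while cur is not None:
                path.append(cur)
                cur = parent[cur]
            return path[::-1]
        for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            nxt = (cur[0] + dr, cur[1] + dc)
            if not (0 <= nxt[0] < rows and 0 <= nxt[1] < cols):
                continue
            if grid[nxt[0]][nxt[1]]:
                continue
            g_new = g[cur] + 1
            if g_new < g.get(nxt, float("inf")):
                g[nxt] = g_new
                parent[nxt] = cur
                a = min(h(nxt) / d, 0.99)     # dynamic weight a in (0, 1), Eq. (3)
                f = (1 - a) * g_new + a * h(nxt)   # Eq. (2)
                heapq.heappush(open_heap, (f, nxt))
    return None
```

Note that with α > 0.5 the weighted heuristic is no longer admissible, which is exactly the speed-for-optimality trade-off discussed above.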
3 Experimental Design of Foot Cam Vibration Damping System for Forest Walking Robot
3.1 Two-Wheeled Self-Balancing Robot Positioning Experiment
To verify the effectiveness of the algorithm designed in this paper, two sets of experiments were conducted in two different typical forest environments. In the first, the two-wheeled self-balancing robot performed linear motion in a dense forest with obstacles, moving a distance of about 4.2 m. The second was conducted in a relatively open area between trees, where the robot walked a 4 m × 2.5 m rectangle.
3.2 Simulation Experiment
Suppose the two-wheeled self-balancing robot works in an indoor environment where obstacles are randomly distributed but their positions are known. A grid map is created from this known information with the following initial conditions: grid size 10 cm, starting point (4, 4), target point (27, 27), obstacles occupying 35% of the map, and the heuristic weighting parameter set to 10. Simulation experiments were carried out in the Matlab environment, and the results verified the effectiveness of the algorithm.
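The map setup described above can be reproduced with a sketch like the following (Python rather than Matlab; the 30 × 30 grid dimension is an assumption inferred from the start and target coordinates, and the paper's actual map generator is not given).

```python
import random

def make_grid(size=30, obstacle_ratio=0.35, start=(4, 4), goal=(27, 27), seed=0):
    """Random occupancy grid for the path-planning simulation:
    1 = obstacle, 0 = free. The start and goal cells are always
    forced free so a query is well-posed."""
    rng = random.Random(seed)  # fixed seed for a reproducible map
    grid = [[1 if rng.random() < obstacle_ratio else 0 for _ in range(size)]
            for _ in range(size)]
    grid[start[0]][start[1]] = 0
    grid[goal[0]][goal[1]] = 0
    return grid
```

A planner can then be run repeatedly over maps built with different seeds, which is how averaged statistics like those in Table 1 would be collected.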
4 Experimental Design and Analysis of Foot Cam Vibration Damping System for Forest Walking Robot
4.1 Motion Simulation Results of the Double-Head Cam Deceleration Mechanism
In areas with dense obstacles, the path found by Dijkstra's algorithm is not smooth enough, with many turns over a small range; the A* algorithm produces too many path turns and low smoothness; and the fixed-weighted A* algorithm also turns too often, with poor smoothness and too many turns in obstacle-dense areas, which hinders the operation of the two-wheeled self-balancing robot. The path searched by the improved A* algorithm in this paper is relatively smooth and selects the relatively open space in the second half of the path, saving movement time for the robot. To understand the optimization effect of the improved A* more fully, the algorithms are analyzed in terms of number of expanded nodes, number of turns, path length, and running time; their performance is shown in Table 1.

Table 1. Comparison of algorithm simulation results

Algorithm name                    Expanded nodes  Turns  Path length/cm  Time/ms
Dijkstra algorithm                578             10     387             154
Weighted A* algorithm (α = 0.65)  469             16     397             72
Weighted A* algorithm (α = 0.35)  532             13     379             103
Improved A* (this paper)          145             8      409             17
As Table 1 shows, Dijkstra's algorithm expanded 578 nodes; the obtained path had 10 turns and a length of 387 cm, and the algorithm took 154 ms. The A* algorithm expanded 372 nodes; its path had 12 turns, a length of 393 cm, and took 63 ms. The improved A* algorithm in this paper expanded 142 nodes, 75.9% and 61.8% fewer than Dijkstra's algorithm and the A* algorithm respectively. Its planned path is 16 cm longer than Dijkstra's and 13 cm longer than A*'s, so the path length remains close to the shortest length obtained by Dijkstra's algorithm, while the number of unnecessary turns is reduced by 2 and 3 compared with Dijkstra's algorithm and the A* algorithm respectively, avoiding the movement overhead and control difficulty caused by unreasonable turning of the two-wheeled self-balancing robot. The improved A* algorithm in this paper thus effectively improves the
search efficiency of the A* algorithm, reduces the number of node expansions, reduces the number of turns, and yields a smoother path. Considering forest environment maps of different scales, the performance of the A* algorithm is compared with that of the improved A* algorithm in this paper. The simulation was carried out on grid maps of 2 × 2 m², 3 × 3 m² and 4 × 4 m² with an obstacle rate of 35%, over 50 experimental groups. The number of search nodes, number of turns, path length and algorithm time were averaged and analyzed; the experimental results are shown in Fig. 1.
Fig. 1. Performance comparison of the algorithms on maps of different sizes
As the environment map grows, the number of search nodes, the average number of turns and the average time consumption increase for both algorithms, since the working environment of the two-wheeled self-balancing robot becomes more complicated. Compared with the A* algorithm, the path planned by the improved A* algorithm in this paper is 10–14 cm longer with 2–3 more turns. Although path length and turns increase on large maps, the search time of the proposed algorithm increases by only 5–11 ms over the A* algorithm, so the algorithm can balance efficiency and path length.
5 Conclusions
To address the path planning problems of two-wheeled self-balancing robots, where paths planned by traditional algorithms have too many turns, insufficient smoothness, and poor efficiency when the environment changes greatly, this paper proposes an improved A* algorithm. A new heuristic function is designed that improves the efficiency of path planning; on this basis, a dynamically weighted evaluation function allows the algorithm to balance speed and accuracy. At the same time, the path is smoothed by removing redundant points, ensuring that the planned path meets the robot's motion control requirements. Simulation experiments show that the improved A* algorithm reduces path cost and the number of turns, and improves the speed of the algorithm and the smoothness of the path.
References
1. Morita, K., Wakui, S.: Study of an optimization design method about vibration damping for an electron beam system. J. Jpn. Soc. Precis. Eng. 82(6), 583–588 (2016)
2. Gierlak, P., Szybicki, D., Kurc, K., et al.: Design and dynamic testing of a roller coaster running wheel with a passive vibration damping system. J. VibroEng. 20(2), 1129–1143 (2018)
3. Han, S.R., Cho, J.R.: Investigation of vibration damping characteristics of automotive air conditioning pipeline systems. Int. J. Precis. Eng. Manuf. 17(2), 209–215 (2016)
4. Hajipour, V., Farahani, R.Z., Fattahi, P.: Bi-objective vibration damping optimization for congested location-pricing problem. Comput. Oper. Res. 70, 87–100 (2016)
5. Tapia, J.C., Silva Lomeli, J.D.J., Fonseca Ruiz, L., et al.: Design of a mechatronic system for fault detection in a rotor under misalignment and unbalance. IEEE Latin Am. Trans. 13(6), 1899–1906 (2015)
6. Yang, J.F., Xu, Z.B., Wu, Q.W., et al.: Design of six dimensional vibration isolation system for space optical payload. Opt. Precis. Eng. 23(5), 1347–1357 (2015)
7. Cárdenas, R.A., Viramontes, F.C., Toledo, A.S.: Vibration reduction performance of an active damping control system for a scaled system of a cable-stayed bridge. Int. J. Struct. Stab. Dyn. 15(05), 1450077 (2015)
8. Hermsdorf, G.L., Szilagyi, S.A., Rösch, S., et al.: High performance passive vibration isolation system for optical tables using six-degree-of-freedom viscous damping combined with steel springs. Rev. Sci. Instrum. 90(1), 015113 (2019)
9. Muhammad, B.B., Wan, M., Feng, J., et al.: Dynamic damping of machining vibration: a review. Int. J. Adv. Manuf. Technol. 89(9–12), 2935–2952 (2017)
10. Huang, Z., Hua, X., Chen, Z., et al.: Performance evaluation of inerter-based damping devices for structural vibration control of stay cables. Smart Struct. Syst. 23(6), 615–626 (2019)
Semantic Segmentation of Open Pit Mining Area Based on Remote Sensing Shallow Features and Deep Learning
Hongbin Xie1,2, Yongzhuo Pan1,2, Jinhua Luan1,2, Xue Yang1,2, and Yawen Xi1,2
1 Chongqing Key Laboratory of Exogenic Mineralization and Mine Environment, Chongqing Institute of Geology and Mineral Resources, Yubei 401120, Chongqing, China
[email protected]
2 Chongqing Research Center of State Key Laboratory of Coal Resources and Safe Mining, Yubei 401120, Chongqing, China
Abstract. Mineral resources are an important part of natural resources, and well-ordered mining activity is an important prerequisite for safe mining production, a fair mining market, and the steady construction of an ecological civilization. Optical remote sensing imagery is one of the main carriers reflecting mining activity in open-pit mining areas, and deep learning is widely used for its semantic segmentation; however, because the surface environment of mining areas is complex, segmentation accuracy needs further improvement. In this paper, taking Gaofen-2 optical remote sensing imagery as the data source, a remote sensing image sample set of open-pit mining areas is constructed by manual annotation. Shallow texture features are computed on this sample set, and part of it is used to train a deep neural network. Combining the shallow texture features with the deep features of the network, a semantic segmentation model for pixel-level open-pit mining area extraction is proposed using the U-net analysis model and compared with two other methods. The experimental results show that the overall accuracy of this method is 89.3% and the average accuracy is 88.78%, both better than the other two methods.

Keywords: Remote sensing shallow features · Deep learning · Semantic segmentation · Remote sensing image
1 Introduction
Mineral resources are an important energy source, and monitoring the mining environment is very important. At present, remote sensing imagery has become the best data source for mining-area environmental monitoring owing to its rich spatial and texture information, high resolution and large data volume [1, 2]. With the development of science and technology, deep learning has been widely applied to the semantic segmentation of open-pit mining areas. However,
© The Editor(s) (if applicable) and The Author(s), under exclusive license to Springer Nature Singapore Pte Ltd. 2021
M. Atiquzzaman et al. (Eds.): BDCPS 2020, AISC 1303, pp. 52–59, 2021. https://doi.org/10.1007/978-981-33-4572-0_8
because the surface of a mining area is extremely complex, traditional remote sensing classification ignores image features such as texture and structural hierarchy, and its low accuracy can no longer meet requirements [3, 4]. Shallow remote sensing features are the most basic descriptors of an image; their extraction is simple and of low complexity [5, 6]. They include spectral, texture, structural and SIFT features, among others, which are usually combined to optimize classification results [7, 8]. Combining deep learning semantic segmentation with shallow remote sensing features can effectively improve segmentation accuracy in open-pit mining areas, and research on this is of great significance for monitoring mining activities [9, 10]. To improve the classification accuracy of high-resolution remote sensing images of open-pit mining areas and enhance the practicability and reliability of the results, this paper combines shallow remote sensing features with a deep learning semantic segmentation algorithm and, using the U-net analysis model, proposes a semantic segmentation model for pixel-level open-pit mining area extraction that further improves segmentation accuracy. In the research process, manually labeled open-pit mine images serve as the sample set, Gaofen-2 optical remote sensing imagery is the data source, and comparison with two other methods verifies the reliability of the proposed method.
2 Remote Sensing Shallow Features and Deep Learning Semantic Segmentation Model
2.1 Shallow Remote Sensing Features
Spectral and texture features are selected as the shallow remote sensing features. To preserve the advantages of low computation and easy extraction, two basic statistics are chosen for the spectral features: band mean and band standard deviation. Texture features are obtained from the gray-level co-occurrence matrix of the image, selecting contrast (CT), correlation (CR), energy (E) and homogeneity (H). The shallow feature vector F is defined as:

F = (μ, σ, CT, E, H)   (1)

The relevant variables are calculated as follows:

μ = (1/N) Σ_{i,j}^{N} p(i, j)   (2)

σ = sqrt( (1/(N − 1)) Σ_{i,j}^{N} (p(i, j) − μ)² )   (3)
E = Σ_{i,j} p(i, j)²   (4)

H = Σ_{i,j} p(i, j) / (1 + |i − j|)   (5)
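Taken literally, Eqs. (2)–(5) can be computed from a normalized N × N co-occurrence matrix as below. This is an illustrative sketch only: it follows Eq. (2) exactly as written (dividing by N rather than N²), and contrast and correlation are omitted because their formulas are not given in the text.

```python
import math

def shallow_features(p):
    """Shallow texture statistics over an N x N gray-level
    co-occurrence matrix p, following Eqs. (2)-(5):
    mean, standard deviation, energy and homogeneity."""
    n = len(p)
    vals = [p[i][j] for i in range(n) for j in range(n)]
    mu = sum(vals) / n                                        # Eq. (2)
    sigma = math.sqrt(sum((v - mu) ** 2 for v in vals) / (n - 1))  # Eq. (3)
    energy = sum(v * v for v in vals)                         # Eq. (4)
    homog = sum(p[i][j] / (1 + abs(i - j))                    # Eq. (5)
                for i in range(n) for j in range(n))
    return mu, sigma, energy, homog
```

In practice a library routine (e.g. a GLCM implementation) would compute p itself from the image; here p is assumed to be given.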
2.2 Deep Learning Semantic Segmentation Model
The deep learning semantic segmentation model selected in this paper is U-net, one of the classic fully convolutional semantic segmentation networks; it captures details more accurately than SegNet. It is based on an encoder-decoder architecture with clear network modules. The U-net encoder contains nine convolution layers separated by four max-pooling down-sampling layers. The convolution layers extract high-dimensional visual features from the input image while maintaining the spatial size of each output feature map; the pooling layers down-sample the input feature maps, lowering the spatial resolution of the output so that the feature information becomes more global. However, this down-sampling comes at the cost of losing precise location information about the target, which makes the semantic segmentation of unordered, irregularly sized objects in remote sensing images with complex features difficult.
3 Sample Description and Experimental Design
(1) Sample description
This paper selects 680 sets of samples, each composed of an image and a vector; 45 sets are selected as reference samples. The images are GF-2 remote sensing images in TIF format, with a resolution of 0.8 m and a depth of 8 bits, including four channels: R, G, B and NIR (near infrared). The SHP-format vector files can be opened in ArcGIS, and their display range is the mask of the mining-area annotation. Part of the imagery is shown in Fig. 1.
(2) Experimental design
Part of the sample set is put into deep neural network training, and the semantic segmentation model proposed in this paper is tested. The samples are divided into four types; the accuracy of each category is compared, along with the overall accuracy, average accuracy and mean intersection-over-union, and two other methods are compared with the method in this paper to highlight its advantages.
Fig. 1. Sample image
4 Semantic Segmentation Test and Analysis of Open-Pit Mining Area Based on Remote Sensing Shallow Features and Deep Learning
In this paper, a deep convolutional neural network is used for the semantic segmentation task to improve accuracy. In deep learning semantic segmentation, the fully connected layers of a traditional CNN are replaced by convolution layers, so the input size can be arbitrary. A deconvolution layer up-samples the feature map of the last convolution layer to restore it to the original size, so that every pixel can be predicted from the abstract features; the channel number N of the last convolution layer is determined by the number of required classes. To make the up-sampling results more precise, the responses of the shallow and middle layers are also taken into account. During down-sampling, after the input image passes through five pooling layers the feature-map size is reduced to 1/32 of the original. The 10 convolution layers in the decoder are divided into five groups by four deconvolutions; each group integrates, via bypass connections, the output feature information from the encoder module at the corresponding abstraction level, and cooperates with the deconvolution layers to gradually recover the category and position information of the target. Because pooling causes information loss, the pooling layers are removed; however, removing them shrinks the receptive field of each layer and would reduce prediction accuracy, so the down-sampling must be removed without reducing the network's receptive field.
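The trade-off described here, enlarging the receptive field without pooling, can be illustrated with a small receptive-field calculator. This is a sketch under the standard receptive-field recurrence; the layer configurations shown are hypothetical examples, not the paper's actual network.

```python
def receptive_field(layers):
    """Receptive field of a stack of conv/pool layers, each given as
    (kernel, stride, dilation). Each layer adds (k - 1) * dilation
    input positions, scaled by the cumulative stride so far."""
    rf, jump = 1, 1
    for k, stride, dilation in layers:
        rf += (k - 1) * dilation * jump
        jump *= stride
    return rf

# Three plain 3x3 convolutions: receptive field 7.
plain = [(3, 1, 1)] * 3
# Same depth with dilations 1, 2, 4 and no down-sampling: receptive field 15.
dilated = [(3, 1, 1), (3, 1, 2), (3, 1, 4)]
print(receptive_field(plain), receptive_field(dilated))  # → 7 15
```

The dilated stack more than doubles the receptive field at the same depth and resolution, which is why dilation can replace pooling here.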
Unlike the traditional convolution kernel, which multiplies and sums the kernel and input block point by point, the convolution kernel used in this deep learning semantic segmentation samples the input block at intervals of a certain number of pixels (dilated convolution). In addition, the existing public data set is augmented by data enhancement to increase the diversity of the training data, avoid over-fitting, improve the robustness of the learned model, reduce its sensitivity to particular attributes, and improve its generalization ability. In this paper, Gaofen-2 optical remote sensing images are selected as the samples. Among these samples there are four segmentation categories, namely limestone
56
H. Xie et al.
mining area, sandstone mining area, shale mining area and quartzite mining area. The segmentation test results are as follows. 4.1
Category Accuracy
We first analyze the accuracy of each category; the results are shown in Table 1.

Table 1. Category precision of each method (%)

Method                     Limestone mining area  Sandstone mining area  Shale mining area  Quartzite mining area
Method 1                   84                     80.2                   70.3               87.8
Method 2                   81.2                   79                     70.7               88.3
The method of this paper   91.1                   89.5                   81.3               93.2
From Table 1, we can see the accuracy of each method. In method 1, the accuracy of the limestone mining area is 84%, that of the sandstone mining area is 80.2%, that of the shale mining area is 70.3%, and that of the quartzite mining area is 87.8%; the classification accuracy is thus highest for the quartzite mining area. Similarly, in method 2 the highest classification accuracy is achieved for the quartzite mining area, with an accuracy of 88.3%. In the method of this paper, the accuracy of the limestone mining area is 91.1%, that of the sandstone mining area is 89.5%, that of the shale mining area is 81.3%, and that of the quartzite mining area is 93.2%; again, the quartzite mining area has the highest classification accuracy. In addition, the experimental results show that the accuracy of the proposed method is higher than that of the other two methods, and that method 2 is the worst of the three.

4.2 Overall Accuracy and Average Accuracy
Assuming that there are K + 1 label categories to be segmented, p_ii represents the number of correctly classified pixels (true positives), and p_ij and p_ji are false positives and false negatives respectively, the overall accuracy can be expressed as follows:

OA = \frac{\sum_{i=0}^{k} p_{ii}}{\sum_{i=0}^{k} \sum_{j=0}^{k} p_{ij}}    (6)
The average accuracy is a simple improvement of the overall accuracy: the proportion of correctly classified pixels is computed for each category, and then the mean over all categories is taken. This can be expressed as:

AA = \frac{1}{K+1} \sum_{i=0}^{k} \frac{p_{ii}}{\sum_{j=0}^{k} p_{ij}}    (7)
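Both metrics in Eqs. (6) and (7) can be computed directly from a confusion matrix whose entry (i, j) counts the pixels of true class i predicted as class j. The following NumPy sketch uses a made-up 4x4 matrix for the four mining-area classes, not the paper's actual counts:

```python
import numpy as np

def overall_accuracy(cm):
    """OA, Eq. (6): correctly classified pixels over all pixels."""
    return np.trace(cm) / cm.sum()

def average_accuracy(cm):
    """AA, Eq. (7): per-class recall p_ii / sum_j p_ij, averaged over classes."""
    per_class = np.diag(cm) / cm.sum(axis=1)
    return per_class.mean()

# Example confusion matrix (invented counts): rows = ground truth, cols = prediction.
cm = np.array([[90,  5,  3,  2],
               [ 4, 85,  6,  5],
               [ 3,  8, 80,  9],
               [ 1,  4,  5, 90]], dtype=float)
print(round(overall_accuracy(cm), 4))  # 0.8625
print(round(average_accuracy(cm), 4))  # 0.8625 (equal here because every row sums to 100)
```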
Semantic Segmentation of Open Pit Mining Area
Compared with the other two methods, the overall accuracy and average accuracy are shown in Fig. 2. The overall accuracy of method 1 is 81.8% and its average accuracy is 80.58%; the overall accuracy of method 2 is 81.5% and its average accuracy is 79.8%. Both the overall accuracy and the average accuracy of method 1 are therefore better than those of method 2. The overall accuracy of the method of this paper is 89.3% and its average accuracy is 88.78%, both better than those of the other two methods.

Fig. 2. Comparison of overall accuracy and average accuracy of each method
4.3 Average Intersection Ratio (MIoU)
The average intersection ratio (mean intersection over union, MIoU) is the ratio of the intersection to the union of the ground-truth set and the predicted set; the intersection-over-union is computed for each category and then averaged. This indicator comprehensively reflects how well the target is captured, and can be expressed as:

MIoU = \frac{1}{K+1} \sum_{i=0}^{k} \frac{p_{ii}}{\sum_{j=0}^{k} p_{ij} + \sum_{j=0}^{k} p_{ji} - p_{ii}}    (8)
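Equation (8) also reduces to a few lines over a confusion matrix: each class's intersection is the diagonal entry, and its union is the row sum plus the column sum minus the diagonal entry. The matrix below is the same made-up example layout as before, not the paper's data:

```python
import numpy as np

def mean_iou(cm):
    """MIoU, Eq. (8): per-class intersection p_ii over union
    (row sum + column sum - p_ii), averaged over the K+1 classes."""
    intersection = np.diag(cm)
    union = cm.sum(axis=1) + cm.sum(axis=0) - intersection
    return (intersection / union).mean()

# Invented confusion matrix: rows = ground truth, columns = prediction.
cm = np.array([[90,  5,  3,  2],
               [ 4, 85,  6,  5],
               [ 3,  8, 80,  9],
               [ 1,  4,  5, 90]], dtype=float)
print(round(mean_iou(cm), 4))  # 0.7594
```

Note that MIoU is stricter than per-class accuracy: false positives enter the denominator through the column sum, so a class can have high recall but a much lower IoU.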
By comparing the average intersection ratio of this method with the other two methods, the intersection-over-union of each category is shown in Fig. 3. The merging ratio of method 1 is 74.6% in the limestone mining area, 70.2% in the sandstone mining area, 62.4% in the shale mining area, and 76.8% in the quartzite mining area; the intersection ratio is lowest for the shale mining area and highest for the quartzite mining area, and the average intersection ratio of method 1 is 71%. In method 2, the merging ratio of the limestone mining area is 72.6%, that of the sandstone mining area is 69.8%, that of the shale mining area is 61.8%, and that of the quartzite mining area is 78.8%; the intersection ratio of the quartzite mining area is higher than in method 1, the others are lower, and the average intersection ratio of method 2 is 70.75%, which is lower than that of method 1. In the method of this paper, the intersection ratio of the limestone mining area is 86.6%, that of the sandstone mining area is 81.3%, that of the shale mining area is 72.9%, and that of the quartzite mining area is 85.9%; the calculated average intersection ratio is 81.68%. To sum up, the intersection-over-union ratio of the method in this paper is superior to the other two methods.

Fig. 3. Comparison results of each category of each method
5 Conclusion

Remote sensing image classification technology has gone through two main stages: coarse-grained classification at the object level, that is, image classification, and fine-grained classification at the pixel level, namely semantic segmentation. Mineral resources are important energy resources, and environmental monitoring of open-pit mining areas is very important. To address the low classification accuracy of traditional remote sensing image methods, this paper proposes a semantic segmentation model for open-pit mining area extraction that combines shallow remote sensing features with deep learning, and tests the model. Experimental results show that the proposed method achieves high segmentation accuracy and clear superiority.

Acknowledgements. This paper is part of the Research on the Mining Activity Area Recognition Model with Fusion of Shallow Features and Deep Neural Network, which is sponsored by the Natural Science Foundation of Chongqing, China (cstc2019jcyj-msxmX0657).
Energy Storage Technology Development Under the Demand-Side Response: Taking the Charging Pile Energy Storage System as a Case Study

Lan Liu1(&), Molin Huo1,2, Lei Guo1,2, Zhe Zhang1,2, and Yanbo Liu3

1 State Grid (Suzhou) City and Energy Research Institute, Suzhou 215000, China
[email protected]
2 State Grid Energy Research Institute Co., Ltd., Beijing 102209, China
3 Shanghai Nengjiao Network Technology Co., Ltd., Shanghai 200092, China
Abstract. As the energy crisis worsens, the new energy industry is developing rapidly and electric vehicles are becoming popular. At the same time, the development of renewable energy raises new challenges for the operation and regulation of the power grid. A charging pile energy storage system can improve the relationship between power supply and demand. Applying energy storage technology to the charging piles of electric vehicles and optimizing them in conjunction with the power grid can achieve peak shaving and valley filling, which effectively cuts costs. These capabilities can be packaged as a demand-response management service participating in grid regulation, bringing economic benefits and in turn promoting the energy transition.

Keywords: Charging pile energy storage system · Demand side response · Electric car · Power grid
1 Background

The share of renewable energy in power generation is rising, and energy systems are shifting from a highly centralized architecture to a decentralized and flexible one. Distributed household energy storage devices and electric vehicles can provide the flexibility required for this conversion. Electric cars are being accepted by an increasing number of families worldwide. In 2014, the European Union issued Directive 2014/94/EU to promote the application of clean energy such as electricity in transportation and to ensure unified standards for electric vehicle charging. European countries also have their own policy incentives; for example, the German government has set a non-binding goal of 6 million electric vehicles by 2030. The increased application of lithium batteries has reduced their price, contributing to the promotion and application of energy storage systems.

© The Editor(s) (if applicable) and The Author(s), under exclusive license to Springer Nature Singapore Pte Ltd. 2021 M. Atiquzzaman et al. (Eds.): BDCPS 2020, AISC 1303, pp. 60–64, 2021. https://doi.org/10.1007/978-981-33-4572-0_9

Energy storage batteries can also be used in demand response: when the user's grid load is low, the battery
charges; when the grid load is large, the battery supplies power. This operation pattern can stabilize the grid load and save electricity costs. Intermittent energy storage encourages users to consume electricity when it is in surplus supply, through electricity prices, subsidies, or other incentives. Taking Germany as an example, the share of renewable energy has exceeded one third, mainly thanks to various innovative energy storage projects. In many scenarios, dedicated energy storage facilities are replaced by household appliances and electric vehicles; this indirect energy storage business model is likely to disrupt the energy sector.
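The charge-at-low-load, discharge-at-high-load pattern described above can be sketched as a simple rule-based simulation. All load values, thresholds, and battery parameters below are invented for illustration; a real system would drive the same logic from forecasts and tariffs.

```python
# Rule-based peak shaving / valley filling: the battery charges when grid
# load is below a low threshold and discharges when it is above a high one.
def peak_shave(load, capacity, power, low, high):
    soc = 0.0                 # battery state of charge (kWh)
    shaved = []
    for demand in load:
        if demand > high and soc > 0:          # peak: battery supplies power
            delta = min(power, soc, demand - high)
            soc -= delta
            shaved.append(demand - delta)
        elif demand < low and soc < capacity:  # valley: battery charges
            delta = min(power, capacity - soc, low - demand)
            soc += delta
            shaved.append(demand + delta)
        else:
            shaved.append(demand)
    return shaved

# Hypothetical hourly grid load (kW) with a night valley and an evening peak.
load = [30, 25, 20, 22, 40, 60, 80, 95, 90, 70, 50, 35]
flattened = peak_shave(load, capacity=30, power=15, low=35, high=75)
print(max(flattened), min(flattened))  # 80 22: peak cut from 95, valley raised from 20
```

The battery only shifts energy in time, so the total consumption is unchanged; what improves is the load profile seen by the grid.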
2 Charging Pile Energy Storage System

2.1 Software and Hardware Design
Electric vehicle charging piles differ from traditional gas stations and are generally installed in public places. The wide deployment of charging pile energy storage systems is of great significance to the development of smart grids: through demand-side management, grid fluctuations can be stabilized. Stationary household batteries, together with electric vehicles connected to the grid through charging piles, can not only store electricity but also serve the grid as needed. The system can arrange charging schedules and use the margin to help regulate grid stability. The core advantage of the battery is that it can absorb and release a large amount of electricity in a short time, which makes it an ideal tool for providing ancillary services. The well-known German virtual power plant operator Next Kraftwerke [1] and the Dutch smart charging supplier Jedlix [2] have already applied this two-way charging mode in pilot projects. The charging pile energy storage system can be divided into four parts: the distribution network device, the charging system, the battery charging station, and the real-time monitoring system [3]. On the charging side, the corresponding software system makes it possible to monitor the power storage data of an electric vehicle during charging in real time and to match the optimal feature matrix through different time series, such as charging capacity and charging speed, to achieve high-precision load forecasting and control strategy synchronization.

2.2 Algorithms and Application Examples
The essence of demand-side response is to maintain a balance between users' power demand and the grid's supply through price or incentive measures. The user's power consumption and feedback need to be considered in the control loop, and involving user equipment may affect the user experience; demand-side response is therefore a complex decision-making problem. Reinforcement learning is an important branch of machine learning, a product of the intersection of multiple disciplines and fields, and it specializes in solving sequential automatic decision-making problems. Reinforcement learning can therefore provide an effective optimal-control method in this scenario and find a relatively optimal strategy in terms of grid stability, the user's electricity cost, and user satisfaction. In demand-side management, from load identification to demand-side response bidding strategies and control strategies,
different artificial intelligence algorithm models have played a significant role in practice. Algorithm-driven intelligent charging technology is also the trend of future electric vehicle development and infrastructure construction. Table 1 illustrates the current status of electric vehicle development worldwide and the necessity of implementing smart charging technology.

Table 1. Development of electric vehicles and their charging methods

Description:
• The smart charging technology of electric vehicles can fully schedule the charging cycle to adapt to fluctuations in the power system, enabling the vehicle to be integrated into the power system in a user-friendly manner
• Main contributions: smart charging for electric vehicles, which can help reduce renewable energy consumption while avoiding additional load and infrastructure costs for peak demand
• Vehicle-to-grid (V2G) technology can supply power to the grid when needed, bringing greater flexibility to the system
• The potential of smart charging to adapt the charging time depends largely on the type of vehicle, the charging location, and the power and speed of the charging device

Development:
• In 2019, there were 3.81 million electric vehicles on the road in China
• In the past six years, the compound annual growth rate of sales was 57%
• The largest markets for electric vehicles are China, Germany, Norway, the United Kingdom, and the United States
• If all light vehicles were electric, they would account for 24% of total electricity demand in the United States and 10% to 15% in Europe; if smart charging technology is not adopted, peak demand will be affected
In another algorithm-related case, the Canadian company EnPowered [4] makes full use of machine learning algorithms to help large power users shift their peaks and thereby reduce electricity bills. When electricity costs are calculated, the maximum load of users is determined from several system peak moments rolled out daily, and each user's contribution to the system peak determines that user's electricity cost. Determining the electricity consumption plan in advance and avoiding the peaks is an effective way to reduce the electricity fee. In the machine learning algorithm, in addition to daily production schedules, holidays, and so on, factors such as temperature fluctuations and other users' responses to load also become inputs to the algorithm. The user's 15-min meter data and external influencing-factor data from the last three years are adopted and, after cleaning, divided into a training set and a test set. The test set is used to continuously track the prediction accuracy, and iterative training improves the model. The traditional demand-side response method is generally based on a fixed electricity price model, with peak-shaving instruction logic set for different load levels. It fails when faced with
flexible electricity pricing methods, especially calculation methods linked to the total load of the system, which is where the machine learning algorithm has its advantages.
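The reinforcement-learning approach mentioned above can be reduced to a toy example: tabular Q-learning that learns to charge when the price is low and discharge when it is high. The alternating price, two-level battery, and reward values are invented for illustration and are far simpler than any real bidding or control strategy.

```python
import random

random.seed(0)
PRICES = [1.0, 3.0]                # alternating low/high electricity price
CHARGE, IDLE, DISCHARGE = 0, 1, 2
# Q-table over (price index, battery level) states and three actions.
Q = {(p, b): [0.0, 0.0, 0.0] for p in (0, 1) for b in (0, 1)}

def step(price_idx, battery, action):
    """Tiny environment: pay the price to charge, earn it by discharging."""
    reward = 0.0
    if action == CHARGE and battery == 0:
        battery, reward = 1, -PRICES[price_idx]
    elif action == DISCHARGE and battery == 1:
        battery, reward = 0, PRICES[price_idx]
    return (1 - price_idx, battery), reward   # price alternates each step

state = (0, 0)
for _ in range(20000):
    # epsilon-greedy action selection, then a standard Q-learning update
    a = random.randrange(3) if random.random() < 0.2 else Q[state].index(max(Q[state]))
    nxt, r = step(*state, a)
    Q[state][a] += 0.5 * (r + 0.9 * max(Q[nxt]) - Q[state][a])
    state = nxt

# Learned greedy policy: charge at the low price, discharge at the high price.
print(Q[(0, 0)].index(max(Q[(0, 0)])), Q[(1, 1)].index(max(Q[(1, 1)])))
```

Real demand-response agents face stochastic prices, continuous battery levels, and user-satisfaction constraints, but the buy-low/sell-high structure of the learned policy is the same.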
3 Development of Charging Pile Energy Storage System

3.1 Movable Energy Storage Charging System
At present, fixed charging pile facilities are widely used in China, although they have many limitations: restricted resource utilization, dependence on the power infrastructure, and a limited number of charging facilities. Facing the problems of stationary electric vehicle charging systems, some scholars have designed a mobile energy storage electric vehicle charging system [5], which can charge electric vehicles more conveniently and exploit the characteristics of energy storage technology. It alleviates the unstable load during the charging process and improves equipment utilization. The charging system not only overcomes most of the shortcomings of current fixed charging systems, but also makes the charging process more accessible, and the endurance of electric vehicles can also be improved.

3.2 Photovoltaic Energy Storage Charging System
Global grid-connected solar capacity reached 580.1 GW at the end of 2019, along with 3.4 GW of off-grid PV, according to the International Renewable Energy Agency [6]. The energy transition will be further accelerated. According to the climate goals of the Paris Agreement, in order to decarbonize the power sector, renewable energy must account for 85% of total power generation by 2050; solar and wind capacity will increase from the current 900 GW to 13,000 GW, accounting for 60% of total power generation. Compared with other types of charging systems, the photovoltaic energy storage charging system is characterized by green energy. It not only retains the peak-shaving and valley-filling function of an energy storage charging system, which benefits grid operation, but also effectively utilizes green energy to relieve energy pressure. German private households are also increasingly accepting household photovoltaic energy storage: currently, about half of new residential solar photovoltaic systems are equipped with energy storage battery systems. The leading German companies in household photovoltaic energy storage are Sonnen [7] and Solarwatt [8]. For example, Sonnen plans to build the world's largest household energy storage community, connecting tens of thousands of households in a "virtual power plant" in which household photovoltaic owners can share power. Solarwatt combines its energy storage system and home energy management system with Internet technology to achieve energy consumption control among home devices.
4 Conclusion

In the context of demand response, electric vehicles have obtained a more flexible development environment and have become an important means of diversifying the energy supply and reducing dependence on oil. In the foreseeable future, the generation cost of renewable energy will gradually fall below that of traditional power generation. With the continuous development of science and technology and the expansion of the industrial chain, the competitiveness of renewable energy is improving significantly. In addition, driven by a series of laws, regulations, and market incentive plans, new-energy-related investment will become easier, and the threshold for participating in renewable energy and energy storage projects will be greatly reduced. The transformation of the energy supply model from large-scale central supply to distributed small-scale power plants will allow public participants to take the initiative, give full play to their potential, and thus stimulate more new business models.
References
1. https://www.next-kraftwerke.com
2. https://www.jedlix.com/en/
3. Wu, M., Dai, C., Deng, H., et al.: New photovoltaic energy storage power generation system based on single photovoltaic/single energy storage battery module. Power Syst. Prot.
4. https://enterprise.en-powered.com
5. Li, A., Liu, Y., Meng, Y., et al.: Energy storage type electric vehicle charging system. Technol. Mark. (11) (2019)
6. 2020 International Renewable Energy Agency
7. https://sonnen.de
8. https://www.solarwatt.com
Oracle's Application in Finance

Lin Bai(&)

China Merchants Group Postdoctoral Research Station, Shekou, Shenzhen, China
[email protected]
Abstract. This paper introduces the Oracle, a mechanism and bridge linking the blockchain world to the real world. It aims to solve the problem that the blockchain and the real world are fragmented. Furthermore, it lists several fields of financial application of the Oracle.

Keywords: Oracle · Blockchain · Financial application
1 Introduction

In 2008, Nakamoto published the white paper "Bitcoin: A Peer-to-Peer Electronic Cash System", which opened up the encrypted digital currency industry [1]. At that time, cryptocurrency was characterized as peer-to-peer electronic cash, and anyone could send a digital asset to any place in the world without hindrance [2]. In 2014, Vitalik Buterin created Ethereum and opened the era of digital currency 2.0 [3]. Ethereum is a decentralized blockchain platform with Turing-complete smart contracts, which provides a place for people to build decentralized applications (DApps). Many DApps need to interact with real-world data, so a supporting technology is needed: this is the Oracle [4]. In the world of blockchain, the Oracle has nothing to do with the Oracle database, nor does it have any divine power to predict the future. Simply put, an Oracle is an intermediary that provides whatever data and results are needed: in the world of blockchains, it is the intermediary that provides external data to the blockchain, and it can be seen as a link between the blockchain and the outside world. So why do we need a prophecy machine, or Oracle? It is because within the blockchain only on-chain data can be obtained, while real-world data outside the chain cannot be accessed, which means that the blockchain is separated from the real world [5]. The reason for this problem is that the blockchain is a closed, deterministic, self-consistent system: a consensus-based system that guarantees each node will reach the same result when verifying the same program. It must therefore rely only on information internal to the system, not on information that is uncertain in the outside world. Moreover, as more nodes join the network, each new node needs to replay all the transactions in the previous blockchain.
© The Editor(s) (if applicable) and The Author(s), under exclusive license to Springer Nature Singapore Pte Ltd. 2021 M. Atiquzzaman et al. (Eds.): BDCPS 2020, AISC 1303, pp. 65–70, 2021. https://doi.org/10.1007/978-981-33-4572-0_10

At replay time, an external quantity such as the number of clicks on a post may be completely different, and the new node cannot confirm whether the original data on the chain is correct. The consensus mechanism of the blockchain would then collapse. Therefore, the blockchain cannot open such an actively synchronized network port
for obtaining external data [6]. In order for smart contracts to act on data from the outside world, it is necessary to turn active acquisition into passive reception and synchronous access into asynchronous input. Since the chain cannot fetch off-chain data itself, it must rely on an intermediary to input the data: an off-chain data item or event is sent to the chain as a transaction via an Oracle, becoming a deterministic input that can be referenced by a smart contract. Without an Oracle, the blockchain would be isolated from the outside world, and all application scenarios that need to interact with the outside world would be impossible, which would greatly limit the development of the blockchain ecology. To sum up, the function of the Oracle is to provide real-world data to the chain; its most important requirement is to ensure that the Oracle itself does not act maliciously and does not tamper with the data.
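To make the pattern concrete, the following toy sketch (plain Python, not any real blockchain stack) shows an oracle wrapping external data in a signed transaction and an on-chain contract accepting the input only if the signature matches a key it trusts. The class names and the shared-secret HMAC scheme are illustrative assumptions; real oracles use public-key signatures verified by the contract.

```python
import hmac, hashlib, json

ORACLE_KEY = b"shared-secret-between-oracle-and-contract"  # stand-in for a real key pair

def oracle_report(payload: dict) -> dict:
    """Off-chain: wrap external data in a signed 'transaction'."""
    body = json.dumps(payload, sort_keys=True).encode()
    sig = hmac.new(ORACLE_KEY, body, hashlib.sha256).hexdigest()
    return {"body": body.decode(), "sig": sig}

class PriceContract:
    """On-chain stand-in: only accepts data carrying a valid oracle signature."""
    def __init__(self, trusted_key: bytes):
        self.trusted_key = trusted_key
        self.price = None

    def submit(self, tx: dict) -> bool:
        expected = hmac.new(self.trusted_key, tx["body"].encode(),
                            hashlib.sha256).hexdigest()
        if not hmac.compare_digest(expected, tx["sig"]):
            return False                       # tampered or untrusted data
        self.price = json.loads(tx["body"])["price"]
        return True

contract = PriceContract(ORACLE_KEY)
tx = oracle_report({"asset": "ETH/USD", "price": 1650.25})
print(contract.submit(tx), contract.price)            # True 1650.25
tampered = {"body": tx["body"].replace("1650.25", "9999"), "sig": tx["sig"]}
print(contract.submit(tampered), contract.price)      # False 1650.25
```

Once on-chain, the signed value is a deterministic input: every node verifies the same signature over the same bytes, so consensus is preserved even though the value originated off-chain.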
2 Smart Contract and Oracle

2.1 Smart Contract
The concept of a smart contract was proposed in 1995 by the jurist Nick Szabo [7]. By definition, a smart contract is a set of digitally defined commitments, including agreements on which contract participants can execute these commitments. In general, a smart contract is a contract that can be executed automatically in a computer system when certain conditions are met. Although the concept is almost as old as the Internet, smart contracts were long not applied in industry for lack of a credible execution environment. Since the birth of Bitcoin, it has been found that blockchain technology can provide a natural platform for the delivery of credit, thus providing the soil for executing smart contracts. Ethereum combined smart contracts with blockchains for the first time to create a Turing-complete public chain platform. A smart contract is not just a computer program that can be executed automatically; further, it is itself a participant in the system. It responds to received messages, it can receive and store value, and it can send out information and value. The program is like a person who can be trusted to temporarily keep assets and who always follows the rules. The model of a smart contract can be roughly depicted as in Fig. 1: as a piece of code deployed in a shared, replicated ledger, it can maintain its own state, control its own assets, and respond to received external information or assets.

2.2 Dilemma of Smart Contract
The core innovation of the blockchain is that it can transfer value without trusting third parties, so for a public chain with a sufficiently high degree of decentralization, the information conveyed on the chain can be guaranteed to be authentic. However, one problem remains unsolved. People live in the off-chain world: many empirical rules and conclusions are drawn from data obtained off-chain, and writing off-chain data onto the chain requires human involvement [8]. It is therefore inevitable that there will be a gap in the transmission of information from off-chain to on-chain.
For example, suppose we want to calculate the sum A + B of the balances of two accounts in a smart contract. Because the balances A and B are information inside the system, they are fully determined when a node performs the calculation; once the sum is computed and released, any node can easily verify whether it is correct, because it is a deterministic result. But suppose instead that a smart contract needs to do some calculation based on the number of clicks on a post on the Internet. This is information outside the system, and it is uncertain: the value C fetched from the outside world by different nodes may differ, and even the same node may obtain different results at different times. Therefore, the nodes cannot verify each other's correctness.
Fig. 1. The smart contract
For a DApp running on a global public chain, the project operator is anonymous to most players. If the result of a match is not written truthfully onto the chain, the losing investors will struggle to defend their rights; the process still requires believing in the authenticity of the blockchain and of traditional Internet data transmission [9].

2.3 Solution Introduced by Oracle
The Oracle therefore solves precisely the difficulty of how real off-chain data is linked into the execution process of the smart contract. The triggering and operation of a smart contract cannot proceed without a data source. Without a platform providing external information, smart contracts could only be applied in a few places and their usability would be greatly reduced [11]. With an Oracle system, however, smart contracts can be applied to almost every realm of the real world. Once the data is passed into the blockchain, it can be used as an input for executing the contract, and the resulting changes are disruptive for most industries [10]. This is mainly due to the huge difference between the blockchain world and the real world. A blockchain is deterministic, which means it is a reflection of specific events occurring one after another, that is, a series of ordered, causal "transactions".
However, information accessed outside the chain is not like this: it can be discontinuous, and therefore it cannot be directly trusted or used in the blockchain. The deterministic property of the blockchain gives it immutability, but reduces flexibility and scalability. Information outside the chain is non-deterministic to some extent, meaning that events do not occur in a guaranteed order, which causes problems for transparency: how to ensure that the data transmitter does not modify the data in transit, how to verify whether a single node or all nodes receive the external data synchronously, and how the external data reaches consensus in the blockchain as the same data. Since the blockchain itself is a world constructed by a decentralized system, it does not know how external information is input, how many blockchain nodes access the external information, or whether it has changed; in a centralized solution there are many such variables. The incompatibility of the blockchain world with the real world makes it necessary to provide an Oracle to make two-way communication between them possible. As a platform for providing external information, the Oracle establishes a credible data gateway between the blockchain and the Internet, breaking the shackles that prevent smart contracts from obtaining data and enabling them to access Internet data while ensuring credibility, thereby meeting the operational requirements of the contract.
3 Oracle's Application in Finance

In fact, the applicable scenarios of the Oracle are very extensive: any application that requires reliable external data input or computing power can benefit.

3.1 Gambling Game
Gambling games all require random numbers, and it is impossible to generate secure random numbers on the chain. The gambling games launched on EOS in the recent past were frequently hacked because the project developers used unsafe and incorrect ways of generating random numbers. Introducing a safe, unbiased random number from outside the chain through an Oracle is a correct solution to this type of problem.
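One common pattern for delivering such randomness is commit-reveal: the oracle first publishes a hash commitment, and only later reveals the preimage, so the number cannot be changed after bets are placed. The toy sketch below is plain Python, not any specific chain's API; the function names and 64-bit range are illustrative.

```python
import hashlib, secrets

def commit(value: int, salt: bytes) -> str:
    """Oracle publishes only this hash on-chain at commit time."""
    return hashlib.sha256(salt + value.to_bytes(32, "big")).hexdigest()

def verify_reveal(commitment: str, value: int, salt: bytes) -> bool:
    """On-chain check: the revealed (value, salt) must match the commitment."""
    return commit(value, salt) == commitment

# Round 1: the oracle commits to a random draw without revealing it.
salt = secrets.token_bytes(16)
value = secrets.randbelow(2**64)
c = commit(value, salt)

# Round 2: the oracle reveals; the contract verifies before using the number.
print(verify_reveal(c, value, salt))          # True
print(verify_reveal(c, value + 1, salt))      # False: a swapped number is rejected
```

The salt prevents an observer from brute-forcing small values out of the commitment before the reveal.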
3.2 Financial Derivatives Trading Platform
Derivatives trading platforms provide financial smart contracts that allow users to go short or long on assets; examples include the Market Protocol and the Decentralized Derivatives Association, and similar services are provided by the dYdX Protocol and others. Such smart contracts require real-time acquisition of asset prices from outside the chain to determine the gains and losses of the parties involved and to trigger closing transactions.
3.3 Stable Currency
A stable currency is a cryptocurrency with a stable exchange rate against fiat currency. It can serve as an intermediate medium for the storage and trading of value, and is therefore regarded as the holy grail of the digital currency world. The stable currency discussed here does not refer to currency issued by a centralized organization such as Tether or Digix, but rather to decentralized cryptocurrency automatically controlled by algorithms, including stablecoins backed by encrypted asset collateral, such as Dai.

3.4 Lending Platforms
Decentralized P2P lending platforms such as SALT Lending and ETHLend allow anonymous users to lend out fiat or encrypted assets against cryptographic asset collateral on the blockchain. Such applications require an oracle to provide price data when a loan is generated, to monitor the margin ratio of the encrypted collateral, and to issue a warning and trigger a clearing procedure when the margin is insufficient. A lending platform can also use an oracle to import a borrower's social, credit, and identity information in order to set different lending rates.

3.5 Insurance Applications
Etherisc is building an efficient, transparent, and low-cost decentralized insurance application platform, covering flight delay insurance, crop insurance, and more. The user pays the premium in Ether to purchase insurance, and payouts are made automatically according to the insurance agreement. An oracle can introduce external data sources and events for such applications, helping decentralized insurance products settle claims and scheduling future automatic payments.

3.6 Prediction Markets
Decentralized prediction markets such as Augur and Gnosis apply the wisdom of the crowd to predict real-world outcomes, such as presidential elections and sports results. When the outcome of a vote is disputed by users, an oracle is required to provide the true final result.
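Several of these applications (Sects. 3.2 to 3.5) share one pattern: an oracle pushes a price on-chain, and the contract compares a margin ratio against thresholds to warn or liquidate. A minimal sketch of that trigger logic follows; the numbers, field names, and threshold values are illustrative assumptions, not the parameters of any real platform.

```python
def margin_ratio(collateral_amount: float, collateral_price: float,
                 debt: float) -> float:
    """Value of posted collateral divided by outstanding debt."""
    return collateral_amount * collateral_price / debt

def on_price_update(price: float, position: dict,
                    warn_at: float = 1.5, liquidate_at: float = 1.2) -> str:
    """Called whenever the oracle delivers a fresh price."""
    ratio = margin_ratio(position["collateral"], price, position["debt"])
    if ratio < liquidate_at:
        return "liquidate"   # trigger the clearing procedure
    if ratio < warn_at:
        return "warn"        # issue a margin warning
    return "ok"

position = {"collateral": 10.0, "debt": 2000.0}
assert on_price_update(400.0, position) == "ok"         # ratio 2.0
assert on_price_update(260.0, position) == "warn"       # ratio 1.3
assert on_price_update(200.0, position) == "liquidate"  # ratio 1.0
```

The correctness of this logic depends entirely on the trustworthiness and freshness of the oracle's price feed, which is exactly why the data-gateway problem of Sect. 2 matters.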
4 Conclusion

This article briefly introduced the problems that smart contracts encounter in the blockchain world and how the Oracle solves them, and then briefly surveyed applications of the Oracle in the financial field of the blockchain world.
L. Bai
References
1. Nakamoto, S.: Bitcoin: a peer-to-peer electronic cash system (2009). https://bitcoin.org/bitcoin.pdf
2. Godsiff, P.: Bitcoin: bubble or blockchain. In: Proceedings of the 9th KES International Conference on Agent and Multi-Agent Systems: Technologies and Applications (KES-AMSTA) (2015)
3. Ethereum (2015). https://github.com/ethereum/wiki/wiki/WhitePaper
4. The promise of the blockchain: the trust machine. Economist (2018)
5. Blockchain now and tomorrow. European Commission (2019)
6. Mary, J., Matthieude, L., Vincenzo, P., et al.: Use cases for blockchain in the energy industry: opportunities of emerging business models and related risks. Comput. Ind. Eng. 137, 106002 (2019)
7. Cong, L.W., He, Z.: Blockchain disruption and smart contracts. Social Science Electronic Publishing (2018)
8. Asgaonkar, A., Krishnamachari, B.: Solving the buyer and seller's dilemma: a dual-deposit escrow smart contract for provably cheat-proof delivery and payment for a digital good without a trusted mediator (2018)
9. Benítez-Martínez, F.L., Hurtado-Torres, M.V., Romero-Frías, E.: A neural blockchain for a tokenizable e-participation model. Neurocomputing (2020)
10. Kochovski, P., Gec, S., Stankovski, V., Bajec, M., Drobintsev, P.D.: Trust management in a blockchain based fog computing platform with trustless smart oracles. Future Gener. Comput. Syst. 101, 747–759 (2019)
11. Prasad, V., Srinivasa Rao, T., Prasad Reddy, P.V.G.D.: Improvised prophecy using regularization method of machine learning algorithms on medical data. Personal. Med. Univ. 5, 32–40 (2016)
The Co-construction and Sharing Mechanism of University Library Resources Based on the Hyper-network Perspective

Xinyu Wu1,2
1 International College of NIDA, National Institute of Development Administration (NIDA), 118 Moo3, Serithai Road, Klong-Chan, Bangkapi, Bangkok 10240, Thailand
2 Library of SWU, Southwest University, No. 2 Tiansheng Road, Beibei District, Chongqing 400700, China
[email protected]
Abstract. The digital resources of academic libraries have been greatly enriched, and how to improve the co-construction and sharing (CCS) of these resources has become a major issue. Based on the hyper-network perspective, this paper studies the resource CCS mechanism of university libraries in order to better achieve the construction of a university library (UL) information resource CCS system. Through a survey of the basic construction of information resources in universities in our province, this paper found that 76.74% of respondents believe that the construction of information resources in university libraries lacks unified planning and organization, with digital resources redundantly constructed, inefficient, and wasteful, and 89.24% think that there is a lack of a unified and authoritative management organization. Corresponding countermeasures and suggestions are put forward to improve the level of resource CCS of university libraries and to ensure its sustainable development.

Keywords: Hyper-network · University library · Information resource · Co-construction and sharing
1 Introduction

The library is an important place for teachers and students to acquire knowledge, as well as an extension of education and teaching. It plays an important role in broadening knowledge, cultivating interest in learning, and establishing a correct outlook on life [1]. With the continuous expansion of college enrollment, the demand for information resources inevitably increases. However, prices of books, periodicals, and electronic resources are surging while library funding is limited: on the one hand, the demand of university teachers and students for library information resources is increasing; on the other hand, library resources are in short supply and underutilized. Research on library services is therefore urgent, in order to better serve the public and maximize social benefits [2, 3]. UL services have expanded from paper documents to digital resources. With constant breakthroughs in science and technology, the popularity of network technology and the application of electronic information resources have changed the traditional, uniform nationwide pattern of book and document provision [4, 5]. It is impossible for any single library to collect all the digital information resources in the world. Therefore, mutual cooperation, sharing, and win-win outcomes are an inevitable choice for library resource construction, and they also provide the conditions for inter-library cooperation and co-construction [6, 7]. We should strengthen research on services related to library information resources and explore how libraries can use limited information resources to provide richer services [8], break the limitations of a single library's lending function, develop diverse services, make full use of the advanced conditions of science and technology in the network age, and promote the integration and mutual complementarity of library resources [9, 10].

© The Editor(s) (if applicable) and The Author(s), under exclusive license to Springer Nature Singapore Pte Ltd. 2021
M. Atiquzzaman et al. (Eds.): BDCPS 2020, AISC 1303, pp. 71–77, 2021. https://doi.org/10.1007/978-981-33-4572-0_11
2 The Characteristics and Significance of the CCS of Resources Between Hyper-networks and University Libraries

2.1 Definition of Hyper-network

A hyper-network refers to the network of economic transactions, knowledge communication, and social relations formed through interactions among companies, internal employees, intelligence units, and external customers, partners, competitors, public organizations, and intermediary institutions. The network is connected through different subjects and different relationships via various links and mechanisms. Generally speaking, its parts are complementary and synergistic as a whole, and it has the characteristics of multiple levels, multi-dimensional flows, multiple attributes, and multiple functions.

2.2 Features and Significance of UL Information Resources CCS
The main characteristic of UL information resource sharing is that, in a network environment and according to customer demand, libraries rely on network tools such as electronic communications, computers, the Internet, and multimedia technology to describe, digitize, and effectively integrate information resources and network resources, and finally deliver the needed information resources to demanders through networked carriers, so as to serve them more quickly. With the rapid increase in the amount of information, no independent college library can meet all customer demands for information resources, so UL information resource sharing is necessary to standardize resource planning and to provide fast, comprehensive, integrated services. Universities cover more and more disciplines and majors, and the demand for information keeps increasing. However, due to factors such as obsolete infrastructure and shortages of construction funds, the development of many college and university libraries lags behind the development of their schools as a whole. At the same time, because of China's regional distribution and the uneven development of universities, promoting the integration of library resources and complementary advantages is an important development direction for modern libraries.
3 Research Methods

This article mainly adopts the literature research method, comparative analysis, a questionnaire survey, and interviews. Literature research: by reading a large amount of literature at home and abroad on the development of library information resource sharing, we broadened the search, collected data and information, and carried out further processing, sorting, analysis, and summarization. Comparative analysis: by comparing typical college information resource sharing modes at home and abroad, we analyzed their advantages and differences, summarized the problems they encountered in practice, and, combined with the actual situation in our province, put forward a sharing model favorable to UL information resources in the province. Questionnaire survey: questionnaires were issued to teachers and students of some universities and to the public in our province to understand the basic construction of information resources in the province's universities, clarify user needs, and put forward reasonable suggestions for improving reader services. Interviews: relevant leaders and staff of some university libraries in our province were consulted about the problems existing in the CCS of information resources in the province's university libraries.
4 Results Analysis and Realization of the Information Resource CCS Mechanism

4.1 Survey Results and Analysis
A total of 300 questionnaires were distributed, of which 288 valid questionnaires were recovered, a recovery rate of 96%. The collected questionnaires were sorted, summarized, and analyzed, as shown in Table 1 and Fig. 1. The survey of the basic construction of information resources in universities in our province shows that 76.74% of respondents believe the information resource construction of university libraries lacks unified planning and organization, with digital resources redundantly constructed, inefficient, and wasteful; 82.64% think that network technology lags behind and hardware and software cannot be updated and upgraded in time, so information transmission is blocked and channels are not smooth; 89.24% think that there is a lack of a unified and authoritative management organization; 74.65% think that standardization across university libraries is not uniform; and 68.40% think that college campus networks lack examination information and electronic resources for national defense education.
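The reported percentages are consistent with whole-respondent counts out of the 288 valid questionnaires, as a quick arithmetic check shows. The counts below are back-computed from the published percentages for illustration; they are not stated in the paper itself.

```python
total_distributed = 300
valid = 288
assert round(valid / total_distributed * 100) == 96  # recovery rate

# Hypothetical respondent counts implied by the reported percentages,
# each out of the 288 valid questionnaires.
counts = {
    "no unified planning": 221,        # -> 76.74%
    "lagging network technology": 238, # -> 82.64%
    "no authoritative body": 257,      # -> 89.24%
    "standards not uniform": 215,      # -> 74.65%
    "missing exam/defense resources": 197,  # -> 68.40%
}
percentages = {k: round(v / valid * 100, 2) for k, v in counts.items()}
assert percentages["no unified planning"] == 76.74
assert percentages["no authoritative body"] == 89.24
assert percentages["missing exam/defense resources"] == 68.40
```

That every percentage resolves to an integer count out of 288 suggests respondents could select multiple options, since the percentages sum to well over 100%.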
Table 1. Current situation of CCS of information resources in academic libraries

Status quo | Percentage
The construction of information resources in university libraries lacks unified planning and organization; digital resources are repeatedly constructed, inefficient, and wasteful | 76.74%
Network technology lags behind and hardware and software cannot be updated in time; information transmission is blocked and channels are not smooth | 82.64%
Lack of a unified and authoritative management organization | 89.24%
The standardization of the various university libraries is not uniform | 74.65%
College campus networks lack examination information and electronic resources for national defense education | 68.40%
Fig. 1. The current situation of CCS of information resources in academic libraries (horizontal bar chart of the percentages listed in Table 1)
4.2 Realization of the CCS Mechanism of UL Information Resources

4.2.1 Establish a Specialized Agency for Information Resource Sharing and Strengthen the Coordination Organization for the CCS of Information Resources
Given the current status of college libraries in our province, there is an urgent need to establish an authoritative coordination agency as soon as possible, with unified leadership, comprehensive planning, and overall arrangements, to build and improve the information resource CCS system in a planned, step-by-step way; otherwise, information resource CCS can only remain theoretical and will hardly become a reality. This function can be performed by the library and information work committee of colleges and universities, a professional organization responsible for the library and information work of universities and the authoritative core for contacting and coordinating work between university libraries. Higher-level departments should grant this committee certain powers and strengthen its authority, so that it can plan and lay out the information resources of each university in a unified way, formulate regulations, realize the rational allocation and unified management of resources, and establish a centralized, unified, efficient, coordinated, and standardized management system, ensuring the smooth development of this work through strong organizational guarantees. Establishing a coordinating organization for information resource sharing also helps in understanding each library's data and materials and its level of and capacity for CCS, providing guidance suited to each university's development situation, and promoting consistency in the technical conditions and standards of the digital library systems of universities across the province.

4.2.2 Strengthen the Construction of Supporting Facilities in University Libraries
The resource CCS services of university libraries are inseparable from computer systems and multimedia equipment. Whether for the collection, inquiry, retrieval, browsing, or printing of information resources, computer and multimedia equipment has become an important foundation and technical guarantee for information resource sharing projects. At present, the computer and multimedia equipment in university libraries is outdated and aging, and cannot meet users' needs for modern information technology; in addition, the relative shortage of computer equipment makes it difficult to meet teachers' and students' needs for electronic information. When university libraries initially purchase hardware, they choose based only on their own funding level and needs.
Affected by factors such as differing types of computer equipment and varying technical levels of sharing services, this has further hindered the continuous development of information sharing among universities. A unified plan is therefore needed to add equipment dedicated to information resource sharing when budgets permit, to further standardize the hardware facilities for information sharing in universities, and, at the same time, to hire experts in related fields to carry out technical debugging and ensure the smooth operation of shared services.

4.2.3 Improve the Fund Guarantee System
A large amount of funding is required for the joint construction and sharing of UL information resources. Limited by the funding sources of universities, the investment of university libraries in CCS projects is also very limited. The CCS of information resources helps university libraries share the results of cooperation, improve service capabilities, avoid repeated and ineffective purchases of information resources, and save the expenditures of each member library; it is the best choice for the sustainable development of university libraries. In the initial stage of a CCS project, large capital investment is needed for initial construction, and substantial follow-up funding is needed later to maintain the normal operation of the entire system, promote its application, and keep resources continuously updated. Government departments are important organizers and coordinators of the information resource CCS of university libraries, and the project cannot be separated from their strong support and cooperation. Government financial appropriations can ensure stable investment in CCS projects and are a powerful guarantee for the CCS of information resources in university libraries. At the same time, we must fully mobilize all sectors of society, give play to the role of social organizations, and fully mobilize their enthusiasm. The government should give full play to its organizational and leadership functions, using administrative and economic means to promote cooperation between university libraries. In addition, funds should be raised through multiple channels, calling on the general public to participate by publishing recruitment information. This can effectively make up for insufficient financial funds and promote the development of joint construction and sharing projects; it also helps colleges and universities use their own advantages to provide technical support to social groups and promote social progress.
5 Conclusion

In summary, perfecting the CCS operating mechanism of digital information resources in academic libraries is the guarantee for their development. University libraries should participate in the sharing system with their own resource advantages and actively establish and improve the CCS mechanism system of UL information resources to ensure the smooth and effective operation of the CCS system. Although the current situation of the CCS of information resources in our university libraries is not optimistic, and many theoretical and practical difficulties hinder its pace, we must fully recognize the difficulties we face, actively seek solutions, and innovate and develop the CCS model of UL information resources in practice. With the development of network and information technology, the establishment of related system standards, and the joint efforts of university libraries, the CCS of UL information resources will surely make great progress.

Acknowledgements. This work was supported by the 2020 Chongqing Higher Education Teaching Reform Project (Steering Committee) (No. 203771), "Research on the Emergency Management Mechanism and Services of University Libraries under Major Public Emergencies".
References
1. Tian, X.: Research on co-construction and sharing of higher vocational education information resources based on cloud computing. Revista de la Facultad de Ingenieria 32(11), 984–989 (2017)
2. Alonso Gaona-Garcia, P., Sanchez-Alonso, S., Fermoso, G.A.: Visual analytics of Europeana digital library for reuse in learning environments: a premier systematic study. Online Inf. Rev. 41(6), 840–859 (2017)
3. Calvert, P.: Library technology and digital resources: an introduction for support staff. Electron. Libr. 35(5), 1066 (2017)
4. Khan, A., Ahmed, S., Khan, A., et al.: The impact of digital library resources usage on engineering research productivity: an empirical evidence from Pakistan. Coll. Build. 36(2), 37–44 (2017)
5. Ahammad, N.: Open source digital library on open educational resources. Electron. Libr. 37(6), 1022–1039 (2019)
6. Liu, H.: Research on the integration of library information resources based on digital trend. Revista de la Facultad de Ingenieria 28(3), 2276–2279 (2017)
7. Fatima, A., Abbas, A., Ming, W., et al.: Analyzing the academic research trends by using university digital resources: a bibliometric study of electronic commerce in China. Univ. J. Educ. Res. 5(9), 1606–1613 (2017)
8. Mandalia, H., Parekh, S.K.: Awareness and utilization of digital library by library users of ARIBAS colleges: a study. Int. J. Indian Psychol. 4253(3), 2348–5396 (2017)
9. Rosman, M.R.M., Ismail, M.N., Masrek, M.N.: Investigating the determinant and impact of digital library engagement: a conceptual framework. J. Digit. Inf. Manage. 17(4), 214 (2019)
10. Khan, A., Masrek, M.N., Mahmood, K.: The relationship of personal innovativeness, quality of digital resources and generic usability with users' satisfaction: a Pakistani perspective. OCLC Syst. Serv. 35(1), 15–30 (2019)
The Development of Yoga Industry in China Under the Background of Big Data

Yu Fan1,2
1 Shanghai University, Shanghai, China
[email protected]
2 Chinese Academy of Social Sciences - Shanghai Research Institute of Shanghai Municipal People's Government, Shanghai, China
Abstract. With the progress of science and technology, the arrival of the era of "big data" brings unprecedented opportunities for the yoga industry. The information and technical support of big data will help the yoga industry dig out and create more value in China, and promote the clustered, scientific, and systematic development of the yoga industry in China. At the same time, the era of big data has also brought a series of challenges to the development of the yoga industry in China. Solutions to these challenges are proposed to accelerate the collaborative innovation of the yoga industry and promote its healthy and sustainable development.

Keywords: Big data · Chinese yoga · Yoga industry
1 Introduction

The McKinsey Global Institute defines big data as a data set whose scale greatly exceeds the capabilities of traditional database software tools in acquisition, storage, management, and analysis; it is characterized by massive data scale, rapid data flow, diverse data types, and low value density [1]. Yoga originated in India and was introduced to China in the 1980s in the form of Qigong. Since the beginning of the 21st century, yoga, as a form of fitness, has been deeply loved by the Chinese people, especially women. Yoga asanas help women achieve a perfect figure, showing women's confidence and charm. In addition, some types of yoga, such as maternity yoga, help women reduce psychological and physical stress during pregnancy, and postpartum repair yoga can not only prevent postpartum depression but also help women recover their figure faster. The advent of the era of big data is conducive to deeper mining of customer needs and better promotes the healthy development of China's yoga industry.
© The Editor(s) (if applicable) and The Author(s), under exclusive license to Springer Nature Singapore Pte Ltd. 2020 M. Atiquzzaman et al. (Eds.): BDCPS 2020, AISC 1303, pp. 78–84, 2020. https://doi.org/10.1007/978-981-33-4572-0_12
2 Problems Existing in the Development of China's Yoga Industry

2.1 The Theoretical System of Yoga Industry Innovation Needs to Be Improved
At present, although there is a huge demand group for yoga in China, globally speaking the development of China's yoga industry is still in its growth stage, and a complete industrial development system has not yet been formed [2]. The practice of the yoga industry should be guided by the scientific concept of development, so as to ensure its healthy direction [3]. We should clearly realize that thorough theoretical research is the basis for the healthy development of the yoga industry.

2.2 The Innovation of Yoga Industry Lacks Rational Thinking
At present, the development of the yoga industry in China is mainly based on the operation of yoga studios, and the industry has been over-commercialized. From the perspective of enterprise management, it is reasonable for merchants to maximize their own interests; however, from the perspective of caring for life, this violates yoga's goal of achieving perfect harmony between body and mind and achieving self-realization. The excessive pursuit of economic interests by most yoga studio owners, coupled with the lack of rational research on the industry's development, has led to the unhealthy phenomenon that the yoga industry values technique over health and interests over humanity.
3 The Development of Yoga Industry in China Under the Background of "Big Data"

3.1 Development Characteristics of Yoga Industry in China in the Era of Big Data

3.1.1 The Downstream Consumers Increase Steadily and the Industrial Scale Growth Slows Down
The number of yoga practitioners in China has maintained steady growth, rising from 4.0 million in 2009 to 6.3 million in 2012 and 8.8 million in 2015, and reaching 12.5 million in 2018 [4] (Fig. 1).

Fig. 1. 2009–2018 number of yoga practitioners in China (unit: 10,000). Source: Foresight Industrial Research Institute

Due to the steady growth in the number of yoga practitioners and the increase of offline yoga halls and studios, China's yoga industry maintained a high growth rate before 2017 [5]. In the past two years, with increasingly fierce competition among offline yoga institutions and limited growth in average customer spending, the growth of China's yoga industry has slowed. In 2017, the market size of yoga in China was 25.36 billion yuan, up 45.2% year-on-year; in 2018, it was about 32.21 billion yuan, up 27.0% year-on-year. It is estimated that by 2020 China's yoga market will grow to 46.76 billion yuan, with the year-on-year growth rate continuing to slow to 18.7% (Fig. 2).
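The year-on-year growth figures quoted in the text can be verified from the market sizes; the 2016 and 2019 values (17.46 and 39.39 billion yuan) are read from Fig. 2 rather than stated in the text.

```python
# Market size of China's yoga industry, in units of 100 million yuan
# (253.6 = 25.36 billion yuan, etc.).
market = {2016: 174.6, 2017: 253.6, 2018: 322.1, 2019: 393.9, 2020: 467.6}

def yoy(prev: float, cur: float) -> float:
    """Year-on-year growth rate, in percent, rounded to one decimal."""
    return round((cur - prev) / prev * 100, 1)

assert yoy(market[2016], market[2017]) == 45.2  # 2017 growth quoted in text
assert yoy(market[2017], market[2018]) == 27.0  # 2018 growth quoted in text
assert yoy(market[2019], market[2020]) == 18.7  # 2020 forecast growth
```

All three quoted rates check out against the figure's values, which supports reading the chart's bars as 2016–2020 despite the caption.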
Fig. 2. 2012–2020 market size of the yoga industry in China (unit: 100 million yuan, %); values shown: 174.6 (2016), 253.6 (2017), 322.1 (2018), 393.9 (2019), 467.6 (2020). Source: Foresight Industrial Research Institute
3.1.3 The Expansion of Midstream Yoga Studios Is Limited, and the Online Market Is Relatively Concentrated
The yoga industry chain centers on course services (mainly offline yoga courses) and also includes coach training, yoga clothing and tools, and business training. The upstream consists of yoga prop manufacturers and coach-training institutions; the midstream consists of online training institutions and offline yoga halls and studios; the downstream is the vast population practicing yoga [6]. Among them, upstream yoga clothing and auxiliary equipment appeared earlier, but because the early yoga industry developed relatively slowly and in a scattered way, demand for clothing and auxiliary equipment was relatively small. Therefore, at the present stage, enterprises are relatively dispersed, giant brands are largely absent, and the market has not yet entered perfect competition [7] (Fig. 3).
Fig. 3. Yoga industry chain: upstream (clothing and auxiliary tool production; coach training), midstream (online training apps and websites; offline yoga studios), downstream (yoga exercisers)
Source: Foresight Industrial Research Institute

Compared with the dispersed, small-scale upstream, competition in the midstream of the yoga industry is more intense. For offline yoga studios, whether large or small, the main source of revenue is the course services provided to yoga students [8]. Therefore, studios of different sizes adopt different business models to increase revenue as much as possible: large and medium-sized studios rely on scale effects to maintain profits, while small studios mainly offer private teaching courses, building a "small but beautiful" business model and winning through excellent service and coaches [9] (Fig. 4).
Scale | Site size | Average daily flow (persons) | Profit model
Large and medium-sized | 500–1200 | 150–250 | Mainly annual, quarterly, and monthly cards; group courses dominate
Small and medium-sized | 200–500 | 50–150 | Flexible in form; equal attention to group lessons and private teaching
Small | under 200 | 10–50 | Small-class group courses or private teaching

Fig. 4. Business model of offline yoga studios
Source: Foresight Industrial Research Institute

Due to limited average customer spending and rising costs, offline yoga studios have developed relatively slowly in the past two years. In particular, with the development of knowledge payment and online teaching, the yoga industry has undergone a major shakeout. As mobile-terminal consumption accounts for more than 80% of the average person's daily consumption activities, offline yoga studios have been further impacted [10] (Fig. 5).
Fig. 5. Core yoga users' course consumption structure (unit: %): offline course consumption 68.90%, online course consumption 31.10%. Source: Foresight Industrial Research Institute
3.2 Big Data Will Help Mine Customer Demand
In the past two years, both the general yoga population and the core population have grown significantly, and the differentiation among user groups is becoming ever greater [11]. Yoga apps and offline venues need to identify their target population at the early stage of course design, and conduct course design, marketing, advertising, and operations management according to that population's needs. In addition, as the population grows, course categories are also diversifying. Therefore, in the future, yoga enterprises will mine demand through big data to improve customer stickiness and reduce market-entry and marketing costs [12].
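A toy sketch of such demand mining: grouping users by their dominant channel and course category, so that course design and marketing can target each segment. The records, channel names, and course categories below are entirely hypothetical.

```python
from collections import defaultdict

# Hypothetical consumption records: (user_id, channel, course_category)
events = [
    ("u1", "online", "beginner"), ("u1", "online", "beginner"),
    ("u2", "offline", "prenatal"), ("u2", "offline", "prenatal"),
    ("u3", "online", "postpartum"), ("u3", "offline", "postpartum"),
]

def segment(events):
    """Assign each user the (channel, category) pair they consumed most,
    giving a coarse segment for targeted courses and marketing."""
    per_user = defaultdict(lambda: defaultdict(int))
    for uid, channel, category in events:
        per_user[uid][(channel, category)] += 1
    return {uid: max(prefs, key=prefs.get) for uid, prefs in per_user.items()}

segments = segment(events)
assert segments["u1"] == ("online", "beginner")
assert segments["u2"] == ("offline", "prenatal")
```

In practice the same grouping logic would run over far richer records (spending, attendance frequency, body data) and feed clustering rather than a simple argmax, but the principle of segmenting before designing courses is the same.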
4 Development Strategy of Yoga Industry in China in the Era of Big Data

4.1 Do Data Processing Well
The validity of data processing and analysis depends on the reliability of the data. Therefore, we should make full use of various instruments and equipment to ensure accuracy when collecting data, for example basic body data such as body weight, body fat rate and protein content, as well as daily exercise habits, diet and sleep status, and exercise needs. After collection, the data are quickly partitioned and processed. Through data processing and analysis, we can provide customers and operators in the yoga industry with more accurate information [12].
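The collect-validate-partition workflow described above can be sketched in Python. This is an illustrative sketch only, not the paper's implementation: the field names, valid ranges and grouping key are assumptions chosen for the example.

```python
# Illustrative sketch: validate collected member body data, then
# partition the cleaned records by stated exercise need.
# Field names and plausibility ranges are assumed, not from the paper.

def validate(record):
    """Keep only records whose basic body data fall in plausible ranges."""
    return (30 <= record["weight_kg"] <= 200
            and 3 <= record["body_fat_pct"] <= 60)

def partition_by_need(records):
    """Group validated records by the member's stated exercise need."""
    groups = {}
    for r in records:
        if validate(r):
            groups.setdefault(r["exercise_need"], []).append(r)
    return groups

members = [
    {"weight_kg": 62, "body_fat_pct": 24, "exercise_need": "weight loss"},
    {"weight_kg": 80, "body_fat_pct": 18, "exercise_need": "strength"},
    {"weight_kg": 999, "body_fat_pct": 24, "exercise_need": "weight loss"},  # faulty reading, dropped
]
groups = partition_by_need(members)
print({k: len(v) for k, v in groups.items()})  # {'weight loss': 1, 'strength': 1}
```

Validation before partitioning matters because a single faulty instrument reading (like the 999 kg record) would otherwise distort any statistics computed per group.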
4.2 Improve Information Security
The biggest problem in the era of big data is personal privacy. At present, the management of big data is imperfect and prevention measures are not in place, which leads to the disclosure of personal privacy information and ultimately to the infringement of some people's property and personal rights and interests. To better protect customers' privacy: first, the state should introduce relevant laws and regulations to protect personal privacy in legal form; second, enterprises and data-processing institutions need to strictly regulate their own behavior, recognize their responsibilities, strictly abide by professional ethics, and crack down on the sale of personal privacy; third, corresponding punishment mechanisms should be introduced to punish enterprises or data-processing institutions that leak personal information.
4.3 Introduce Professional Talents
Against the background of big data, the development of the yoga industry in China urgently needs professional data processing and analysis talents, a professional marketing team and a professional teaching team. At present, most yoga teachers in the market can only teach yoga; their overall quality and comprehensive ability fall short of market demand. While training and introducing talents, courses on big data processing and analysis can be added to undergraduate and graduate programs for yoga teachers, so as to cultivate high-quality talents who meet the development needs of the modern yoga industry.
References
1. Xuemei, D.: Discussion on common problems and innovative methods of yoga teaching in colleges and universities. Neijiang Sci. Technol. 11, 38 (2017)
2. Hou, G., Li, F., Zhou, L., Wang, Y., Wang, T.: Risk factor analysis and management status study of yoga mat. Detect. Inspection 10, 152–153 (2018)
3. Guillory: Wake Yoga: rapid extension from online to offline. South. Entrepreneur 07, 76–79 (2017)
4. Yiwen, L.: Interpretation of the development of yoga in China from the perspective of social sexism. Sports World 08, 70–71 (2018)
5. Li, L.: Research on college yoga teaching under the new situation of fitness yoga promotion. J. Xichang Univ. 02, 95–99 (2018)
6. Liu, L., Liu, C., Wu, M.: Constraints on the development of China's yoga fitness market and path choice. J. Shanghai Inst. Phys. Educ. 03, 50–54 (2018)
7. Chun, M.: Research on the curriculum of yoga in ordinary undergraduate colleges. World Phys. Educ. 04, 168 (2018)
8. Shu, W.: Analysis on innovative strategies of college yoga teaching mode. Qual. Educ. West China 02(05), 63 (2016)
9. Lingwei, W.: Current situation and prospect of yoga research in China: econometric analysis based on CSSCI journal literature. J. Kunming Metall. Coll. High. Educ. 06, 90–94 (2017)
10. Dawei, Y., Jianhua, B.: Thoughts exploration of the development of yoga culture industry in China. J. Shenyang Inst. Phys. Educ. 06, 75–76 (2013)
11. Fang, Z.: Analysis on the diversity of contemporary yoga cultural values. J. Shandong Normal Univ. 03, 155–158 (2017)
12. Xijuan, Z.: Research on the teaching status quo and countermeasures of yoga course in higher vocational colleges. Educ. Cult. 27, 149–150 (2018)
Inheritance and Development of Traditional Patterns in Computer Aided Design Environment

Yue Wang(&)

College of Art and Design, Wuhan University of Science and Technology, Wuhan 430065, Hubei, China
[email protected]
Abstract. With the development of electronic computers and related technologies, a brand-new technology, computer-aided design (CAD), is emerging worldwide. Computer-aided design refers to the process in which designers use computers and graphics equipment to carry out design work; it is the core technology transforming the traditional design model. The purpose of this article is to study the inheritance of traditional patterns in the context of computer-aided design. Taking brocade decoration as the research object and starting from the small entry point of ornamentation, the paper researches and analyzes its artistic form and cultural attribution through the research methods of folk art, hoping thereby to glimpse the culture behind traditional folk arts and crafts. The experimental results show that a design system developed from the formation principle and structure of traditional decoration is a basic system that helps designers complete traditional decoration design intuitively and efficiently, can effectively support the inheritance of traditional decoration and the realization of parametric technology, and lays a solid foundation for future personalized customization services for traditional decoration.

Keywords: Computer-aided design · System development · Traditional decorative design · Personalized customization
© The Editor(s) (if applicable) and The Author(s), under exclusive license to Springer Nature Singapore Pte Ltd. 2021
M. Atiquzzaman et al. (Eds.): BDCPS 2020, AISC 1303, pp. 85–91, 2021. https://doi.org/10.1007/978-981-33-4572-0_13

1 Introduction

Computer-aided design, as a research field intersecting with computer graphics, has received growing attention from researchers in the traditional decoration industry [1]. The application of CAD technology has penetrated all walks of life, covering the entire process of product design and production, and has effectively promoted global production cooperation [2]. Its level of development and application is one of the markers of a country's technological and industrial modernization [3]. In the new knowledge-based, innovation-driven economy, traditional decoration manufacturing faces severe challenges and opportunities, and research on and promotion of advanced manufacturing information technology is an inevitable trend [4]. At present, the field still faces many open problems: for example, there are many types of patterns, and different types have different prototype structures, design rules and production methods, so how should related models and CAD design systems be established for each type? Since its inception, computer-aided design technology has gradually become an important branch of the computer applications discipline [5]. Its emergence has freed designers from tedious design work, given full play to the inheritance of traditional patterns, and played a huge role in shortening design cycles and reducing costs. In recent years, with the rapid development and continuous application of computer and network technology, computer-aided design has been ever more widely used in machinery, construction, electronics, aerospace, shipbuilding, petrochemicals, civil engineering, metallurgy, geology, meteorology, textiles, light industry, commerce and other fields. In this context, it can also enhance the inheritance of traditional patterns [6]. However, a review of the relevant literature shows that there is currently no folk-art research on the method of Zhuang brocade decoration in China; scholars in various social disciplines have investigated Guangxi Zhuang brocade and related topics separately. Although brocade research exists, most of it focuses on art, art history and ethnology; research on the decoration itself is rarely seen, and most of it is attached to broader studies, focuses on art, and dwells on the imagery of the patterns. In general, among academic papers on Zhuang brocade there are many results on its patterns and inheritance, which is a hot issue.
However, the discussion of the cultural connotation of the Zhuang brocade pattern is not deep enough, and there is much room for further research. Research on the weaving technology of Zhuang brocade is also very weak, there is still a big gap in the inheritance and development of this intangible cultural heritage, and further research is needed. The innovation of this article is to further study computer-aided design drawing and production based on the scientific nature of everyday traditional decoration. It analyzes both the micro-level design research on computer-aided design and the everyday traditional decoration design and production process, and the macro-level leading and promoting role of CAD in the development of traditional decoration. The paper further verifies the assertion that science and technology are the primary productive forces, and thereby establishes and affirms the significance of computer-aided design for the large-scale development of China's traditional ornamentation.
2 Method

2.1 Set the Drawing Area
As a powerful drawing package, the computer-aided design software AutoCAD has been widely used in industrial design, but it has not been sufficiently applied in the traditional field of decorative design [7]. This subject is studied by combining example operations with pictures and text to explain the advantages and methods of AutoCAD drawing: deducing the drawing of complex decorative products, assisting in the production of standard decorative patterns, and making traditional decorative design drawing standardized and scientific. Comparing traditional hand-painted decorative design drawing with computer-aided design drawing highlights the advantages of the latter and clarifies the significance of computer-aided drawing [8]. In order to output design graphics accurately as required, before starting to draw you should use AutoCAD's Use a Wizard function to set the drawing parameters correctly, the most important of which is the drawing-area size. The basis for setting the drawing area is the size and scale factor of the drawing [9]. The general rule is: drawing-area size (length × width) = paper size (length × width) × scale factor.
2.2 Determination of Drawing Area Values
The drawing area is formally set through the corresponding dialog box. If it is not set correctly at the beginning, it can be remedied afterwards by executing the LIMITS command in AutoCAD:

Command: limits
Reset Model space limits:
Specify lower left corner or [ON/OFF]: (Enter)
Specify upper right corner: 597,420
In the above operation, the bold part indicates user input from the keyboard, the rest is content automatically echoed by the system, and (Enter) indicates pressing the Enter key or the right mouse button once [10]. Once the drawing area is set, AutoCAD always draws at a 1:1 ratio; that is, the user always enters data according to the actual size of the object, without having to consider the drawing scale or size units.
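The sizing rule in Sect. 2.1 (drawing-area size = paper size × scale factor) can be sketched as a small helper. This is an illustrative sketch; the A2 paper dimensions used below (594 × 420 mm) are an assumption for the example, not taken from the paper.

```python
# Sketch of the rule: drawing-area size = paper size x scale factor.
# Paper dimensions here are ISO A2 in millimetres (an assumed example).

def drawing_area(paper_w, paper_h, scale_factor):
    """Return the drawing-area size (width, height) in mm."""
    return paper_w * scale_factor, paper_h * scale_factor

# At scale factor 1 the drawing area equals the paper size;
# at scale factor 2 the limits double in each direction.
print(drawing_area(594, 420, 1))  # (594, 420)
print(drawing_area(594, 420, 2))  # (1188, 840)
```

Because AutoCAD then draws at 1:1 within these limits, the scale factor is applied once here and never again during drawing.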
2.3 The Realization Process of Decision Algorithm
Input: analysis data samples
Output: decision tree

1. Expected information needed to classify the data sample set S (m classes, pi the proportion of samples in class i):

I(s1, s2, …, sm) = −Σ_{i=1..m} pi log2(pi)   (1)

2. Information gain ratio of each attribute A:
a. Amount of information for each possible value of the attribute, weighted over its v values:

E(A) = Σ_{j=1..v} ((s1j + … + smj)/s) · I(s1j, …, smj)   (2)

b. Calculate the information entropy E(A) of the attribute.
c. Calculate the information gain of the attribute:

Gain(A) = I(s1, s2, …, sm) − E(A)   (3)

d. Calculate the split information of the attribute:

SplitInfo(S, A) = −Σ_{i=1..v} (|Si|/|S|) log2(|Si|/|S|)   (4)

e. Calculate the information gain ratio of the attribute:

GainRatio(S, A) = Gain(A)/SplitInfo(S, A)   (5)

3. Repeat step 2 to calculate the information gain ratio of all attributes.
4. Compare the gain ratios of all attributes, determine the root node, and lead out each branch.
5. Rebuild a data sample set for each branch and repeat steps 2 to 4 until all classifications are traversed [11].
6. Return the decision tree model.
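The gain-ratio computation in steps a–e is the standard C4.5 attribute-selection criterion, and a minimal sketch of it fits in a few lines of Python. The toy data set at the bottom is an assumption for illustration only.

```python
# Minimal sketch of the C4.5 gain-ratio computation (steps a-e).
import math
from collections import Counter

def entropy(labels):
    """Expected information I(s1..sm) of a labelled sample set."""
    n = len(labels)
    return -sum((c / n) * math.log2(c / n) for c in Counter(labels).values())

def gain_ratio(rows, labels, attr):
    """GainRatio(S, A) = Gain(A) / SplitInfo(S, A)."""
    n = len(rows)
    base = entropy(labels)                      # I(s1..sm)
    e_a, split = 0.0, 0.0
    for v in set(r[attr] for r in rows):
        subset = [labels[i] for i, r in enumerate(rows) if r[attr] == v]
        w = len(subset) / n
        e_a += w * entropy(subset)              # E(A)
        split -= w * math.log2(w)               # SplitInfo(S, A)
    gain = base - e_a                           # Gain(A)
    return gain / split if split else 0.0

# Toy example (assumed data): 'size' separates the classes perfectly,
# so its gain ratio is 1.0.
rows = [{"size": "large"}, {"size": "large"}, {"size": "small"}, {"size": "small"}]
labels = ["yes", "yes", "no", "no"]
print(gain_ratio(rows, labels, "size"))  # 1.0
```

The algorithm then picks the attribute with the highest gain ratio as the root, splits the samples per value, and recurses, exactly as steps 3-6 describe.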
3 Experiment

3.1 Experimental Objects
The research object of this thesis is the Zhuang brocade decoration. The research scope is limited to Zhuang brocade, and the fundamental foothold is ultimately the pattern decoration itself. First of all, the brocades here mainly include brocades of the Zhuang nationality in Guangxi, brocades of the Zhuang nationality outside Guangxi, and brocades of other nationalities with Zhuang brocade characteristics. Secondly, the research concerns the decoration of Zhuang brocade, not its other aspects. In other words, the research object of this article is the decoration in the folk art of Zhuang brocade in and outside Guangxi. Through computer-aided design, the inheritance of the brocade decoration is carried on efficiently, and the drawing of the traditional decoration is preserved through AutoCAD drawing.
3.2 Experimental Method
1. Graphic-and-text complementation: the subject is studied through example operations combined with pictures and text, explaining the advantages and methods of AutoCAD drawing, deducing traditional decorative product drawing, assisting in the production of decorative patterns, and making everyday decorative design drawing standardized and scientific.
2. Comparison and induction: the differences between traditional hand-painted decorative design drawing and computer-aided design drawing are compared, so as to bring out the advantages of computer-aided drawing and study its significance for decorative design.
3. Design practice: the practice of everyday decoration design drawing yields theoretical experience, and that theoretical experience in turn guides practice, so that theory and practice reinforce each other and deepen the research theme.
4 Discussion

4.1 Traditional Decoration Under Computer-Aided Design
This covers the specific operating methods of computer-aided design software such as Photoshop and AI, the various processing effects in the software, and the effects and inspiration that such software brings to traditional hand-painted design. Through a series of comparative studies, we conclude that computer-aided design not only provides a practical theoretical basis for traditional decoration design, but also provides new technical forms and methods of decoration. Through this research, traditional ornament designers can be nurtured in creative thinking and the creativity of the human brain can be exerted more fully, with a positive impact on China's traditional decorative design industry. At present, apparel with traditional ornaments on China's market suffers from old-fashioned decoration, few color varieties and little innovation in packaging, and cannot meet people's aesthetic needs. From this point of view, there is a big gap between our product design and people's aesthetic needs in terms of innovative design and variety. Computer-aided traditional decoration design is in line with the development requirements of decoration design and can solve some of these problems well.
4.2 Comparison Between Traditional Hand-Painted Ornaments and Computer-Aided Design Drawing Ornaments
Traditional hand-painted decorative design drawing shapes the structure through outlines, expressing the designer's ideas through the width, height and proportion of lines. Traditional expressive design methods include plan drawings, project drawings and engineering dimension drawings, followed by effect models; this takes a relatively long time, costs more, and is difficult to modify once production is complete. We now use computer two-dimensional graphics technology to assist in the design of traditional decorative products. The work is realized through mouse clicks and keyboard operations, including copying and modification, and it is clean, simple and efficient. In the design-drawing stage, design can completely depart from the traditional paper-and-pen mode of hand drawing, realizing "paperless" design. Computer-aided design reduces the designer's workload, leaving more time to think, judge and complete the design task itself. A comparison of the relevant data for traditional hand-drawn patterns and computer-aided design drawing is shown in Fig. 1:
Fig. 1. Comparison of traditional hand-painted decoration and computer-aided design drawing decoration (the chart compares the two approaches on time, cost, difficulty index and designer workload, on a 0–8 scale)
4.3 Survey on the National Designer Experience Market in 2020
The requirements for designers who draw traditional patterns by hand are very demanding. Such designers need comprehensive qualities: in addition to professional knowledge and skills, they must continuously improve their aesthetic ability and possess extensive knowledge and experience in understanding and drawing ancient traditional patterns. However, given the development needs of contemporary society, designers with the above capabilities are few and far between, as can be seen intuitively from the following table. The specific data are shown in Table 1:
Table 1. Proportion of designers' experience classification in the country in 2020

Designer experience level | Proportion of national designers in 2020
Experienced | 15.67%
Average experience | 33.56%
Limited experience | 40.35%
Inexperienced | 12.67%
5 Conclusion

Through the research in this paper, we understand the national distribution of designer experience, which intuitively reflects that the inheritance of traditional patterns in contemporary society still has many deficiencies; further research and development with computer-aided design is needed to make that inheritance more substantial. Although the computer is a product of human thought, for any specific person the computer's capability in some respects is still beyond reach, because what it shows is not what one person's intelligence can achieve but the crystallization of a group's wisdom. If people can freely manipulate the various functions provided by computer programs, grasp the opportunity to create, and combine them with their own wisdom, cultural heritage and aesthetic knowledge, the energy contained in these crystals is bound to be released. But computer-aided design is not perfect; it cannot completely replace traditional design expression and composition methods. It is an expansion and extension of the traditional design method.

Acknowledgements. This work was supported by the Youth Backbone Project of Wuhan University of Science and Technology, 2016xz041, 25011401, 250089.
References
1. Wang, E.M.: The inheritance and improvement of "traditional ethics" in school education: an extended discussion centered on the "ethics education" in Korean middle schools in the 1970s. Taiwan J. East Asian Stud. 15(1), 115–158 (2018)
2. Mai, W.S., et al.: Implications of gene inheritance patterns on the heterosis of abdominal fat deposition in chickens. Genes 10(10), 824 (2019)
3. Constandt, B., Thibaut, E., Bosscher, V.D., et al.: Exercising in times of lockdown: an analysis of the impact of COVID-19 on levels and patterns of exercise among adults in Belgium. Int. J. Environ. Res. Public Health 17(11) (2020)
4. Wang, P., Yu, G., Wu, X., et al.: Spreading patterns of malicious information on single-lane platooned traffic in a connected environment. 34(3), 248–265 (2019)
5. Guo, F., Zhang, C., Zhang, H., et al.: Effects of hillside fields managing patterns on the vegetation and soil environment in the Loess Plateau, China. Bangladesh J. Bot. 47(3), 785–794 (2018)
6. Do, T.: The research of environmental behaviors and usage patterns based on triangulation analysis: a case study of 29-3 Park in Da Nang City, Vietnam. World Wide Web 8(4) (2018)
7. Adonteng-Kissi, O.: Potential conflict between the rights of the child and parental expectations in traditional child-rearing patterns: resolving the tension. Child Youth Serv. Rev. 109, 104752 (2020)
8. Li, W.C., Zhang, J., Minh, T.L., et al.: Visual scan patterns reflect to human-computer interactions on processing different types of messages in the flight deck. Int. J. Ind. Ergon. 72, 54–60 (2019)
9. Zeng, Q., Jia, P., Wang, Y., et al.: The local environment regulates biogeographic patterns of soil fungal communities on the Loess Plateau. CATENA 183, 104220 (2019)
10. Mcnulty, G.F.: Some variations on a theme of Irina Mel'nichuk concerning the avoidability of patterns in strings of symbols. Electron. J. Comb. 25(2) (2018)
11. Morgan, L., Wren, Y.E.: A systematic review of the literature on early vocalizations and babbling patterns in young children. Commun. Disorders Q., 152574011876021 (2018)
Sybil Attack Detection Algorithm for Internet of Vehicles Security

Rongxia Wang(&)

Guangzhou Nanyang Polytechnic College, Guangzhou, China
[email protected]
Abstract. In the context of traffic congestion and traffic accidents that seriously threaten people's safety, the Internet of Vehicles, through connections between vehicles, enables vehicles to obtain traffic environment information effectively, and is considered an effective way to solve traffic safety problems. However, as a complex network, the Internet of Vehicles carries security risks. This paper focuses on a Sybil attack detection algorithm for Internet of Vehicles security, aiming to make full use of vehicle perception data, communication data, user data and other data in the Internet of Vehicles to eliminate specific security risks and curb malicious behavior. Specifically, a data-driven security architecture for the Internet of Vehicles is first proposed. Secondly, targeting possible Sybil attacks in ride-hailing applications, Sybil attacks in the Internet of Vehicles are divided into three levels according to attackers' capabilities, and different detection methods are designed and modeled according to their characteristics. Finally, a solution based on mobile behavior analysis is proposed.

Keywords: Internet of vehicles safety · Sybil attack detection · Trust management · Access control
1 Introduction

At present, with the deep integration of the automobile industry and information and communication technology, automobile products are developing towards intelligence and network connection [1, 2]. The information transmission process of the Internet of Vehicles involves massive information transfer across multiple systems. A safety incident in the Internet of Vehicles can seriously affect personal property and personal safety, and even social security and national security [3]. In recent years, safety incidents related to Internet of Vehicles information security have been frequent, such as the large-scale remote repair caused by a flaw in BMW ConnectedDrive and the recall of millions of Jeep vehicles caused by remote cracking of the information system [4]. Against the background of China's industrial upgrading, transformation and supply-side reform, Internet of Vehicles safety is one of the key issues in the transformation and upgrading of the automobile industry; only when the Internet of Vehicles is safe can the sound development of automobile industry informatization be ensured [5].
© The Editor(s) (if applicable) and The Author(s), under exclusive license to Springer Nature Singapore Pte Ltd. 2021 M. Atiquzzaman et al. (Eds.): BDCPS 2020, AISC 1303, pp. 92–98, 2021. https://doi.org/10.1007/978-981-33-4572-0_14
Since most social networks are based on a graph structure, Sybil attackers often start from IP addresses, for example by adding edges between malicious nodes and legitimate nodes in the graph, thus increasing the damage to normal network communication. Therefore, most current anti-Sybil schemes bind identities to IPs based on a limited node edge number or ID (identity) [6]. However, dynamic, infrastructure-less self-organizing networks (MANETs) are often used in personal communication and large commercial dynamic networks [7, 8]. VANETs, developed on the basis of MANETs, are mainly composed of vehicles and roadside infrastructure. Sybil attacks have not caused severe damage in VANETs because vehicles are usually equipped with GPS and the network is assisted by infrastructure [9]. Applying the existing security mechanisms of wireless sensor networks to resist Sybil attacks is a hot and difficult research topic in current Internet of Vehicles security [10]. Therefore, this paper focuses on the Sybil attack in Internet of Vehicles security. The main idea is to study a Sybil attack detection algorithm for Internet of Vehicles security based on existing key management protocols and schemes.
2 Concept and Background

2.1 Sybil Attack Detection for Internet of Vehicles Security
In the Internet of Things and social networks, Sybil is a common security threat, usually manifested as attackers disguising themselves with or stealing false identities and damaging the normal operation of the network. For example, in some social platform groups, an attacker controls a large number of accounts to publish advertisements or spread rumors. With the continuous development of the Internet of Vehicles, Sybil attacks on Internet of Vehicles safety have gradually attracted more attention. Some scholars have studied Sybil attacks on online ride-hailing platforms, believing that there are a large number of "false crashes" in taxis, which seriously threaten passengers' travel and have become a major hidden danger for Internet of Vehicles safety.
2.2 Problem Establishment
The problem formulation includes a system model and an attack model. As shown in Table 1, the mobile-behavior-based Sybil attack detection system model for the Internet of Vehicles mainly includes three types of nodes: cloud server, base station and vehicle. The cloud server has strong computing and storage capabilities; its main function is to ensure the normal operation of Internet of Vehicles applications and use various data to solve possible security problems in the applications. The main functions of the base station include communication, edge storage and edge computing. The vehicle terminal is an intelligent system installed on the vehicle, mainly including a perception module, a communication module, a computing module, an authentication module, and so on.
Table 1. Model of Sybil attack detection system for Internet of Vehicles

Node: Cloud server
Definition: The cloud server is considered a fully trusted node that cannot be hacked or tampered with by attackers.
Interpretation:
1. Sufficient storage and computing capacity; carries the main task of attack detection.
2. Effectively stores the moving-trajectory data of all vehicles.
3. Builds data mining, machine learning and other models for efficient data analysis.

Node: Base station
Definition: The base station is the communication interface between the vehicle and the server.
Interpretation:
1. Sends the real-time location information uploaded by the vehicle to the server.
2. Issues location certificates to vehicles to cope with malicious nodes that deliberately report false locations.

Node: Vehicle
Definition: The vehicle is equipped with on-board positioning modules such as BeiDou and GPS.
Interpretation:
1. Obtains its current position coordinates in real time by receiving satellite signals.
2. Uses the on-board communication module to communicate with the base station.
3. Uploads its location in real time and downloads location certificates.
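Table 1 assigns the base station the job of issuing location certificates that the server can later check. The paper does not specify the certificate format, so the sketch below is an assumption: a certificate is modeled as an HMAC over the vehicle id, position and timestamp under a key shared by base station and server, which is enough to catch a vehicle that later claims a different position.

```python
# Illustrative sketch (assumed scheme, not the paper's protocol):
# a location certificate as an HMAC over (vehicle id, position, time).
import hashlib
import hmac

STATION_KEY = b"shared-secret"  # assumed key shared by base station and server

def issue_certificate(vehicle_id, lat, lon, ts):
    """Base station signs the reported position at time ts."""
    msg = f"{vehicle_id}|{lat:.6f}|{lon:.6f}|{ts}".encode()
    return hmac.new(STATION_KEY, msg, hashlib.sha256).hexdigest()

def verify_certificate(vehicle_id, lat, lon, ts, cert):
    """Server recomputes the HMAC; a mismatch means a forged position."""
    expected = issue_certificate(vehicle_id, lat, lon, ts)
    return hmac.compare_digest(expected, cert)

cert = issue_certificate("V42", 23.1291, 113.2644, 1600000000)
print(verify_certificate("V42", 23.1291, 113.2644, 1600000000, cert))  # True
print(verify_certificate("V42", 24.000000, 113.2644, 1600000000, cert))  # False
```

`hmac.compare_digest` is used instead of `==` so that certificate checks do not leak timing information to an attacker probing forged values.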
In terms of attack capability, Sybil attackers in the Internet of Vehicles are divided into three levels. The first level is the common attacker; the second level is an attacker who can forge a position; the third level is the colluding attacker. First- and second-level attackers hold an identity set IDs: {ID1, ID2, …, IDn}, which the first-level attacker Si obtains in advance. The third level involves sharing identity information, such as password accounts, within a small group.
2.3 LEACH Protocol and RSSI-Based Sybil Attack Intrusion Detection Mechanism
The data transmission phase is mainly responsible for data transmission, fusion and related techniques. In the cluster-head establishment stage, each node decides whether to act as the cluster head of the round by a probabilistic selection method. A number between 0 and 1, randomly generated by a node N, is the flag value; when the flag value is less than a threshold p(r), node N acts as the cluster head of the round. The threshold is defined as follows:
p(r) = p / (1 − p · (r mod (1/p))),  if n ∈ G;  p(r) = 0, otherwise   (1)

where p is the desired proportion of cluster heads, r is the current round, and G is the set of nodes that have not yet served as cluster head in the current cycle.
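The threshold in Eq. (1) can be sketched directly. This is an illustrative sketch of the standard LEACH election threshold; the value p = 0.1 below is an assumed example.

```python
# Sketch of Eq. (1): the LEACH cluster-head election threshold p(r).

def leach_threshold(p, r, in_G):
    """Threshold for a node in round r.

    p: desired fraction of cluster heads per round;
    in_G: True if the node has not yet been a cluster head this cycle.
    """
    if not in_G:
        return 0.0
    return p / (1 - p * (r % round(1 / p)))

# A node becomes cluster head when its random flag value in [0, 1)
# falls below the threshold. With p = 0.1 the threshold rises from
# 0.1 in round 0 to 1.0 in round 9, so every node serves once per cycle.
print(round(leach_threshold(0.1, 0, True), 3))  # 0.1
print(round(leach_threshold(0.1, 9, True), 3))  # 1.0
```

The rising threshold is what rotates the energy-hungry cluster-head role evenly across nodes, which is why the round index r appears inside the formula at all.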
All nodes in the network are homogeneous. Let RSSIt denote the received signal strength; once the network topology is formed it does not change, and RSSIt = 10 lg Prec. As shown in the following formula, although the attenuation of the signal is uncertain, the ratio between the signal strengths that the cluster head receives from node i and node j is stable:

RSSIi / RSSIj = (dj / di)^n   (2)

where di and dj are the distances from node i and node j to the cluster head and n is the path-loss exponent.
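Equation (2) implies a simple check: two "identities" transmitted from the same physical radio sit at the same distance from the cluster head, so their received-strength ratio stays close to 1 across observations. The sketch below illustrates that idea with linear received-power values; the tolerance and the sample readings are assumptions for the example.

```python
# Illustrative sketch of an RSSI-ratio check based on Eq. (2):
# identities whose pairwise RSSI ratio stays ~1 over repeated
# observations likely originate from one physical (Sybil) node.

def rssi_ratio(rssi_i, rssi_j):
    return rssi_i / rssi_j

def looks_like_sybil(pairs, tol=0.05):
    """pairs: [(rssi_i, rssi_j), ...] observed over time for two identities."""
    return all(abs(rssi_ratio(a, b) - 1.0) <= tol for a, b in pairs)

# Two identities emitted by the same radio: ratio stays near 1.
print(looks_like_sybil([(5.0, 5.0), (4.2, 4.1), (6.0, 5.9)]))  # True
# Two genuinely distinct nodes at different distances: ratio far from 1.
print(looks_like_sybil([(8.0, 2.0)]))  # False
```

Because the ratio cancels the uncertain attenuation factor, the check works even though any single RSSI reading is noisy; only the ratio's stability is tested.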
3 Attack Detection Algorithm for Internet of Vehicles Sybil

The attack detection algorithm for Internet of Vehicles (IoV) Sybil attacks addresses the three attack levels above and gives solutions for each. First, real-time location information of vehicles is collected, and the cloud server is used to mine the vehicles' movement characteristics and build a model of vehicle trajectory similarity; a machine learning classification model then finds pairs of vehicles with highly similar moving behavior to detect first-level attacks. Secondly, for second-level attackers, the base station first issues location certificates to vehicles periodically, and the server then evaluates the credibility of each vehicle's uploaded location according to the certificate information, completing the detection of false locations. Finally, a graph model describing the intimacy between vehicles is built from historical records of vehicles appearing together near the same base station at the same time, and a community discovery algorithm is used to detect conspirators in the network. These three methods together counter Sybil attacks in the IoV and ensure the security of the Internet of Vehicles information system.
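The first step, finding pairs of identities with highly similar movement, can be sketched with a deliberately simplified similarity measure. The paper's trajectory-similarity model is not specified here, so this sketch is an assumption: trajectories are reduced to sets of visited base-station ids and compared with Jaccard similarity.

```python
# Illustrative sketch (assumed representation): trajectories as lists of
# base-station ids; Jaccard similarity stands in for the paper's
# trajectory-similarity model. Near-identical pairs are flagged as
# candidate Sybil identities for the classifier to examine.

def jaccard(a, b):
    a, b = set(a), set(b)
    return len(a & b) / len(a | b) if a | b else 0.0

def candidate_sybil_pairs(trajectories, threshold=0.9):
    ids = sorted(trajectories)
    return [(u, v)
            for i, u in enumerate(ids) for v in ids[i + 1:]
            if jaccard(trajectories[u], trajectories[v]) >= threshold]

trajectories = {
    "ID1": ["BS1", "BS2", "BS3", "BS4"],
    "ID2": ["BS1", "BS2", "BS3", "BS4"],  # moves exactly with ID1
    "ID3": ["BS7", "BS8"],
}
print(candidate_sybil_pairs(trajectories))  # [('ID1', 'ID2')]
```

A real system would add timing (two fake identities also appear at the same stations at the same moments), which is exactly the co-occurrence information the third, community-discovery step exploits.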
4 Performance Evaluation Analysis and Countermeasure Research

4.1 Configuration of Simulation Test Parameters
This section intends to verify the effectiveness and feasibility of the algorithm performance. Matlab is used to build the simulation platform, and the machine learning algorithm is combined with the Python community toolbox igraph. The main parameters are shown in Table 2: The vehicle trajectory data in the detection is from the GAIA open source data set, which is the real data provided by a car-hailing platform and forms the moving
R. Wang
Table 2. Simulation parameter configuration of the Sybil detection algorithm based on mobile behavior

Parameter | Value
Number of normal vehicles | 100
Total number of Sybil identities per attacker | 20
TW | 180 s
Vmax | 85 km/h
a, ai | 0.5
c | 0.005
T | 50 s
Number of conspirators | 12
trajectory through real-time reporting of the vehicle location. In addition, each vehicle reports information such as its identity, a timestamp, and its latitude and longitude. Three main indicators were used for testing: TPR (true positive rate), TNR (true negative rate) and ACC (accuracy).
4.2 Performance Evaluation and Analysis
1) Sybil test based on machine learning
The detection accuracy measured by ACC is shown in Fig. 1, where the horizontal axis is the attacker's attack intensity, defined as the proportion of identities used by the attacker in each attack to all the identities he owns. When the total number of false identities owned by the attacker is fixed, the number of false identities used in each attack grows with the attack intensity. As this ratio decreases, the difference between attackers and non-attackers shrinks, but so does the attack's effectiveness; as it increases, the difference between attackers and non-attackers grows, making detection easier and raising the probability that the attacker is detected. In addition, the ACC achieved by the DT algorithm is higher than that of SVM and NBC, because NBC treats the components of the feature vector independently and cannot exploit the correlation between features.
2) Collusion detection based on community discovery
The performance evaluation of the collusion-attack detection model based on community discovery mainly uses ACC and TPR; the simulation results are shown in Fig. 2. As above, the horizontal axis is the attacker's attack intensity, here defined as the ratio of the number of accomplices the attacker helps to obtain certificates to the total number of members of the conspiratorial community. As the figure shows, when this value increases, both ACC and TPR gradually increase: the probability of the attacker being detected rises. The reason is similar to the Sybil test
[Figure: line chart comparing the DT, SVM and NB classifiers; vertical axis ACC (approx. 0.965–1.005), horizontal axis attack power (0.0–0.7).]
Fig. 1. The first-level Sybil detection algorithm performance simulation
based on machine learning: when the attacker's attack intensity increases, the ties among the accomplices become closer, making them easier to detect.
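The classifier comparison discussed above can be illustrated with a small stand-in experiment. The paper compares NBC, SVM and DT on trajectory features; here a hand-rolled Gaussian naive Bayes and a one-split decision stump (a minimal decision tree) are compared on a single synthetic feature, a hypothetical "movement similarity score" of a vehicle pair. The data, the feature and both models are illustrative assumptions only.

```python
import random
import statistics

# Synthetic data: label 1 = Sybil pair (high similarity score),
# label 0 = normal pair. Distributions are assumed for illustration.
random.seed(0)
data = [(random.gauss(0.8, 0.10), 1) for _ in range(200)] + \
       [(random.gauss(0.3, 0.15), 0) for _ in range(200)]
random.shuffle(data)
train, test = data[:300], data[300:]

def gaussian_nb(train):
    """Fit per-class mean/stdev; classify by the larger Gaussian likelihood."""
    stats = {}
    for c in (0, 1):
        xs = [x for x, y in train if y == c]
        stats[c] = statistics.NormalDist(statistics.mean(xs), statistics.stdev(xs))
    return lambda x: max((0, 1), key=lambda c: stats[c].pdf(x))

def stump(train):
    """One-split 'decision tree': pick the threshold with best train accuracy."""
    _, best_t = max((sum((x > t) == bool(y) for x, y in train), t)
                    for t in [i / 100 for i in range(101)])
    return lambda x: int(x > best_t)

for name, model in (("naive Bayes", gaussian_nb(train)),
                    ("decision stump", stump(train))):
    acc = sum(model(x) == y for x, y in test) / len(test)
    print(f"{name}: ACC = {acc:.3f}")
```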
[Figure: curves for ACC and TPR; vertical axis rate (0.000–1.200), horizontal axis attack power (0.0–0.7).]
Fig. 2. The third-level Sybil detection algorithm performance simulation
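The community-discovery step behind Fig. 2 can be sketched with the standard library. The paper uses a community-discovery algorithm (via igraph); as a minimal stand-in, the sketch below links vehicles whose co-appearance count near a base station passes a threshold and reports the connected components of that graph as suspect communities. The records and the threshold are illustrative assumptions.

```python
from collections import Counter
from itertools import combinations

def suspect_communities(records, min_cooccur=3):
    """records: list of sets of vehicle ids seen together at one (station, time).
    Returns groups of vehicles linked by frequent co-appearance."""
    edges = Counter()
    for group in records:
        edges.update(frozenset(p) for p in combinations(sorted(group), 2))
    # union-find over edges whose co-appearance count passes the threshold
    parent = {}
    def find(v):
        parent.setdefault(v, v)
        while parent[v] != v:
            parent[v] = parent[parent[v]]   # path halving
            v = parent[v]
        return v
    for pair, count in edges.items():
        if count >= min_cooccur:
            a, b = tuple(pair)
            parent[find(a)] = find(b)
    groups = {}
    for v in list(parent):
        groups.setdefault(find(v), set()).add(v)
    return [g for g in groups.values() if len(g) > 1]

records = [{"a", "b", "c"}, {"a", "b", "c"}, {"a", "b", "c"},
           {"a", "b"}, {"d", "e"}]   # a, b, c co-appear often; d, e only once
print([sorted(g) for g in suspect_communities(records)])   # -> [['a', 'b', 'c']]
```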
4.3 Vehicle Access Control Policy Based on Smart Contract
As a program running on the blockchain, a smart contract enables the establishment of trust among multiple institutions, as well as distributed management and execution of the program. Blockchain and smart contracts complement each other: the former provides a trusted execution environment for the latter, and the latter extends the applications of the former. The combination of blockchain and smart-contract technology has therefore become one of the research hotspots in recent years. As a set of commitments defined in digital form, a smart contract is in essence a piece of code that runs on a blockchain
network and is guaranteed to be open and reliably executed by a variety of distributed consensus mechanisms.
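For illustration, the kind of access-control logic such a contract might encode can be sketched in plain Python. Real contracts run inside a blockchain VM (for example, Solidity on Ethereum) and gain their openness and reliable execution from distributed consensus; this class only models the access rules themselves, and all names in it are hypothetical.

```python
# Hypothetical sketch of the access rules a vehicle access-control smart
# contract might encode; not a deployable contract.

class VehicleAccessContract:
    def __init__(self, owner):
        self.owner = owner
        self.authorized = set()      # vehicle ids allowed to access a resource

    def grant(self, caller, vehicle_id):
        if caller != self.owner:     # on-chain analogue: require(msg.sender == owner)
            raise PermissionError("only the contract owner may grant access")
        self.authorized.add(vehicle_id)

    def revoke(self, caller, vehicle_id):
        if caller != self.owner:
            raise PermissionError("only the contract owner may revoke access")
        self.authorized.discard(vehicle_id)

    def check_access(self, vehicle_id):
        return vehicle_id in self.authorized

contract = VehicleAccessContract(owner="transport_authority")
contract.grant("transport_authority", "vehicle_42")
print(contract.check_access("vehicle_42"))   # True
contract.revoke("transport_authority", "vehicle_42")
print(contract.check_access("vehicle_42"))   # False
```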
5 Conclusion
In this paper, a Sybil attack detection algorithm for Internet of Vehicles security is studied. In research related to Sybil attack detection for the Internet of Vehicles, vehicle movement behavior can be explored further. In recent years, more and more scholars have studied the modeling of vehicle movement behavior. On the one hand, the models consider more information and can fuse multi-source data such as people, vehicles and environment. On the other hand, to achieve accurate modeling, algorithms such as hidden Markov models and deep learning emerge one after another, further improving the detection accuracy of attack behavior.
Acknowledgement. Fund Project 1: This is the phased research result of “Research on Security Mechanism and Key Technology Application of Internet of Vehicles” (Project No: NY2020KYYB-08) from Guangzhou Nan yang Polytechnic College. Fund Project 2: This paper is the mid-stage research result of the project “Big Data and Intelligent Computing Innovation Research Team (NY-2019CQTD-02)” from Guangzhou Nan yang Polytechnic College.
References
1. Shandong, W., Liken, Z., Lingui, G., et al.: Accurate Sybil attack detection based on fine-grained physical channel information. Sensors 18(3), 878 (2018)
2. Jamshidi, M., Ranjbari, M., Esnaashari, M., et al.: A new algorithm to defend against Sybil attack in static wireless sensor networks using mobile observer sensor nodes. 43(3–4), 213–238 (2019)
3. Singh, R., Singh, J., Singh, R.: A novel Sybil attack detection technique for wireless sensor networks. Adv. Comput. Sci. Technol. 10(2), 185–202 (2017)
4. Vasudeva, A., Sood, M.: Survey on Sybil attack defense mechanisms in wireless ad hoc networks. J. Netw. Comput. Appl. 120, 78–118 (2018)
5. Zhang, Y.Y., Shang, J., Chen, X., et al.: A self-learning detection method of Sybil attack based on LSTM for electric vehicles. Energies 13(6), 1382 (2020)
6. Bharathi, A., Priyanka, D.Y., Padmapriya, P., et al.: Robust Sybil attack detection mechanism for social networks. J. Comput. Theor. NanoSci. 15(5), 1555–1561 (2018)
7. Mishra, A.K., Tripathy, A.K., Puthal, D., et al.: Analytical model for Sybil attack phases in internet of things. IEEE Internet Things J. 6(1), 379–387 (2019)
8. Thangasamy, P., Legesse, Y.: Defending Sybil attack in MANET by modified secure AODV. Int. J. Comput. Sci. Eng. 07(6), 12725–12730 (2017)
9. Pouyan, A., Parham-Alimohammadi, M.: An effective privacy-aware Sybil attack detection scheme for secure communication in vehicular ad hoc network. Wirel. Pers. Commun. 113(2), 34 (2020)
10. Palak, P.: Review on the various Sybil attack detection techniques in wireless sensor network. Int. J. Comput. Appl. 164(1), 40–45 (2017)
The Practical Application of Artificial Intelligence Technology in Electronic Communication Aixia Hou(&) College of Artificial Intelligence, Chongqing Creation Vocational College, Chongqing 402160, China [email protected]
Abstract. Artificial intelligence has currently reached an extremely popular stage of development, and at the same time it is improving and innovating at a rapid pace. This article discusses the practical application of artificial intelligence technology in electronic communication, hoping to promote and guide the application of artificial intelligence in China's electronic communication. Through a questionnaire survey of 13 technology companies in our province on the application of artificial intelligence technology in electronic communications, it is concluded that 75.70% of respondents believe that, in intelligent network security management, existing dangerous and virus data are detected and handled in a timely and effective manner; 80.20% believe that artificial intelligence realizes network-resource-sharing applications in electronic communications; and 90.40% believe that artificial intelligence is integrated into electronic communication to achieve data collection and integration. The research in this article helps to improve the practical application of artificial intelligence in electronic communication.
Keywords: Computer communication technology · Electronic information · Artificial intelligence · Practical application
1 Introduction
At this stage, with the rapid progress of science and technology, artificial intelligence technology has gradually entered every field of people's lives; it is extremely popular not only in China but throughout the world. With its further promotion, artificial intelligence is widely applied across many industries, raising their level of intelligence and powering the innovation of industry management [1, 2]. More and more artificial intelligence products are entering people's lives, greatly facilitating daily life, and their market share is expanding. Modern electronic information technology is also developing towards intelligence and cloud computing, which enables electronic information data-processing centers to complete information-processing work through the cloud [3, 4]. At the same time, with the development of network integrated circuits and data centers, electronic
© The Editor(s) (if applicable) and The Author(s), under exclusive license to Springer Nature Singapore Pte Ltd. 2021 M. Atiquzzaman et al. (Eds.): BDCPS 2020, AISC 1303, pp. 99–105, 2021. https://doi.org/10.1007/978-981-33-4572-0_15
information technology is gradually developing towards virtualization and intelligence. Moreover, it can transfer final data results to a network cloud-computing platform and improve the computer's ability to identify useful information through deep learning on the data, which is widely applied to information processing in the field of artificial intelligence [5, 6]. The development of information technology in the network age changes with each passing day, and artificial intelligence is one of the major directions of network development. The continuous development of computer communication technology and electronic information technology is also promoting the development of artificial intelligence technology [7]. Communication technology and electronic information technology can collect large amounts of data for calculation and share the processing results of transmitted data on an artificial intelligence platform, supporting the field of artificial intelligence [8]. The acceleration of data-processing speed and network information transmission has further promoted the progress of electronic information technology and network communication technology, and the development of artificial intelligence based on these technologies has accelerated accordingly. Their mutual use and mutual promotion greatly improve the operating efficiency and accuracy of artificial intelligence [9, 10]. This article reports a questionnaire survey of some technology companies in our province on the application of artificial intelligence technology in electronic communications, which is expected to promote and guide the application of artificial intelligence in China's electronic communications and to better serve people.
2 Computer Communication Technology and Electronic Information
2.1 Computer Communication Technology
Computer communication technology is composed of linear and nonlinear information and contains a multi-layer network structure. It can be used for visual-image work and intelligent speech recognition, both of which fall under the category of artificial intelligence; the connection between the two relies mainly on the data analysis in computer communication technology. Through this technology, data-center tasks can be completed quickly, which is of high value for large data volumes and difficult work. Computer communication technology contains advanced calculation methods: when the amount of data is large and the data model is complicated, it can effectively improve the speed and performance of task execution through parallel calculation and ensure that calculation tasks are completed quickly and accurately. In technical applications, neural network technology can continuously improve its own learning ability and thereby its computing and processing ability. With the support of a neural network, forward propagation, back propagation and iterative training of data can be realized. Through certain training and simulation, you
can obtain the data-processing model you want to match. From this we know that, with the support of computer communication technology, data processing in artificial intelligence can be realized.
2.2 Electronic Information Technology
Electronic information technology is an important technical method in information transmission. This technology mainly contains two technical aspects: information technology and electronic science. Electronic science in electronic information technology is a carrier technology supported by hardware, while information technology in electronic information technology is a data transmission and processing technology supported by hardware. Therefore, electronic information is essentially a means of information dissemination, based on Internet technology. The current electronic information technology is constantly developing towards intelligence and virtualization, which has laid a good foundation for artificial intelligence data mining. Based on the above, we can know that both computer communication technology and electronic information can realize data processing, which enhances our effective use of data information and provides a strong data foundation for the development of artificial intelligence.
3 Research Methods
This article mainly adopts the literature-analysis method and the questionnaire-survey method. We collected and read related books, newspapers, relevant policy documents issued by the state, and various materials obtained through the Internet on the application of artificial intelligence in computer communication technology and electronic information, focusing on the connotation, development history and social impact of artificial intelligence and its application in electronic communication, and summarized the research. We then investigated some technology companies in our province to understand the application of artificial intelligence technology in electronic communications. In this survey, 13 technology companies in our province were selected for telephone and online questionnaire surveys; 177 valid questionnaires were returned, for an effective rate of 88.5%. The practice and application of artificial intelligence is analyzed from the aspect of electronic communication, hoping to promote and guide the application of artificial intelligence in China's electronic communication.
4 Analysis of the Practical Application of Artificial Intelligence in Electronic Communication
4.1 Survey Results and Analysis
Through a questionnaire survey of some technology companies in our province, the collected questionnaires were sorted and analyzed, as shown in Table 1 and Fig. 1.
Table 1. Application of artificial intelligence in electronic communication

Practical application | Percentage
In network intelligent management, use artificial intelligence technology to carry out post-analysis of data | 88.70%
Complete road analysis through various technologies and sensors in artificial intelligence driving to deal with traffic problems | 67.80%
In network intelligent security management, timely and effective processing of existing dangerous and virus data | 75.70%
The use of artificial intelligence in electronic information technology can be implemented in software and hardware upgrades and maintenance | 77.40%
Artificial intelligence applications to share network resources in electronic communication | 80.20%
Artificial intelligence is integrated into electronic communication to realize data collection and integration | 90.40%
[Figure: horizontal bar chart of the six applications listed in Table 1; horizontal axis percentage (0–100%).]
Fig. 1. The application of artificial intelligence in electronic communications
It can be seen that, in the survey on the practical application of artificial intelligence in electronic communication, 88.70% of respondents think that artificial intelligence technology is used in intelligent network management to carry out post-analysis of data; 67.80% believe that, in intelligent driving, artificial intelligence uses various technologies and sensors to complete road analysis and deal with traffic problems; 75.70% believe that, in intelligent network security management, existing dangerous and virus data are processed in a timely and effective manner; 77.40% believe that artificial intelligence in electronic information technology can be applied in software and hardware upgrade and maintenance; 80.20% believe that artificial intelligence realizes network-resource-sharing applications in electronic communications; and 90.40% believe that artificial intelligence is integrated into electronic communication to achieve data collection and integration.
4.2 Practical Application of Artificial Intelligence in Electronic Communication
(1) Application in artificial intelligence driving. Artificial intelligence driving technology realizes the intelligent driving of automobiles through computer communication technology and electronic information. At present, intelligent driving is developing in the direction of automated unmanned driving and intelligent traffic reminders. Intelligent transportation systems use computer communication technology and electronic information to drive vehicles, and use information technology to collect and analyze data, which can coordinate road driving, guide traffic and provide traffic-assisted driving functions. Intelligent driving technology will continue to develop towards unmanned automation. In unmanned driving, cars are controlled intelligently, and road-analysis tasks are completed through various technologies, sensors and information- and data-computing equipment, so that possible traffic situations are anticipated and traffic problems handled in time. Automated unmanned driving requires high sharing of driving information and data, as well as a powerful electromechanical system to drive the car for intelligent services; this is the direction on which this technology will focus in future research and development.
(2) Application in network intelligent security management. In maintaining network information security, artificial intelligence is hard to avoid, and it can improve product security. The first protective door of network security is the detection of intrusion viruses with the help of computer communication technology and electronic information. Communication technology and information technology perform security analysis on data with the help of computers.
When a virus with obvious threats or uncertain information is found, instant feedback is generated and reported to the data-processing center by means of “pop-up windows”. This process is vital to the maintenance of computer network security. With the help of virus-identification technology, different computer viruses are recorded in files, analyzed, counted and marked, so that viruses can be effectively avoided and computer warning and network-defense technology effectively improved. In addition, this technology can be applied to intelligent firewalls, whose establishment can effectively block the intrusion of computer viruses, reduce the amount of calculation through comprehensive processing methods, improve the processing efficiency of virus data and ensure user safety. Artificial intelligence can mark, intercept and classify junk data with the help of recognition technology, so as to protect user rights and improve the customer-service experience.
(3) Application in network intelligent management.
Artificial intelligence can realize high-speed processing of information with extremely high accuracy. Therefore, in analyzing and processing information data, its advantage is achieving accurate information processing in the shortest time. Without artificial intelligence, it is difficult for electronic information technology to achieve both high efficiency and accuracy in data processing; the application of artificial intelligence makes up for this shortcoming. Intelligent network management requires comprehensive analysis of various data resources to facilitate information retrieval and consultation, so the role of communication technology and electronic information here is very important: it lays the foundation of network management through the collection and analysis of data. Integrating the intelligent characteristics of artificial intelligence then achieves more advanced data sorting and interaction, forming a more advanced network intelligent-management mode and improving the efficiency of data analysis.
5 Conclusion
In summary, computer communication technology and electronic information technology have strong data-analysis and processing capabilities, and they are the basis for the development of artificial intelligence applications. Artificial intelligence, closely related to electronic information technology, has become a typical representative of new technologies in the new era and has brought many conveniences to people's lives. Since electronic information technology inevitably encounters problems in application, applying artificial intelligence to electronic communication at this stage brings advantages in faster data processing and analysis, network security maintenance and network resource sharing. On the one hand, the development of electronic communications has promoted the progress of artificial intelligence; on the other hand, artificial intelligence applied in electronic communication can make up for some defects of electronic communication and promote its development. From the current situation, there is still much room for the development of artificial intelligence; in the future, its application will surely trigger a new technological revolution.
References 1. Prucha, T.: From the editor your digital information portal. Int. J. Metalcast. 13(2), 233 (2019) 2. Boedigheimer, A., Guevara, S.: The evolution of digital information and legal resources. Inf. Today 36(2), 15–17 (2019)
3. Kropotov, Y.A., Proskuryakov, A.Y., Belov, A.A.: Method for forecasting changes in time series parameters in digital information management systems. Comput. Opt. 42(6), 1093–1100 (2018) 4. Wang, Z., Fang, B.: Correction to: application of combined kernel function artificial intelligence algorithm in mobile communication network security authentication mechanism. J. Supercomput. 75(9), 5965 (2019) 5. Gaggioli, A.: Artificial intelligence: the future of cybertherapy? Cyberpsychol. Behav. Soc. Netw. 20(6), 402 (2017) 6. Upadhyay, A.K., Khandelwal, K.: Artificial intelligence-based training learning from application. Dev. Learn. Organ. 33(2), 20–23 (2019) 7. Gil, E., Díaz, K.M.: Electronic health record in Bolivia and ICT: a perspective for Latin America. Int. J. Interact. Multimedia Artif. Intell. 4(4), 96 (2017) 8. Carlo, B., Marco, C., Gianluigi, V., et al.: Digital information asset evaluation. ACM SIGMIS Database DATABASE Adv. Inf. Syst. 49(3), 19–33 (2018) 9. Garcia, D.F., Perez, A.E., Moncayo, H., et al.: Spacecraft health monitoring using a biomimetic fault diagnosis scheme. J. Aerosp. Comput. Inf. Commun. 15(7), 396–413 (2018) 10. Paschali, K., Tsakona, A., Tsolis, D., et al.: Steps that lead to the diagnosis of thyroid cancer: application of data flow diagram. IFIP Adv. Inf. Commun. Technol. 382(3), 56–65 (2017)
Application Research of 3D Reconstruction of Auxiliary Medical Image Based on Computer Chao Wang and Xuejiang Ran(&) School of Computer and Information, Inner Mongolia Medical University, Hohhot, Inner Mongolia, China [email protected]
Abstract. With national attention and the development of information technology, computer information technology products have gradually been integrated into our lives. With the rapid development of medical technology, computer information technology and medical imaging equipment, three-dimensional (3D) reconstruction of medical images is widely used in clinical diagnosis, surgical simulation, anatomy, B-ultrasound and other medical fields; it therefore has important application value and significance in medicine. Based on this, the purpose of this study is to investigate computer-based 3D reconstruction of medical images. In this paper, interpolation methods and the SIFT algorithm are used in the experiments to study the application of medical-image 3D reconstruction technology. The experimental results show that image interpolation and the SIFT algorithm can improve the brightness continuity and visual effect of the sequence images, so the information in the images can be observed better in the experiment. Computer 3D reconstruction technology plays a very important role in auxiliary medical imaging.
Keywords: Computer technology · Image interpolation · SIFT · 3D reconstruction
1 Introduction
In order to improve the accuracy of two-dimensional medical diagnostic images and the safety and scientific soundness of treatment planning, this paper proposes to transform a two-dimensional sectional image sequence into an intuitive three-dimensional image. The three-dimensional image can show the three-dimensional structure and shape of human organs well, so that doctors can obtain detailed information on anatomy and lesion sites that cannot be obtained with traditional medical imaging technology, which is convenient for accurate and effective diagnosis and treatment of lesions. Researchers have therefore been competing to study this problem and have put forward medical-image three-dimensional reconstruction and visualization technology, which, once proposed, has seen a great deal of research and application.
© The Editor(s) (if applicable) and The Author(s), under exclusive license to Springer Nature Singapore Pte Ltd. 2021 M. Atiquzzaman et al. (Eds.): BDCPS 2020, AISC 1303, pp. 106–112, 2021. https://doi.org/10.1007/978-981-33-4572-0_16
Ai et al. proposed a three-dimensional reconstruction technology based on Kinect. Their research shows that Kinect-based three-dimensional reconstruction has become relatively mature, and the improved reconstruction algorithm shows obvious improvement in reconstruction quality, frame rate and running time. On this basis, KinectFusion technology is introduced, which provides a solution for high-precision, fast 3D reconstruction based on consumer devices in the future [1]. Wang et al. proposed a three-dimensional reconstruction technology based on CT images, which makes use of the characteristics of the human visual system to display the three-dimensional morphology of objects and organs. After three-dimensional reconstruction, CT images can assist doctors in analyzing and displaying lesions and surrounding tissues, thus providing anatomical-structure information that cannot be obtained by traditional means [2]. As a result, doctors have realized computer simulation and operation planning for orthopedic surgery and radiotherapy, which greatly improves the accuracy and scientific soundness of important diagnoses in the medical field. The main applications of CT-based three-dimensional reconstruction in medicine include: simulation of general surgical operations; location and measurement in tumor detection; determination of the best scheme for cosmetic surgery and repair; calculation of the optimal location and dose of the radiation source in radiotherapy; and medical teaching. This research focuses on three-dimensional reconstruction of medical images. The experimental data are images obtained by three-dimensional reconstruction technology. Using computer graphics and image-processing technology, a two-dimensional image is converted to three dimensions, a plane-to-solid process.
Finally, a series of 2D slices is transformed into 3D graphics and screen images by computer technology. Because medical-image three-dimensional reconstruction is applied so widely in medicine, it is of great significance in the biological field [3].
2 Three-Dimensional Reconstruction of Medical Images
2.1 Three-Dimensional Reconstruction
3D reconstruction refers to establishing a suitable three-dimensional mathematical model for objective things in reality. In building the model, technicians process the object to be modeled in the electronic network environment and reconstruct the image through repeated attempts on the computer. In image reconstruction, camera calibration is carried out first; after calibration, the relationship between the camera's image coordinate system and the world coordinate system is computed. Finally, the basis of the image-reconstruction properties is analyzed, and 3D information is reconstructed from the information in 2D images. Three-dimensional reconstruction is therefore a key technology for building a virtual-reality world that expresses things in the objective world through computer technology.
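The calibrated relationship between image coordinates and the world can be illustrated with a minimal pinhole-projection sketch. The intrinsic parameters (focal lengths, principal point) and the example point are assumed values, and the extrinsic rotation and translation that calibration also estimates are omitted here (camera and world frames are taken as coincident).

```python
# Minimal pinhole-camera sketch of the world-to-image mapping that camera
# calibration determines. Intrinsics and the example point are illustrative
# assumptions; extrinsics (rotation/translation) are omitted.

def project(point_3d, fx, fy, cx, cy):
    """Project a 3-D camera-frame point (X, Y, Z) to pixel coordinates."""
    X, Y, Z = point_3d
    if Z <= 0:
        raise ValueError("point must lie in front of the camera")
    u = fx * X / Z + cx    # perspective division, then shift to the
    v = fy * Y / Z + cy    # principal point (cx, cy)
    return u, v

# assumed intrinsics of a 640x480 camera
u, v = project((0.5, -0.25, 2.0), fx=800, fy=800, cx=320, cy=240)
print(u, v)   # 520.0 140.0
```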
2.2 Interpolation Technology Between Images
In medical image processing, image interpolation technology can generate additional layers of data from existing cross-sectional images as raw data for 3D reconstruction. Generally, the inter-slice (vertical) resolution is low and necessary information between layers is missing, so the images must be interpolated between layers to obtain an accurate and clear volume.
2.3 Description of Interpolation Between Images
In modern numerical analysis, interpolation is a common tool for data processing and function tabulation, and the basis for many other numerical methods: it constructs continuous data between discrete data points [4–6]. After many years of research, image interpolation has become one of the most basic and commonly used mathematical methods. It estimates the value of a function at arbitrary positions from its values at a limited number of points; in other words, a complete mathematical description can be obtained from limited data. The interpolation function smooths the sampled values and recovers information lost during sampling, so interpolation can be regarded as the inverse process of sampling. Image interpolation, in turn, developed together with computer graphics and image processing: it regenerates image data of higher resolution from original image data of lower resolution. When a single low-resolution image is converted into a high-resolution one, this is called "image interpolation" proper, as in image magnification under a microscope. When new images are regenerated between multiple existing images, this is called "interpolation between images", as in interpolation between the slices of an image sequence. The direct result of image interpolation is that a rough image with few pixels becomes a fine image described by more pixels [7, 8]. Image interpolation is an indispensable step in image resampling and is widely used to improve image quality and to support lossy compression.
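A minimal sketch of interpolation between images (linear blending only; practical systems use higher-order or shape-based schemes) inserts intermediate slices between two adjacent cross-sections:

```python
import numpy as np

def interpolate_slices(s0, s1, n_new):
    """Insert n_new linearly interpolated slices between slices s0 and s1."""
    slices = []
    for i in range(1, n_new + 1):
        a = i / (n_new + 1)              # fractional position between s0 and s1
        slices.append((1.0 - a) * s0 + a * s1)
    return slices

# Two 4x4 example "slices"; one interpolated slice is their pixelwise mean.
s0 = np.zeros((4, 4))
s1 = np.ones((4, 4))
mid = interpolate_slices(s0, s1, 1)[0]
print(mid[0, 0])   # -> 0.5
```

Each interpolated slice fills the gap between measured sections, raising the through-plane resolution toward the in-plane resolution.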
On the other hand, image interpolation holds a special position in medical image processing, because serial slice images — the research object when reconstructing three-dimensional anatomical structure — must be interpolated. Since the distance between slices is usually larger than the distance between pixels within a slice, the data are not isotropic, and it is difficult to use serial slices directly for 3D reconstruction [9].

2.4 SIFT Algorithm
The SIFT algorithm extracts the information contained in an image as a set of local feature vectors, obtains sub-pixel precision by interpolation to locate the feature points accurately, and then matches and corrects the feature points according to the obtained SIFT
feature vectors. Research shows that the feature vectors obtained by the algorithm are robust to rotation, motion and scale changes, and remain stable under brightness changes, although the computation takes a long time [10]. Scale-invariant features are obtained by constructing the scale space and detecting its extreme points. The operator L(x, y, σ) is obtained by Gaussian smoothing of the original image I(x, y):

L(x, y, σ) = G(x, y, σ) * I(x, y)  (1)

G(x_i, y_i, σ) = (1 / (2πσ²)) exp(−((x − x_i)² + (y − y_i)²) / (2σ²))  (2)

where (x, y) are the coordinates of a pixel in space and σ is the scale. By adjusting the scale parameter, images of different sharpness can be obtained; reducing the scale reveals the detailed information of the image. The LoG (Laplacian of Gaussian) space is obtained, where the LoG operator is calculated as follows:

∇²G = ∂²G/∂x² + ∂²G/∂y²  (3)

LoG(x, y, σ) = σ²∇²G ≈ (G(x, y, kσ) − G(x, y, σ)) / (k − 1)  (4)
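The difference-of-Gaussians approximation to σ²∇²G used above can be checked numerically; the sketch below (plain NumPy; the grid spacing, σ and k are illustrative choices, not values from the paper) compares a finite-difference σ²∇²G against the quotient (G(kσ) − G(σ))/(k − 1):

```python
import numpy as np

def gaussian(x, y, sigma):
    # 2D Gaussian kernel G(x, y, sigma), centered at the origin
    return np.exp(-(x**2 + y**2) / (2 * sigma**2)) / (2 * np.pi * sigma**2)

h = 0.05                                   # grid spacing (illustrative choice)
ax = np.arange(-6.0, 6.0 + h, h)
X, Y = np.meshgrid(ax, ax)
sigma, k = 1.5, 1.02                       # scale and scale ratio (assumed values)

G = gaussian(X, Y, sigma)
# 5-point finite-difference Laplacian of G
lap = (np.roll(G, 1, 0) + np.roll(G, -1, 0) +
       np.roll(G, 1, 1) + np.roll(G, -1, 1) - 4 * G) / h**2
log_exact = sigma**2 * lap                 # sigma^2 * Laplacian of G
dog = (gaussian(X, Y, k * sigma) - gaussian(X, Y, sigma)) / (k - 1)

inner = (slice(5, -5), slice(5, -5))       # ignore the grid boundary
rel_err = (np.max(np.abs(log_exact[inner] - dog[inner])) /
           np.max(np.abs(log_exact[inner])))
print(f"max relative deviation: {rel_err:.3f}")
```

For k close to 1 the two surfaces agree to within a few percent, which is why difference-of-Gaussians pyramids are used as a cheap stand-in for the LoG in SIFT.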
3 Subjects

The object of this experiment is three-dimensional reconstruction of the human arm. Members of the research team took part, scanning their arms to obtain experimental data. The experiment comprises three groups: an experimental group, a control group and an analysis group. In the experimental group, the researchers collect two-dimensional image sequences by scanning the arm, then reconstruct them through repeated trials into three-dimensional images, obtaining the 3D information of the arm from which the conclusions below are drawn. The control group compares the 3D image data obtained by the experimental group with image data obtained by traditional medical imaging technology, in order to find the differences between them and identify the advantages of big-data-based 3D reconstruction for medical imaging. The analysis group makes a comprehensive and detailed analysis of the information and data obtained throughout the experiment and draws the most accurate and rigorous conclusions.
4 Discussion

4.1 Human Arm Experiment
In this human arm reconstruction experiment, the team used an L14-5/38 linear array probe compatible with the Sonix RP system to scan a 12 cm section of the forearm. Three groups of experiments were carried out at different scanning speeds, with the video capture card acquiring images at 25 Hz. As shown in Table 1, the first group of image data was collected at a relatively slow scanning speed with an acquisition time of 20 s; this group comprises 380 B-ultrasound images, and the region of interest (ROI) obtained by cropping is 515 × 478. The second group was acquired slightly faster, with an acquisition time of 10 s; it comprises 205 images, each with an ROI of 518 × 480. The third group was scanned fastest, with an acquisition time of 5 s; it comprises 75 two-dimensional images with an ROI of 518 × 478. These are the original two-dimensional ultrasound images of the human arm [11].

Table 1. Number of images processed at different scan intervals and time required
Experiment                   Acquisition time (s)   Image frames   ROI size
Slow scan                    20                     380            515 × 478
Medium-speed scan            10                     205            518 × 480
Rapid scan                   5                      75             518 × 478

4.2 Visualization Analysis of 3D Reconstruction of Images
In this experiment, the VNN, DW, GWM and Bezier interpolation algorithms are used to reconstruct the two groups of collected B-ultrasound image data, and the reconstruction errors of these methods are statistically analyzed. Figure 1 shows the average error and squared error of the 3D reconstruction of the first group of experimental data, where the percentage in the first column represents the proportion of pixels removed from a frame of the original image relative to the total number of pixels in the image. For example, 0% means reconstructing the original ultrasound image without removing any pixels; 50% means removing half of the pixels in a frame, selected at random; and 300% means that three consecutive two-dimensional ultrasound images are removed. In this group of experimental data the image acquisition speed is slow and the interval between the original images is small, so the neighborhood pixels of a voxel reflect its gray-level information more accurately, and taking the weighted average or median of these pixels is more accurate than Bezier curve interpolation [12].
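A minimal sketch of distance-weighted (DW) voxel interpolation, assuming simple inverse-distance weights (the exponent and neighborhood handling here are illustrative, not the exact scheme used in the experiments):

```python
import numpy as np

def dw_value(voxel, pixel_positions, pixel_values, p=2, eps=1e-9):
    """Estimate a voxel's gray value from nearby 2D-slice pixel samples
    by inverse-distance weighting."""
    d = np.linalg.norm(pixel_positions - voxel, axis=1)
    if np.any(d < eps):                       # a sample coincides with the voxel
        return float(pixel_values[np.argmin(d)])
    w = 1.0 / d**p
    return float(np.sum(w * pixel_values) / np.sum(w))

# A voxel midway between two samples of value 0 and 1 -> 0.5
pos = np.array([[0.0, 0.0, 0.0], [2.0, 0.0, 0.0]])
val = np.array([0.0, 1.0])
print(dw_value(np.array([1.0, 0.0, 0.0]), pos, val))   # -> 0.5
```

With closely spaced slices many samples fall near each voxel, which is why such neighborhood-weighted estimates outperform curve fitting in the dense case described above.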
Fig. 1. Comparison chart of average reconstruction error of the first group of experiments
Fig. 2. Comparison chart of average reconstruction error of the second group of experiments
The second group of experimental data was reconstructed with VNN, DW, Bezier and GWM, and the average interpolation error of the four reconstruction algorithms was measured (Fig. 2). As the figure shows, in the 0% experiments the error of the VNN, Bezier and DW algorithms is 0. In the 100% tests the interpolation error of the DW method is smallest, followed by the GWM method, while the VNN algorithm has the largest error. In the 100%–300% tests, the reconstruction error of the Bezier interpolation algorithm is still larger than that of the DW and GWM methods, but smaller than that of VNN. When the proportion of removed ultrasound images increases to 500% and 700%, the interpolation error of the Bezier algorithm becomes smaller than that of the other three algorithms; that is, its reconstruction is best under sparse conditions.
5 Conclusions

This paper studies three-dimensional reconstruction technology for medical images. The experimental data, together with a large body of clinical research data collected by our team, show that applying 3D reconstruction to medical images is highly worthwhile. It can transform the two-dimensional image sequences produced by traditional medical imaging equipment into more intuitive three-dimensional images, so that doctors obtain accurate information about lesions during diagnosis. It can also broaden the professional development space of medical imaging, let 3D reconstruction of medical images better serve the public, and provide favorable protection for people's health.
References
1. Duan, X., Chen, D., Wang, J., et al.: Visual three-dimensional reconstruction of aortic dissection based on medical CT images. Int. J. Digital Multimedia Broadcast. 20(1), 1–8 (2017)
2. Lv, S., Chen, Y., Li, Z., et al.: Application of time-frequency domain transform to three-dimensional interpolation of medical images. J. Comput. Biol. 7(1), 00–38 (2017)
3. Mun, D., Kim, B.C.: Three-dimensional solid reconstruction of a human bone from CT images using interpolation with triangular Bézier patches. J. Mech. Sci. Technol. 31(8), 3875–3886 (2017)
4. Pichat, J., Iglesias, J.E., Yousry, T., et al.: A survey of methods for 3D histology reconstruction. Med. Image Anal. 21(1), 73–105 (2018)
5. Zorzal, E.R., Sousa, M., Mendes, D., et al.: Anatomy Studio: a tool for virtual dissection through augmented 3D reconstruction. Comput. Graph. 85(1), 74–84 (2019)
6. Liu, S., Zhang, D., Song, Y., et al.: Automated 3-D neuron tracing with precise branch erasing and confidence controlled back tracking. IEEE Trans. Med. Imaging 37(11), 2441–2452 (2018)
7. Zhang, W., Song, Y., Chen, Y., et al.: Limited-range few-view CT: using historical images for ROI reconstruction in solitary lung nodules follow-up examination. IEEE Trans. Med. Imaging 11(12), 1 (2017)
8. Nam, T.K., Park, Y.S., Byun, J.S., et al.: Use of three-dimensional curved-multiplanar reconstruction images for sylvian dissection in microsurgery of middle cerebral artery aneurysms. Yonsei Med. J. 58(1) (2017)
9. Wang, H., Chen, F., Zhang, Y., et al.: Three-dimensional reconstruction of cervical CT vs ultrasound for estimating residual thyroid volume. Nan fang yi ke da xue xue bao = J. South. Med. Univ. 39(3), 373–376 (2019)
10. Guerriero, L., Quero, G., Diana, M., et al.: Virtual reality exploration and planning for precision colorectal surgery. Dis. Colon Rectum 61(6), 719–723 (2018)
11. Hasan, H.A.: Three dimensional computed tomography morphometric analysis of the orbit in Iraqi population. Int. Med. J. 24(1), 147–149 (2017)
12. Shi, J., Udayakumar, T.S., Wang, Z., et al.: Optical molecular imaging-guided radiation therapy part 2: integrated X-ray and fluorescence molecular tomography. Med. Phys. 44(9), 1 (2017)
Service Quality Research on Media Gateway

Xiaozhu Wang(&) and Xiaoxue Lu

Department of Information Technology and Business Management, Dalian Neusoft University of Information, Dalian, Liaoning, China
[email protected]
Abstract. In this paper, the traditional mobile network architecture and the soft-switch architecture are compared. The paper points out the advantages of the IP-based soft-switch network architecture and the problems it must solve. It then introduces the three IP QoS models. Finally, the QoS features supported by the media gateway are introduced.

Keywords: Best effort services · Differentiated services model · Integrated service
1 Mobile Core Network Based on IP Bearer

1.1 Classical Mobile Core Network Architecture
In the traditional mobile communication network, the MSC is responsible for call control and bearer control, and the BSC is responsible for managing the radio interface. ISUP signaling is used to transfer call control and bearer control information between MSCs. Voice traffic between MSCs is carried over TDM transmission, with PCM coding used for voice.

1.2 Soft-Switch Architecture Based on IP Bearer
Soft-switch technology is the core technology of NGN evolution. The soft-switch core based on all-IP bearer adopts an architecture that separates control from bearer. The MSC of the traditional mobile network, which implements both control and bearer, is replaced by an MSC server (MSC-S) and a media gateway (MGW): the MSC-S implements call control, and the MGW implements bearer establishment control [1]. The call control information transmitted between MSC-Ss no longer contains bearer-related information. Control messages are transmitted between MSC-Ss with BICC signaling, and the MGW is controlled by the MSC-S through GCP signaling. The bearer between MGWs is set up by IPBCP; IPBCP messages are encapsulated in GCP and BICC messages and delivered to the peer MGW. The voice traffic between MGWs, the GCP signaling between MSC-S and MGW, and the BICC signaling between MSC-Ss are carried over IP; the signaling between the MSC-S and other nodes can be carried over IP or TDM (Fig. 1).
© The Editor(s) (if applicable) and The Author(s), under exclusive license to Springer Nature Singapore Pte Ltd. 2021 M. Atiquzzaman et al. (Eds.): BDCPS 2020, AISC 1303, pp. 113–119, 2021. https://doi.org/10.1007/978-981-33-4572-0_17
Fig. 1. Mobile soft-switch architecture based on IP bearer (HLR, SCP/CMN, MSC-S and M-MGWs around an IP backbone; MAP and BICC signaling between MSC-Ss, GCP toward the M-MGWs, RTP/UDP/IP voice between M-MGWs, BSCs attached to the M-MGWs)
1.3 Advantages and Disadvantages of IP Bearer
The advantages of the IP-based soft-switch network over the traditional mobile network lie in the following aspects. It can improve voice quality: in the traditional network, user voice must be coded and decoded in the BSC, while in the IP-bearer network some new features can be enabled to reduce the additional codec conversions, improving voice quality [2]. In the traditional network, PCM coding is used for voice, and each voice channel occupies 64 kbit/s of bandwidth. In the IP-based soft-switch network, the core network can transmit compressed voice data, saving transmission bandwidth; IP transmission can save about 70% of the transmission bandwidth [3]. In the IP-based soft-switch network, the number of codec operations per voice path is reduced, so the demand for codec equipment also drops. With traditional TDM technology, voice and signaling are carried in dedicated channels, so quality of service is guaranteed. In an all-IP soft-switch network, signaling and voice services share the bandwidth of the IP bearer network, and the traditional IP network adopts the best-effort QoS mode, which guarantees no quality of service at all. Such a network cannot meet the needs of a telecommunication network, so measures must be taken to guarantee service quality and reliability when an IP bearer network carries a soft-switch network.
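To make the bandwidth figure concrete, a back-of-the-envelope comparison (assuming an AMR 12.2 kbit/s codec as the compressed coder — the paper does not name one — and ignoring IP/UDP/RTP header overhead, which pulls the real-world saving down toward the ~70% cited above):

```python
pcm_kbps = 64.0     # PCM (G.711) voice channel in the traditional network
amr_kbps = 12.2     # assumed compressed codec rate (AMR 12.2 mode)

saving = 1.0 - amr_kbps / pcm_kbps
print(f"payload-only bandwidth saving: {saving:.0%}")   # -> 81%
```

Per-packet headers on short voice frames consume a sizable share of the link, which is why header compression is often deployed alongside compressed codecs.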
2 IP QoS

QoS (quality of service) is a comprehensive index used to measure satisfaction with a service. It can be described by a series of quantitative parameters, including bandwidth, delay, delay variation, loss ratio and error ratio.
2.1 Best-Effort Model
At first, the IP network was used only to transmit simple data services, and it adopted connectionless service. The standard IP network uses the best-effort service model, which cannot guarantee any specific quality of service. This kind of IP network allows client hosts to be complex while the network core stays relatively simple; since the Internet had to support its own rapid growth, such a division of structure was beneficial. As more and more hosts are connected, demand for network services eventually exceeds network capacity. Service does not stop, but network performance gradually deteriorates: transmission delay varies (jitter) and packets may even be lost. This does not affect common Internet applications, but applications with real-time requirements, such as VoIP, cannot tolerate such delay. With the continuous growth of the Internet, a large number of real-time services have appeared on IP networks. Because real-time services are sensitive to transmission delay and delay jitter, their emergence exposes a serious defect of IP network technology: the absence of any QoS guarantee [4].

2.2 Integrated Services Model
In order to guarantee QoS on IP networks, the IETF first proposed the integrated services model, which establishes a path and reserves resources toward the receiver before data are sent, realizing end-to-end QoS. The integrated services model uses RSVP (Resource Reservation Protocol) as its main signaling protocol. Through RSVP, users can request a resource reservation for each service flow (or connection); the reserved resources may include buffer space and bandwidth. The reservation must be made at every hop along the path, and all routers along the path, including core routers, must maintain soft state for each RSVP data flow. In this way end-to-end QoS is realized. The advantage of the integrated services model is its strong QoS guarantee; its disadvantage is poor scalability, which shows in the following aspects. First, the amount of reservation state is proportional to the number of traffic flows, so as the network expands and traffic grows, the burden on each router becomes heavier, while router processing capacity is limited. Second, the model requires end-to-end resource reservation. The IP network is an open network managed by many operators; applying the integrated services model requires every node to support RSVP and maintain "soft state" for routes and resources, but no operator wants its network resources to be controlled by other operators' systems. Third, the model places high demands on routers: because of end-to-end reservation, every router from sender to receiver must support the signaling protocol. Because of these shortcomings, the integrated services model can be used only in small networks and is difficult to extend to the entire Internet.
2.3 Differentiated Services Model
Because the integrated services model is difficult to implement on existing networks, especially large WANs, the IETF developed the differentiated services model. Its basic idea is to classify data streams according to predetermined rules, so that many application data streams are aggregated into a limited number of stream classes [5]. Differentiated services effectively replace the wide-area use of RSVP. The main members of a differentiated services domain are core routers, edge routers and a resource controller. In differentiated services, the edge devices of the network classify each packet and mark the differentiated services (DS) field, which carries the service requirement of the IP packet. At the core nodes, the router selects the forwarding treatment corresponding to the code point carried in the DS field of the packet header. The differentiated services architecture of IP QoS reuses the type-of-service (TOS) field in the IPv4 header, renaming the 8-bit TOS field the DS field: 6 bits are available for current use and the remaining 2 bits are reserved. This field can be set according to predetermined rules so that downstream nodes obtain enough information, simply by reading the field, to process packets arriving at an input port and forward them correctly to the next-hop router. The differentiated services model simplifies the structure of the nodes inside the network and is much more scalable than integrated services. The differentiated services model is also called soft QoS or CoS (Fig. 2).
Fig. 2. IPv4 header (Ver, HLEN, TOS — subdivided into DSCP and ECN, with ECN split into ECT and CE — Total Length, Identification, Flags, Fragment Offset, TTL, Protocol, Header Checksum, Source IP Address, Destination IP Address)
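The DS-field layout shown in Fig. 2 can be illustrated with a few lines of bit arithmetic (a sketch; the field names follow the text above):

```python
def parse_tos(tos_byte):
    """Split the 8-bit TOS/DS byte: the upper 6 bits are the DSCP,
    the lower 2 bits the ECN field (ECT and CE flags)."""
    dscp = tos_byte >> 2
    ecn = tos_byte & 0x3
    ect, ce = (ecn >> 1) & 0x1, ecn & 0x1
    return dscp, ect, ce

# 0xB8 carries DSCP 46 (expedited forwarding) with the ECN bits clear.
print(parse_tos(0xB8))   # -> (46, 0, 0)
```

A router in a differentiated services domain reads only these bits to pick the per-hop forwarding behavior, which is what keeps the core stateless.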
One advantage of the differentiated services model is good scalability: the DS field specifies only a limited number of service levels, and the amount of state information is proportional to the number of levels, not to the number of flows. Another advantage is ease of implementation. Complex classification, labeling,
control and shaping operations are needed only at the boundary of the network, so the differentiated services model scales well [6]. Its disadvantage is that it still uses hop-by-hop packet forwarding, so end-to-end QoS support is insufficient.

2.4 IP QoS Summary
The integrated services model and the differentiated services model are the two main QoS models at present. The advantage of the integrated services model is that it realizes end-to-end QoS, but its poor scalability makes it hard to apply in large networks. The differentiated services model scales well and is easy to implement; its disadvantage is that it cannot realize end-to-end QoS. In practice, the two can also be combined. One method overlays the integrated services model on a differentiated services network: RSVP signaling passes completely transparently through the differentiated services network, while devices at the edges of the two networks process RSVP messages and perform admission control based on the availability of appropriate resources in the differentiated services network [7]. Another method is simple parallel processing: each node in the differentiated services network may also support RSVP, and policies decide whether a message uses the integrated or the differentiated services model.
3 QoS Control of the Media Gateway

In order to ensure reliable transmission of signaling and voice data over the IP network and to overcome the IP network's inability to guarantee quality of service, the MGW provides several features that avoid congestion and ensure service quality. The MGW's main quality-of-service functions include differentiated services support, static admission control, dynamic admission control and jitter compensation.

3.1 Measurement-Based Admission Control
Measurement-based admission control (MBAC) is a feature that decides whether to admit a call by dynamically probing the network state. The MGW monitors network traffic and collects statistics at the IP level to obtain real-time network performance information, on which admission decisions for service connections are based [8]. If the media gateway detects serious packet loss toward a site, it blocks connection requests to that site until the loss falls below a specific threshold. The statistics the MGW collects include the number of IP packets marked with ECN, the number of RTP packets received, the number of RTP packets expected but not received, and the number of RTP packets lost (Fig. 3). The MGW also supports ECN (explicit congestion notification). In the TOS field of the IP header, the upper 6 bits are defined as the DSCP and the lower two bits as the ECN field, which is divided into two flags, ECT and CE. ECT indicates whether the node supports
Fig. 3. Measurement-based admission control (an MSC-S asks, via BICC and IPBCP toward the M-MGWs at sites A and B, whether the packet loss from site A is acceptable)
the ECT capability, and CE indicates whether the network is congested. The MGW supports this feature as well: when a router in the site detects congestion, it marks the ECN bits in the header of IP packets sent toward the MGW to indicate that the network is congested. The MGW recognizes these marks and keeps statistics on them, which is one of the bases for judging network congestion.

3.2 Static Access Control
The MGW's static access control can limit the traffic admitted to the network when the IP network is overloaded, preventing congestion and guaranteeing the quality of service of all admitted services. For the IP traffic of media streams, the MGW records the bandwidth of the media stream channels it has allocated; the bandwidth allocated for each media stream depends on the selected codec. Network operators can configure the maximum allowed bandwidth on the MGW. Once the bandwidth of the allocated media stream channels reaches the preset threshold, all new calls are rejected except emergency calls. This is the media gateway's static access control [9]. The CE router inside the site is connected to the IP backbone router, and the bandwidth between them is limited; with static access control, traffic from the media gateway to the CE router can be limited to avoid congesting the uplink devices. The MGW tracks the currently allocated bandwidth through a performance measurement counter. If bandwidth utilization reaches 80% of the threshold, the MGW raises a warning that the system is about to run out of bandwidth; if it reaches 100%, the system starts to reject new traffic.
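The 80%/100% behavior described above can be sketched as a small admission object (a simplified toy model, not MGW code; class and parameter names are invented):

```python
class StaticAdmission:
    """Toy model of MGW static access control: track allocated media-stream
    bandwidth, warn at 80% of the configured maximum, reject at 100%."""

    def __init__(self, max_kbps):
        self.max_kbps = max_kbps
        self.used_kbps = 0.0

    def request(self, codec_kbps, emergency=False):
        if not emergency and self.used_kbps + codec_kbps > self.max_kbps:
            return False                       # threshold reached: reject
        self.used_kbps += codec_kbps           # emergency calls always admitted
        if self.used_kbps >= 0.8 * self.max_kbps:
            print("warning: bandwidth utilization at or above 80%")
        return True

mgw = StaticAdmission(max_kbps=1000.0)
print(mgw.request(640.0))                   # True
print(mgw.request(640.0))                   # False (would exceed the maximum)
print(mgw.request(640.0, emergency=True))   # True
```

The per-call bandwidth here stands in for the codec-dependent allocation the text describes; a real gateway would also release bandwidth when calls end.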
3.3 Support of QoS
The MGW supports the differentiated services model. In a differentiated services network the MGW sits at the edge, so it can mark the DSCP field of IP packets according to the service type. If the IP bearer network deploys differentiated services, each node in the network forwards packets according to the service level corresponding to the DSCP field.
The MGW has only one parameter related to DSCP: the DSCP marking parameter, defined in the MGW application, which indicates the DSCP value marked on outgoing IP packets. Its range is 0–63. For IP-based media streaming services, the default DSCP value of IP packets is 46, corresponding to the expedited forwarding (EF) class.

3.4 Jitter
The queuing of packets in network nodes is random, and two adjacent packets may take different paths through the network, so packets experience different propagation delays. In packet networks, the variation in packet arrival intervals caused by these varying delays is called jitter. Causes of jitter include queuing delay, variable packet size and the intermediate links. Jitter has no effect on ordinary data services but strongly affects real-time voice services [10]. The common method of jitter compensation is buffering at the receiving end: packets may enter the buffer at varying intervals, but after buffering they leave it at a constant interval. Although this processing increases delay, it is necessary to eliminate the impact of jitter.
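Receive-side compensation usually starts from an interarrival-jitter estimate; the sketch below implements the standard RTP running estimator from RFC 3550 (J += (|D| − J)/16), which a gateway-class device could feed into its playout buffer sizing:

```python
def update_jitter(jitter, d):
    """One step of the RFC 3550 interarrival jitter estimator, where d is
    the difference in relative transit time between two consecutive
    packets (in timestamp units)."""
    return jitter + (abs(d) - jitter) / 16.0

# A burst of packets whose transit times wobble by +/-16 units:
j = 0.0
for d in [16, -16, 16, -16]:
    j = update_jitter(j, d)
print(round(j, 3))   # the estimate climbs toward 16 from below
```

The 1/16 gain makes the estimate a slowly moving average, so momentary delay spikes do not cause the playout buffer to be resized abruptly.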
References
1. Bi, Y., Yuan, K., Feng, D., et al.: Disrupted inter-hemispheric functional and structural coupling in Internet addiction adolescents. Psychiatry Res. Neuroimaging 234(2), 157–163 (2015)
2. Nakayama, R., Takaya, Y., Akagi, T., et al.: TCTAP A-075 identification of high-risk patent foramen ovale associated with cryptogenic stroke. J. Am. Coll. Cardiol. 73(15), 38–39 (2019)
3. Jin, C., Gao, C., Chen, C., et al.: A preliminary study of the dysregulation of the resting networks in first-episode medication-naive adolescent depression. Neurosci. Lett. 503(2), 105–109 (2011)
4. Xing, L., Yuan, K., Bi, Y., et al.: Reduced fiber integrity and cognitive control in adolescents with internet gaming disorder. Brain Res. 1586, 109–117 (2014)
5. Zhang, C., Zhu, T., Chen, Y., Xu, Y.E.: Loss of preimplantation embryo resulting from a Pum1 gene trap mutation. Biochem. Biophys. Res. Commun. 462(1), 8–13 (2015)
6. Zhou, X., Zhou, Y., Liu, J., et al.: Study on the pollution characteristics and emission factors of PCDD/Fs from disperse dye production in China. Chemosphere 228, 328–334 (2019)
7. Ma, G., Dou, Y., Dang, S., et al.: Influence of monoenergetic images at different energy levels in dual-energy spectral CT on the accuracy of computer-aided detection for pulmonary embolism. Acad. Radiol. 26(7), 967–973 (2019)
8. Bo, Q., Chang, L., Chenwang, J., et al.: Serious bile cast formation after transarterial chemoembolization for liver cancer. Eur. J. Radiol. Extra 68(3), e121–e123 (2008)
9. Qingbai, S.: Construction and promotion of home media gateway. Radio Telev. Technol. 43(001), 67–71 (2016)
10. Yuanwen, D.: A platform software solution for media gateway. Today Electr. (005), 44–46 (2012)
A Review of Research on Module Division Methods Based on Different Perspectives

Yanqiu Xiao, Qiongpei Xia(&), Guangzhen Cui, Xianchao Yang, and Zhen Zhang

College of Mechanical and Electrical Engineering, Zhengzhou University of Light Industry, Zhengzhou, China
[email protected]
Abstract. Module division is the key and foundation of modular design. It should follow the basic principle of "strong aggregation within modules and weak coupling between modules". In product development and design, reasonable and effective module division is of great significance for designers seeking new functions and new solutions, and the rationality of the division largely determines product performance and cost. This article explains the significance of module division for mechanical products and classifies and summarizes the existing division methods and technologies. According to product characteristics, module division methods are reviewed from different perspectives, such as the product's function and structure, its green design attributes, and the different stages of its life cycle. Based on the analysis of these methods, some prospects are finally offered.

Keywords: Modular design · Module division · Green design
1 Introduction

With the continuous development of science and technology, market demand has become more and more diverse. Companies must speed up product development as much as possible while ensuring that customers' requirements for product quality, low cost and individualization are met [1]. Modular design is an innovative design method: through reasonable module division of the product, as few module types and instances as possible are used, and a variety of products can be obtained quickly through different combinations, meeting customers' differing performance demands. The common modules can be mass-produced and managed as a module library, which reduces production and management costs. Module division is an important research topic within modular design methods.
© The Editor(s) (if applicable) and The Author(s), under exclusive license to Springer Nature Singapore Pte Ltd. 2021 M. Atiquzzaman et al. (Eds.): BDCPS 2020, AISC 1303, pp. 120–126, 2021. https://doi.org/10.1007/978-981-33-4572-0_18
2 Overview of Related Research on the Division of Mechanical Product Modules

As a key technology of modular design [2], module division has received extensive attention from domestic scholars in recent years, and division methods have been studied from different product perspectives: some studies divide modules with respect to the product life cycle, some consider the product's green design attributes during division, and others propose multi-level division methods based on the product's functional structure. This article summarizes these module division methods from different perspectives as follows.

2.1 Module Division Method for Product Functional Structure
Oman et al. [3] used the idea of modularity to optimize the functional structure of a product family under different working conditions, decomposing the product family problem into sub-problems. Gong Zhibing et al., based on an analysis of product functional structure, established a fuzzy similarity matrix of product parts to express the module-forming process and divided modules by compound k-clustering. Cheng et al. proposed a module division method based on the design structure matrix (DSM): they analyzed the correlation between the functional structures of the parts, obtained a correlation matrix after calculating the correlation degrees, and finally solved and partitioned the matrix to obtain the modules. Liu Jiangang et al. used DSM to model the product structure and applied a genetic algorithm to cluster it. Starting from product function-structure-principle (behavior), the work in [4] applied axiomatic design theory and the behavior-compatibility principle to continuously decouple the product's functional structure, finally decomposing it into units with independent functional structures and compatible behaviors and establishing an FPBS decomposition model of the product. Other work improved division based on functional structure, proposing an innovatively designed division method that fully considers both function and structure, or established a functional structure model based on the similar dispersion of the product and used the division method proposed by Stone et al. to identify common modules. The functional-structure-oriented method is the basis of module division; to achieve multi-objective module division, other influencing factors of the product must also be considered.
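The DSM workflow summarized above (compute pairwise correlation degrees, then solve the matrix to obtain modules) can be illustrated with a minimal sketch that groups parts whose correlation exceeds a threshold via union-find. The part names, correlation values, and threshold below are hypothetical and are not taken from the cited studies.

```python
def divide_modules(parts, dsm, threshold):
    """Group parts whose DSM correlation >= threshold, using union-find."""
    parent = {p: p for p in parts}

    def find(p):
        while parent[p] != p:
            parent[p] = parent[parent[p]]  # path compression
            p = parent[p]
        return p

    n = len(parts)
    for i in range(n):
        for j in range(i + 1, n):
            if dsm[i][j] >= threshold:          # strong correlation: merge
                ra, rb = find(parts[i]), find(parts[j])
                if ra != rb:
                    parent[rb] = ra

    modules = {}
    for p in parts:
        modules.setdefault(find(p), []).append(p)
    return sorted(modules.values())

parts = ["shaft", "gear", "bearing", "housing", "cover"]
# Hypothetical symmetric correlation matrix (1.0 on the diagonal).
dsm = [
    [1.0, 0.8, 0.7, 0.1, 0.0],
    [0.8, 1.0, 0.6, 0.2, 0.1],
    [0.7, 0.6, 1.0, 0.1, 0.0],
    [0.1, 0.2, 0.1, 1.0, 0.9],
    [0.0, 0.1, 0.0, 0.9, 1.0],
]
print(divide_modules(parts, dsm, threshold=0.5))
# → [['housing', 'cover'], ['shaft', 'gear', 'bearing']]
```

The two resulting groups exhibit exactly the "strong aggregation within, weak coupling between" property that the division principle calls for.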
Each stage of the product life cycle also strongly influences the product, so these stages should be considered in module division as well.

2.2 Module Division Method for the Product Life Cycle
The product-life-cycle-oriented module division method takes into account both the product's functional structure and the attributes of each stage of the life cycle, as shown in Fig. 1.
[Figure: "product life cycle design" at the center, surrounded by its stage attributes — repairable, reusable, recyclable, removable, remanufacturing.]
Fig. 1. Schematic diagram of each stage of the product life cycle
Tang Tao et al. introduced the impact of the different life cycle stages into the product's module division, which can simultaneously guarantee the product's functional and environmental attributes. Guo Wei et al. proposed a life-cycle-oriented green module division method with eight division criteria based on the product life cycle and constructed an interaction matrix between modules. Su Ming et al. proposed an innovative module division method based on the division of product functions, optimizing the division results from the life cycle perspective and analyzing the interaction effects of each stage on the parts. Chen Bing et al. [5] started from the product's functional structure and material attributes, combined them with the influence of life cycle attributes, and used a combination of AHP and fuzzy clustering to divide a wrecker truck into modules. Tseng et al. considered the connection parameters (direction, type, etc.) between product parts and took them as the standard for module division. Umeda et al. integrated the life cycle attributes of maintainability, reusability, and recyclability and proposed a life-cycle-oriented modular design method. Yu et al. introduced relevant life cycle attributes alongside the product's functional attributes, established a clustering function model by analyzing the relationships between parts, and finally divided the product into modules using a group genetic algorithm. Ji et al. [6] proposed an effectiveness-driven module division method for the product life cycle that considers the design, manufacturing, assembly, and use attributes of the life cycle and evaluates the module granularity problem through effectiveness. The above literature considers the impact of each life cycle stage on module division. With the increase in environmental awareness, green design has emerged.
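Methods such as those of Chen Bing et al. and Yu et al. combine functional correlation with life-cycle attribute correlation before clustering. One common and simple way to do this is a weighted sum of the two correlation matrices; the weights and matrix values below are hypothetical illustrations, not figures from the cited work.

```python
def fuse_correlations(functional, lifecycle, w_f=0.6, w_l=0.4):
    """Comprehensive correlation = w_f * functional + w_l * life-cycle.

    Both inputs are symmetric part-by-part correlation matrices; the
    weights express the relative importance of the two perspectives and
    should sum to 1.
    """
    n = len(functional)
    return [[round(w_f * functional[i][j] + w_l * lifecycle[i][j], 3)
             for j in range(n)] for i in range(n)]

# Hypothetical 3-part example: parts 0 and 1 are functionally close but
# differ in life-cycle attributes (e.g. different recycling routes).
functional = [[1.0, 0.9, 0.2],
              [0.9, 1.0, 0.3],
              [0.2, 0.3, 1.0]]
lifecycle = [[1.0, 0.2, 0.8],
             [0.2, 1.0, 0.1],
             [0.8, 0.1, 1.0]]
comprehensive = fuse_correlations(functional, lifecycle)
print(comprehensive[0][1])  # 0.6*0.9 + 0.4*0.2 = 0.62
```

The fused matrix can then be fed to any of the clustering schemes above (fuzzy clustering, group genetic algorithms) exactly as a purely functional correlation matrix would be.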
Green design considers a product's resource consumption and environmental impact across the entire life cycle and can in principle penetrate every stage of it; however, at the current technical level, green design often considers only a few major environmental factors.

2.3 Module Division Method for Green Design
As people's awareness of environmental protection increases, the environmental friendliness of products has become ever more important, and modular design is one way to address it. The green-design-oriented module division method emphasizes introducing green design ideas during
module division. Green design across the entire product life cycle has attracted scholars' attention. To introduce green attributes effectively into module division, Guo Wei et al. proposed eight life-cycle-oriented division criteria to guide the process. Deng Tili et al. [7] established a green product division model based on customer needs, took the modularity between modules as the fitness function, and used a genetic algorithm to solve for a better division scheme. Liu Dianting et al. [8] proposed an ant colony algorithm solution based on an established green product model that considers the functional and structural relevance of product parts while also introducing the green attributes of the parts themselves. Wu Yongming et al. proposed a dynamic module planning method for product families based on the division of product family functional modules, considering the dynamic factors of each stage of the product life cycle. Chen Xiaobin, addressing the relationship between product functional structure and the environment during module division, constructed a product green information model and proposed a genetic-algorithm-based solution for green module division. Tseng et al. put forward various criteria for the strength of connections between product parts, analyzed the green properties of materials, and combined them with a Group Genetic Algorithm (GGA) to propose a design method for green product modules. Smith et al. considered the recycling and disassembly stages of the product life cycle and, combining the properties of the parts themselves, used atomic theory to divide the product into green modules. Yang et al., in order to avoid the constraints of function and structure in remanufacturing, proposed an ecological module division method oriented to the product life cycle, which offers guidance for product green module division.
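Several of the genetic-algorithm approaches surveyed here (Deng Tili et al., Chen Xiaobin, Tseng et al.) share one skeleton: encode each part's module label as a gene and let the algorithm maximize a modularity-style fitness. The toy sketch below illustrates only that skeleton; the correlation matrix, fitness definition, and GA parameters are hypothetical stand-ins, not the cited authors' formulations.

```python
import random

def fitness(labels, corr):
    """Average intra-module correlation minus average inter-module coupling."""
    intra, inter = [], []
    n = len(labels)
    for i in range(n):
        for j in range(i + 1, n):
            (intra if labels[i] == labels[j] else inter).append(corr[i][j])
    mean = lambda xs: sum(xs) / len(xs) if xs else 0.0
    return mean(intra) - mean(inter)

def ga_divide(corr, k, pop=40, gens=60, pm=0.1, seed=0):
    """Toy group-style GA: each gene is a part's module label in range(k)."""
    rng = random.Random(seed)
    n = len(corr)
    # Seed with the trivial one-module labeling so elitism guarantees the
    # result is never worse than doing no division at all.
    popn = [[0] * n] + [[rng.randrange(k) for _ in range(n)]
                        for _ in range(pop - 1)]
    for _ in range(gens):
        popn.sort(key=lambda g: fitness(g, corr), reverse=True)
        elite = popn[: pop // 2]            # elitist selection
        children = []
        while len(elite) + len(children) < pop:
            a, b = rng.sample(elite, 2)     # uniform crossover of two elites
            child = [a[i] if rng.random() < 0.5 else b[i] for i in range(n)]
            if rng.random() < pm:           # point mutation
                child[rng.randrange(n)] = rng.randrange(k)
            children.append(child)
        popn = elite + children
    return max(popn, key=lambda g: fitness(g, corr))

# Hypothetical part-part correlation matrix: parts 0-2 and 3-4 form blocks.
corr = [
    [0.0, 0.8, 0.7, 0.1, 0.0],
    [0.8, 0.0, 0.6, 0.2, 0.1],
    [0.7, 0.6, 0.0, 0.1, 0.0],
    [0.1, 0.2, 0.1, 0.0, 0.9],
    [0.0, 0.1, 0.0, 0.9, 0.0],
]
best = ga_divide(corr, k=2)
print(best, round(fitness(best, corr), 3))
```

Real green-module formulations extend the fitness with material, recycling, or disassembly terms; the chromosome encoding and the select-crossover-mutate loop stay the same.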
To balance the environmental impact of product development and design, Chang et al. proposed a module division method considering green attributes. Ji et al. considered the green attributes of product materials, established a master-slave structure optimization model, and proposed a green-attribute-oriented module division method. Yamada et al. [9] considered product recycling performance at the modular design stage to reduce waste emissions. It can be seen from the above literature that green modular design penetrates every aspect of the product life cycle, but it rests mainly on the two basic principles of improving resource utilization and reducing waste emissions. Moreover, the criteria scholars use for green modules vary widely; how to establish a basic, unified green criterion requires further research.

2.4 Module Division Methods in Other Aspects
Based on an analysis of the product bill of materials (BOM), Chen Yanhui et al. proposed a BOM-based module division method: a product structure tree (PST) is obtained from the BOM, and the nodes of the tree are analyzed to obtain the module division plan. Cheng Xianfu et al., considering the asymmetry of the design structure, proposed a division method combining a density algorithm with the DSM matrix, introducing Euclidean distance to express the degree of association between parts and avoiding the influence of matrix asymmetry. Jiao Jianqiang [10] introduced the concept
of biological gene transcription and splicing and proposed a product modularization method based on an ecological regulation network, realizing dynamic closed-loop regulation of product modularization. Zhang Fuying et al. [11] combined the concept of functional flow with the product life cycle and proposed a module division method for sustainable design, optimizing the division scheme from the sustainable design perspective to obtain the final scheme. Stone et al. proposed division criteria based on three flows (material, energy, and information flow) and, using the evaluation criteria of the different flows, divided the product into modules according to customer needs and functional conversions. Sheng et al. proposed a division method that calculates the correlation between any two sub-functions of the product. Yang et al. considered the product configuration problem and obtained a partitioning scheme by clustering the established matrix with the fuzzy c-means algorithm (FCM). Sheng et al. [12] studied configuration-process-oriented module division, describing service activity relationships with a directed graph and then dividing the service activities using the reachability matrix.
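The directed-graph and reachability-matrix step described by Sheng et al. [12] can be illustrated as follows: compute the transitive closure of the activity graph, then group activities that can reach one another. The adjacency matrix is a hypothetical example, not data from the cited study.

```python
def reachability(adj):
    """Transitive closure of a directed graph (Warshall's algorithm)."""
    n = len(adj)
    r = [[bool(adj[i][j]) or i == j for j in range(n)] for i in range(n)]
    for k in range(n):
        for i in range(n):
            for j in range(n):
                r[i][j] = r[i][j] or (r[i][k] and r[k][j])
    return r

def mutual_groups(adj):
    """Group activities that can reach each other (strongly connected sets)."""
    r = reachability(adj)
    n = len(adj)
    seen, groups = set(), []
    for i in range(n):
        if i in seen:
            continue
        grp = [j for j in range(n) if r[i][j] and r[j][i]]
        seen.update(grp)
        groups.append(grp)
    return groups

# Hypothetical service-activity graph: activities 0→1→2→0 form a cycle,
# while 3→4 is a one-way dependency.
adj = [
    [0, 1, 0, 0, 0],
    [0, 0, 1, 0, 0],
    [1, 0, 0, 0, 0],
    [0, 0, 0, 0, 1],
    [0, 0, 0, 0, 0],
]
print(mutual_groups(adj))  # → [[0, 1, 2], [3], [4]]
```

Activities in a mutual-reachability group form one candidate module; one-way dependencies (here 3→4) instead impose an ordering between modules.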
3 Analysis and Discussion

Modular design is an effective method for achieving both mass production and personalized design of products and is a driving force of product innovation. Module division is the core part of modular design and remains a hot issue among scholars at home and abroad. Based on the related literature, this article has provided a systematic overview of research on the module division of mechanical products at home and abroad, summarized the meaning of module division, and reviewed the methods separately from the aspects of product functional structure, life cycle stages, and green design. The literature involved is sorted out in Table 1. Judging from the review above, most existing methods use bottom-up cluster analysis, which is difficult to apply to complex mechanical products and leads to complex calculations. To adapt to changing market demands, a dynamic closed-loop system of modular design must be established; to reduce the coupling between the divided modules, the coupling degree must also be analyzed. Based on the above summary, the following prospects are made: (1) Most existing module division methods target products already in service. In the future, module division should be introduced at the product design stage to maximize the independence of modules. (2) Existing modular design methods have certain problems in responding to the market. In the future, a complete dynamic closed-loop module division system should be established that responds well to the needs of the market and customers.
Table 1. Document classification of mechanical product modules

Modular design of mechanical products | Researchers
Functional structure division | Jiang Hui et al. (1999); Gong Zhibing et al. (2007); Liu Jiangang et al. (2006); Oman et al. (2010); Cheng et al. (2010); Umeda et al. (1996); Dahmus et al. (2001)
Life cycle-oriented division | Tang Tao et al. (2003); Guo Wei et al. (2010); Su Ming et al. (2010); Chen Bing et al. (2018); Tseng et al. (2004); Umeda et al. (2008); Yu et al. (2011); Ji et al. (2012)
Division for green design | Deng Tili et al. (2018); Liu Dianting et al. (2014); Wu Yongming et al. (2013); Chen Xiaobin (2012); Smith et al. (2010); Chang et al. (2013); Yang et al. (2011); Ji et al. (2013); Yamada et al. (2018)
Other | Chen Yanhui et al. (2012); Cheng Xianfu et al. (2019); Jiao Jianqiang (2019); Zhang Fuying et al. (2017); Stone et al. (2000); Sheng et al. (2012); Yang et al. (2011); Lei et al. (2010); Sheng et al. (2017)
(3) Existing module division methods lack analysis of the coupling degree between modules after division. In the future, the coupling degree between modules and the degree of aggregation within modules should be studied to obtain better schemes. (4) With the development of a new generation of information technology, modular design is developing toward intelligence. The potential needs of the market and customers can be mined from big data to obtain more useful information, and in the future the weights of association relationships could also be chosen intelligently to avoid the influence of subjective human factors.
References 1. Yanhui, C., Yihua, H.: Multi-level module division method of electromechanical products. Mach. Des. Manufact. 10, 132–134 (2012) 2. Shisheng, Z., Huixia, W.: Research on module division method based on QFD and axiomatic design. Mach. Des. Manufact. 01, 98–100 (2013) 3. Oman, M.: Structural optimization of product families subjected to multiple crash load cases. Struct. Multi. Optim. 41(5), 797–815 (2010) 4. Yiming, S.: Research on modular design of mechanical products. Intern. Combust. Engines Accessories 01, 204–205 (2019). (in Chinese) 5. Bing, C., Chang, L., Wei, M., et al.: Research on module division technology of wrecker truck based on analytic hierarchy process. Manufact. Technol. Mach. Tool 4, 163–170 (2018). (in Chinese)
6. Yangjian, J., Guoning, Q., et al.: Modular design involving effectiveness of multiple phases for product life cycle. Int. J. Adv. Manufact. Technol. 66(9–12), 1475–1488 (2012) 7. Tili, D., Zhijun, R., Yunfei, C.: Research on product green module division method based on customer demand. Modular Mach. Tool Autom. Process. Technol. 537(11), 141–144+149 (2018) 8. Dianting, L., Haoping, H.: An ant colony algorithm for solving green module division. Manufact. Autom. 36(19), 66–69+78 (2014). (in Chinese) 9. Yamada, T., Hasegawa, S., Kinoshita, Y., et al.: Process integration concept for waste reduction among manufacturing planning, modularization and validation. Procedia Manufact. 21, 337–344 (2018) 10. Jianqiang, J.: Research on product module division method based on ecological regulation network. Zhengzhou University of Light Industry (2019). (in Chinese) 11. Fuying, Z., Jingying, D., Nana, S., et al.: Modularization method for product sustainable design. Packag. Eng. 38(19), 142–147 (2017). (in Chinese) 12. Zhongqi, S., Changsai, L., Junyou, S.: Module division and configuration modeling of CNC product–service system. Proc. Inst. Mech. Eng. Part C J. Mech. Eng. Sci. (2017)
Application of 3D Animation Technology in the Making of MOOC Chulei Zhang(&) School of Media and Communication, College of Humanities and Science of Northeast Normal University, Changchun 130117, China [email protected]
Abstract. Information technology is developing alongside the rest of society, and it has become an indispensable part of the education industry. With the advancement of science and technology, multimedia teaching has become common in modern education; while enhancing students' interest in learning, it creates a broader teaching space. Among these technologies, 3D animation is a cutting-edge development of information technology. Introducing animation technology into the classroom plays an important auxiliary role in improving teaching quality and also greatly improves students' interest in learning and their learning efficiency. Applying 3D animation technology to MOOC production can greatly improve current teaching ability: it breaks the limitations of time and space and makes the MOOC classroom more realistic. The purpose of this article is therefore to study the application of 3D animation technology in MOOC production. A questionnaire survey of 200 students was designed, and the case design was applied to educational practice. The experimental conclusions show that 3D animation technology is very important for MOOC production and can be used beneficially to improve teaching methods and classroom efficiency.

Keywords: Digital animation · 3D animation technology · MOOC production · Teaching innovation
1 Introduction

The advancement of technology has brought people into the information age. In the context of the new era, the status of computer information technology is increasingly prominent, and only by expanding its scope of application can it contribute to economic development and social progress [1, 2]. China's teaching system is constantly changing, and higher requirements are being placed on teaching methods. Computer information technology is a combined technology; applying it to courseware production can update and improve teaching methods [3]. The good development prospects of the animation industry have prompted many universities to open related majors, and today the industry is very attractive to young people, many of whom have joined it [4–6]. © The Editor(s) (if applicable) and The Author(s), under exclusive license to Springer Nature Singapore Pte Ltd. 2021 M. Atiquzzaman et al. (Eds.): BDCPS 2020, AISC 1303, pp. 127–133, 2021. https://doi.org/10.1007/978-981-33-4572-0_19
3D animation technology integrates virtual and real environments, enabling students to get an immersive experience. To use 3D animation technology effectively in modern MOOC production, teachers need to update their own teaching concepts and methods. So-called 3D animation technology is a comprehensive technology combining the advantages of simulation, multimedia, and related techniques; a very realistic virtual environment can be created through 3D technology, and users wear related equipment to get a vivid experience [7, 8]. Applying 3D animation technology in MOOC production can enhance students' personal experience, and their understanding of animation-related professional knowledge becomes more intuitive and profound [9]. In-depth exploration of relevant teaching resources and refinement of teaching methods promote the organic combination of 3D animation technology, teaching methods, and teaching resources, thereby greatly improving the teaching quality of MOOCs [10]. The application of 3D animation technology in education in China started relatively late and is still being explored, but its development momentum is rapid, and investment in the education field continues [11]. Currently, online teaching is realized through 3D animation technology and Internet technology. MOOCs not only showcase a teacher's personal style but are also one of the best ways for students to learn advanced cultural knowledge on the Internet. 3D animation technology can play an active role in students' learning, and improving students' learning enthusiasm is inseparable from innovative, diverse, and vivid presentation of teachers' teaching content and methods.
3D animation technology can not only reproduce the classroom intuitively and vividly but can also use post-editing and special-effects techniques to insert rich audio-visual materials, which helps attract students' attention and mobilize learning enthusiasm. In this regard, this article considers the actual situation of the students, compares learners' satisfaction with the teaching effects of different teaching methods, and puts forward suggestions for use. Using 3D animation technology in MOOC production can stimulate students' interest in actively participating in information technology activities and exploring the mysteries of information technology, and it can enable students to develop independent thinking and creativity while completing learning tasks.
2 Basic Methods Related to 3D Animation Technology

2.1 Digital Animation Concept
Digital animation is also called computer animation. It refers to a series of still images made with graphics and image processing technology using computer animation software as a tool. Exploiting the principle of visual persistence, these frames are played continuously at approximately 24 frames per second to form the animation effect of object movement; that is, the animation effect is composed of countless still frames. With the development of computer graphics, digital animation technology has entered a new era. Modern digital animation is a contemporary cutting-edge art created through
artistic techniques and high-tech methods. Compared with traditional animation, digital animation adds more dynamic effects. Computer animation breaks through the limitations of time, space, location, and conditions. It can focus, simplify, generalize, exaggerate, and anthropomorphize complex scientific principles, abstract scientific concepts, hidden internal structures, and traceless trajectories, visualizing content that is difficult for a camera to capture and creating images that accurately explain the problem and bring a good visual experience, thereby triggering a small technological revolution across the animation industry.

2.2 3D Animation Modeling
3D animation technology is a new technology that creates models and scenes in the computer's virtual three-dimensional world according to the shape and size of the objects to be represented, and then sets animation parameters such as the model's motion trajectory and the motion of the virtual camera as required. 3D modeling is divided into two modes: NURBS and polygon mesh. Polygon mesh modeling, which is based on polygon facets, is well suited to rendering and complex scene animation in film and television animation production and in advertising design. Current 3D models are mainly constructed with 3D animation software such as 3ds Max and Maya, including the buildings, characters, vegetation, machinery, and so on that appear in film and television animation. Computer virtual simulation technology is useful in many respects. When making 3D animation, a 3D model must first be created, including the generation of virtual scenes, 3D animated characters, and action performances. Modeling requires the collaboration of virtual simulation technology and interactive technology to establish the related models, after which 3D animated characters are introduced to present the content to be displayed. The difficulty of virtual simulation technology is that designers must fully prepare materials during modeling, sort out materials for virtual scenes and virtual characters, and prepare 3D drawings to ensure that the 3D animation can be established normally. Therefore, to apply virtual simulation technology to 3D animation quickly, modeling must be accelerated to improve work efficiency.
3 Experimental Method for Applying 3D Animation Technology in MOOCs

In order to improve students' information literacy and operational skills, this article uses a hierarchical, task-driven approach in the information curriculum. Because the students come from different majors, tasks of different difficulty are assigned; students try to complete them independently and discuss them in groups, while the teacher assists in completing the lesson's tasks. Taking the teaching of a mechanical course as an example, this article compares the teaching effect of the layered task-driven method with the teaching demonstration method, laying a foundation for
improving teaching efficiency. When selecting materials, we carefully studied the relevant development status at home and abroad and the application status of 3D animation technology in secondary vocational information technology classroom teaching; summarizing the ideas worth referencing and noting their shortcomings provided the directions requiring effort. Through actual distribution of the questionnaire, learners' satisfaction with classroom teaching and their understanding of teaching methods were collected, analyzed, and summarized to provide a basis for an effective experimental design. The subjects of the questionnaire were 200 students, 140 boys and 60 girls, drawn from six majors: computer and digital product maintenance, customer communication and service, automobile maintenance, railroads, electrical, and numerical control. The survey took the form of paper questionnaires; 195 valid questionnaires were collected, an effective rate of 97.5%. Through this investigation, information about the teaching effect of different teaching methods was obtained.
4 Experimental Conclusions on the Application of 3D Animation Technology in MOOC Production

Through the questionnaire survey and statistical analysis, the results show that 54.87% of students are very interested in the MOOC teaching method using 3D animation technology. Among the teaching methods that most arouse students' interest in learning, the task-driven method accounted for 21.49%, the case teaching method for 14.62%, and other teaching methods for 9.02%, as shown in Fig. 1.
[Pie chart legend: MOOC teaching with 3D animation technology; task-driven approach; case teaching method; other teaching methods.]
Fig. 1. Teaching methods that most arouse interest in learning
High-quality MOOC courses are inseparable from carefully crafted course design. Before making a MOOC, course producers need to conduct an in-depth analysis based on their professional field and clarify the topics they wish to present and teach in the course. Course design mainly means designing the
content and teaching methods of the course and planning in detail the core content and expression of each link in the teaching process, laying the foundation for a systematic and scientific MOOC course. The color of traditional MOOC courseware for mechanical courses is usually gray, and each component is likewise a monotonous gray; this reduces students' discrimination during observation and learning and falls short of the ideal teaching level. When making modifications, different colors should be assigned to the rendering materials of the 3D animation to distinguish the parts and improve the recognizability of each. However, given the sober learning environment of mechanical subjects, calm, dignified colors should be chosen and the background modified appropriately to highlight the main mechanical mechanism. The purpose of the sub-shot (storyboard) script is to visualize the text described in the script and lay out the concept and blueprint of the program; it organizes the camera movement, screen content, duration, and so on of each shot. According to the performance requirements of the mechanical-object principle, the 3D animation sub-shot script was designed as shown in Table 1.

Table 1. Sub-shot script of the mechanical 3D animation

No. | Shot type | Camera movement | Screen content | Duration
1 | Close shot | Camera is still | The whole mechanical object is displayed, then its mechanism is explained in three parts | 5 s
2 | Close shot / close-up | Pan | Explanation of the lower shaft of the mechanical object | 2 s
3 | Long shot | Rotate the lens | The mechanical object rotates 360°, showing images the camera cannot usually capture | 4 s
4 | Close shot | Push lens and rotate lens | The lens gradually moves away and rotates again | 3 s
5 | Close shot | Shift lens | Text notes are given for the mechanical parts, finally returning to the original screen | 18 s
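As a small cross-check linking Table 1 to the roughly 24 frames-per-second playback rate mentioned in Sect. 2.1, the five shot durations imply the following frame budget (a back-of-the-envelope sketch, not a figure from the paper):

```python
FPS = 24  # approximate playback rate cited in Sect. 2.1
shot_durations_s = [5, 2, 4, 3, 18]  # the five durations listed in Table 1

total_seconds = sum(shot_durations_s)
total_frames = total_seconds * FPS
print(f"{total_seconds} s -> {total_frames} frames")  # 32 s -> 768 frames
```

Even a half-minute sub-shot sequence therefore requires several hundred rendered frames, which is why the modeling speed discussed in Sect. 2.2 matters for production efficiency.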
After further investigation: 26.15% of students understood MOOC production with 3D animation technology, 46.67% understood task-driven teaching methods, and 25.64% understood case teaching methods. Among the factors affecting students' own interest in learning, teachers' teaching methods accounted for 30.26%, students' physical and mental state for 34.49%, interest in the subject for 27.44%, and the classroom learning atmosphere for 7.81%. In the survey on which factors learning efficiency depends on, 66.67% of students said it depends on their own learning status and 30.26% on the teacher's teaching method, as shown in Fig. 2. Among the factors affecting learning interest and learning efficiency, students' own learning status and physical and mental state are key; students can look for reasons in themselves and recognize their shortcomings. The second factor is the teacher's
[Bar chart: factors affecting learning interest (teacher's teaching method; student's physical and mental state; subject interest; classroom learning atmosphere) and learning efficiency (own learning status; teacher's teaching method), axis 0.00%–80.00%.]
Fig. 2. Factors affecting learning interest and learning efficiency
teaching method. Therefore, in modern education, teachers should grasp the above factors, use a variety of teaching methods, and continuously explore and innovate new teaching methods, which can better stimulate students' interest in learning and improve the quality of classroom teaching. Making mental images concrete, microscopic phenomena visible, and abstract concepts intuitive can all be achieved through computer animation. For example, in mechanical disciplines some principles are complex, abstract, and difficult to show with real objects, and most of the operating principles of mechanical mechanisms occur inside the machine, limited by angle and space; these principles are invisible to the naked eye, impossible to shoot with a camera, and among the more difficult points in the teaching of mechanical principles. When such material cannot be taught with physical objects and is hard to describe with words and pictures, the special medium of 3D animation can solve these problems well: through 3D animation simulation and demonstration, close-ups, and slow-motion playback, the principles in the courseware and the operational state of the mechanism can be described vividly. To give animated characters personality and vitality, anthropomorphism, exaggeration, deformation, and other performance techniques can also be applied to otherwise monotonous MOOC courseware, bringing lifeless machines to life. 3D animation technology is powerful: through exaggeration and deformation it makes MOOC production more vivid.
At the same time, 3D animation offers vivid and realistic material rendering, and can construct lighting effects that are difficult to achieve with traditional or two-dimensional animation.
Application of 3D Animation Technology
133
5 Conclusion
MOOC teaching is an important way to improve the current level of higher education. In practice, teachers and producers must not only correctly understand the connotation and characteristics of MOOCs and ensure the rationality of the course theme, but also attend to preparation, content layout, recording and editing, and combine theory with practice to produce high-quality MOOC courses. 3D animation technology is used increasingly in MOOC production for classroom teaching: it has become the preferred software for some teachers making courseware, while others insert 3D animation material into their courseware, making 3D animation an important part of classroom teaching. Such animation is attractive, vivid, and interactive, and the resulting files are small, which favors network distribution, so more and more teachers who make courseware are joining the ranks of 3D animation users. In the future, more teachers will likely use 3D animation to make courseware and experience the unparalleled fun of this medium.
Acknowledgements. This research is supported by the Social Science Foundation of Education Department of Jilin Province (Grant No. JJKH20181315SK).
Foreseeing the Subversive Influence of Intelligent Simulation Technology for Battle Example Teaching
Nan Wang and Miao Shen
Department of Warship Command, Dalian Naval Academy, Dalian, Liaoning, China
[email protected]
Abstract. How battle example teaching can better serve combat and training is an important research question. The simulation field has now introduced artificial intelligence, virtual reality and cloud computing, and simulation based on these techniques will have a far-reaching influence on battle example teaching. Intelligent simulation technology will remodel the analysis elements of battle examples, reconstitute the research ideas behind them, and overturn how they are studied. Battle example teaching methods based on intelligent confrontation, scenario reproduction and wargame maneuver will emerge, helping researchers capture the inspiration of victory from battle examples, feel the art of command in virtual confrontation, and excavate the mechanisms of defeat through retrospective study.
Keywords: Intelligent simulation technology · Battle example teaching · Application
1 Introduction
Past battle examples may differ from modern warfare in equipment, but their operational concepts are unfading and still hold important implications for modern warfare. Battle example teaching has traditionally relied on words, images, illustrations and sand tables. Researchers have poured painstaking effort and wisdom into summarizing many classic battle examples from all eras and countries, and these written summaries have always played an important role in combat and training. At the same time, many kinds of intelligent simulation technology are profoundly changing people's life and work, with a deep impact on military research. Simulation itself is both a process of knowledge processing and a process of analysis and research, yet modern simulation technology has not been widely used in battle example teaching (BET). As a general supporting technology, simulation can help solve difficult problems in battle example teaching and generate key insights and innovative ideas that other methods cannot replace. Case study (CS) should seize the opportunity of new technology and achieve high-quality development,
© The Editor(s) (if applicable) and The Author(s), under exclusive license to Springer Nature Singapore Pte Ltd. 2021 M. Atiquzzaman et al. (Eds.): BDCPS 2020, AISC 1303, pp. 134–141, 2021. https://doi.org/10.1007/978-981-33-4572-0_20
otherwise it will easily be marginalized. Foreseeing the impact of intelligent simulation technology on case study will inject lasting momentum into follow-up research.
2 Intelligent Simulation Technology Will Reshape the Elements of Battle Example Teaching
Engels said, “As soon as technological progress can be used for military purposes and has been used for military purposes, they will immediately, almost forcibly against the will of the commander, cause changes or even alter the way of operations” [1]. When it comes to the Persian Gulf War, people think of Operation Desert Storm and Warden's Five Rings; when it comes to the Iraq War, people think of decapitation strikes and effects-based operations theory. In the past, there was little difference in the interpretation of battle examples or in the perspectives from which they were studied: latecomers often just stared at the “back of the neck” of their predecessors and treated earlier results as final conclusions, which was clearly not conducive to the development of battle example studies. Modern battle example teaching therefore needs new perspectives and new technologies to uncover new enlightenment. It should stand on the “shoulders” of predecessors without sticking to “habitual interpretation”: great minds may think alike, but researchers must not be trapped by received conclusions.
2.1 AI-Enabled Battle Example Teaching Will Provide a Strong Opponent
Many lines of research in the modern military field are based on artificial intelligence service platforms [2]. Artificial intelligence algorithms that can exchange data with each other provide favorable conditions for battle example teaching. For example, “AlphaGo”, the first program to beat professional human Go players, is an important symbol of artificial intelligence in the new era. It works by conducting deep learning through multi-layer artificial neural networks, and it is through deep learning that “AlphaGo” finally made all industries realize the extraordinary ability of artificial intelligence. The cases in which AI has won at chess and games indicate that AI can be applied to battle example teaching. What matters is not realizing how strong artificial intelligence is, but combining it closely with research on battle example teaching and switching from purely theoretical analysis to AI-assisted thinking. Applying artificial intelligence to battle example teaching will provide updated analysis elements, expand a broader research space, and open up a high-level opponent scenario that poses strong challenges on the recreated battlefield. By building intelligent opponents, we can analyze the essence and characteristics of battle examples more deeply, focus on the key problems to be solved, and critique past war experience, so as to establish new concepts, explore new ideas, and form new thinking.
2.2 The Integration of Virtual Reality with Battle Example Teaching Will Provide an Unprecedented Historical Scene
Virtual reality technology is an important branch of modern simulation technology and provides an unprecedented visual experience. At present, most scenes made with virtual reality are based on models of people, objects, the environment and their interaction [3]. These simulation models reflect the essence of things and are presented through computer peripherals, which greatly reshapes the interaction between humans and simulated scenes. Applying virtual reality to battle example teaching provides an interactive virtual battlefield and a constructed environment in which details can be studied. Its advantage in displaying an interactive battlefield scene is that it integrates computer graphics, sensing technology, three-dimensional display technology and other disciplines. Through this integration it can project realistic historical battlefield scenes, giving researchers an environment in which to immerse themselves, interact and conceive ideas. In such a virtual interactive battlefield, the content and form of case teaching become richer, the feel of combat actions at the time becomes more direct, and the understanding of a commander's decision-making becomes more profound.
2.3 Cloud-Computing-Supported Battle Example Teaching Will Provide Precise and Comprehensive Data Support
Unlike earlier simulation technology, cloud computing emphasizes delivering simulation as a network service: users can find professional information technology solutions in network services according to their needs. The data processing methods of battle example teaching also need to keep pace with information development; backward methods not only hinder the development of case study but also hamper the application of case results in combat and training. Against the background of the big data era and the rapid development of data processing technology, cloud computing will profoundly affect how data are analyzed and processed in battle example teaching. Cloud platforms have already sprung up in many industries, such as “Y-English” and “cloud medical treatment”, and the “cloud” likewise has room for application in battle example teaching. Researchers can run battle examples written in the “cloud” or establish cloud-based data analysis services. A cloud platform can integrate previous case database construction and concentrate data on personnel, weapon and equipment performance, action points, battlefield environment and more, so that researchers need not repeat data processing and can concentrate on in-depth research using cloud data. Battle example teaching supported by cloud computing is therefore more efficient and of greater application value, providing precise and comprehensive data support for analysis.
3 Intelligent Simulation Technology Will Reconstruct Battle Example Teaching
Simulation technology develops rapidly, but battle example teaching has not yet entered the advanced stage of simulation system development and application. The more scientific the concept and the more reasonable the method, the more valuable the case study. At present, case study consists mainly of on-the-spot discussion and text analysis, so its research focus is insufficient; in form it relies mainly on flat, single media, so it is not attractive enough; in experiment it lacks complete and effective technical support, so its practical benefits are not fully realized. The purpose of introducing intelligent simulation technology is precisely to solve these problems of a single perspective, a lack of methods, insufficient dimensions and difficulty of experience.
3.1 The Concept of Battle Example Teaching Changes from “Cognitive Analysis” to “Precise Analysis”
“Cognitive analysis” refers to understanding and mastering the facts and processes of cases, including basic facts and objective circumstances such as their causes, courses and outcomes. However, the purpose of battle example teaching should not be limited to cognition of case facts; it should also draw valuable enlightenment for current combat and training. Accordingly, case study needs a detailed investigation of every link in a case; otherwise its potentially high value cannot be uncovered and its conclusions are hard to make convincing. The concept of battle example teaching should therefore shift from “cognitive analysis” to “precise analysis”. Through precise analysis, the internal and external causality of a case can be examined in depth, revealing the art of decision-making, the operational laws and the command strategy of the case itself, much as observing a cell under a microscope reveals its “cytoplasm” and “nucleus”. Cases may contain many factors helpful to future wars, and only such “microscopic analysis” can dig out their essence and open a new world. Modern simulation technology should be used to reconstruct the international political, economic and diplomatic environment for precise analysis, and data should be used to analyze factors such as forces, combat actions, operational guidance and battlefield environment. Precise analysis should attend not only to the basic characteristics and laws of a case but also to its specific details; it should consider not only how the weak side can exploit its advantages but also how to strike at the weaknesses of a strong enemy.
3.2 The Research Method of Cases Evolves from “Written Discussion” to “Experimental Platform”
“Written discussion” in battle example teaching refers to summarizing, in written form, the typical combat theories, successful experiences and lessons of failure found in case study. Written discussion brightens the “eyes” of battle example teaching
and makes researchers see more clearly and farther. However, written discussion alone cannot let later generations really feel how a case unfolds, and it is difficult to experience the interaction of combat actions in person; the “experimental platform” can. To carry out battle example teaching on an experimental platform, the first step is to design the combat cases, select the experimental verification points and set up the required experimental conditions, and then to analyze and obtain results by establishing and running simulation models on the platform [4]. The experimental platform extends the “hands” of battle example teaching, letting researchers feel more deeply and experience more thoroughly. Written discussion mainly summarizes the cause, background, process and outcome of a case and finally draws enlightenment, which is a basic analysis process [5]. If intelligent simulation technology is used to carry out battle example teaching on an experimental platform, the coherent cycle of observation, judgment, decision-making, action and reflection can be carried out, so that researchers can practice on the platform. The evolution of battle example teaching from written discussion to experimental platform rests on simulation technology: setting the battlefield background, standardizing the command process and opening up thinking and strategy. The construction of a battle example experimental platform is therefore the logical starting point for simulation technology to influence future battle example teaching.
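As a minimal sketch, the observation, judgment, decision-making, action and reflection cycle on an experimental platform might be organized as a simple loop. Every state variable, threshold and rule below is invented purely for illustration; a real platform would run full simulation models instead:

```python
# Illustrative skeleton of the experiment cycle on a hypothetical
# battle-example platform. All states and rules are made up.
def run_experiment_cycle(initial_state, rounds, decide):
    """Run observation -> judgment -> decision -> action -> reflection."""
    state = dict(initial_state)
    log = []
    for r in range(rounds):
        observation = state.copy()                 # observe the situation
        favorable = observation["own_strength"] >= observation["enemy_strength"]
        judgment = "favorable" if favorable else "unfavorable"
        decision = decide(judgment)                # researcher's decision rule
        if decision == "attack":                   # act on the decision
            state["enemy_strength"] -= 10
        else:
            state["own_strength"] += 5
        log.append((r, judgment, decision))        # reflect: keep a record
    return state, log

final, log = run_experiment_cycle(
    {"own_strength": 50, "enemy_strength": 60}, rounds=3,
    decide=lambda judgment: "attack" if judgment == "favorable" else "regroup",
)
```

The returned `log` plays the role of the reflection step: each pass records what was judged and decided, so the researcher can review the whole chain afterwards.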
4 Intelligent Simulation Technology Will Subvert Battle Example Teaching Methods
Deep study of battle examples gives birth to innovation in combat methods, and applying intelligent simulation technology to battle example analysis is often accompanied by new research methods. In connection with the general procedure of case study, three methods based on intelligent simulation technology will play an important role in the future.
4.1 Battle Example Teaching Based on Intelligent Countermeasure
Battle example teaching based on intelligent countermeasure (IC) uses agents to replace entities in the research process. By observing agent-versus-agent confrontation, or by confronting intelligent opponents directly, researchers can understand how commanders create and seize combat opportunities, gain battlefield initiative, and transform the battlefield situation under inferior conditions to finally achieve victory. Artificial intelligence can be a strong opponent of human beings in games, and it can likewise play one or more sides in battle example teaching [6]. In a simulated case, the intelligent opponent can learn directly from input and experience and confront the human according to established procedures or rules. Battle example teaching with intelligent countermeasures can be “human-machine (H-M)” or “machine-machine (M-M)” [7]. In the “human-machine” mode, the case analyst
is one side of the case and artificial intelligence is the other. The “human-machine” mode separates the deduction plot from the case process so as to make full use of a particular plot in the case, expand the research space, and deeply excavate the case's potential value through confrontation with intelligent opponents. The “machine-machine” mode sets up intelligent simulation models for both (or all) sides in a battle example and integrates data, rules and entity models into a research platform for operation and systematic study. It is mainly aimed at the overall problems or large scenes of a war: it highly integrates the simulation resources needed in the battle example and uses a visual operation interface and computer statistical analysis tools to call, run and analyze the case data, so as to draw macroscopic experimental conclusions.
4.2 Battle Example Teaching Based on Scenario Reproduction
Battle example teaching based on scenario reproduction (SR) combines visualization technology, especially virtual reality, with battle example simulation experiments; it is a simulation research method with strong information-age characteristics. History can be repeated in the “scenario” and the process experienced in the “reproduction”, which offers a positive and beneficial reference for exploring new warfare methods and developing new weapons and equipment. Scenario-reproduction-based teaching can also be used in military training, college teaching, weapon and equipment demonstration, combat doctrine development and other applications. Scenario reproduction is a simulation concept: based on the battlefield scene in the battle example, troops are organized to drill and arranged in the virtual scene, so that researchers can digest, absorb, integrate, inherit and innovate through scene experience. This method gives full play to the advantages of virtual reality and simulation experiment technology and gives researchers a strong visual impact. It is not a simple repetition of the case process, but a deep experience of the combat process through practice, completing the case study in the cycle of “assessing the situation, making a decision, taking action, new situation”. Scenario-reproduction-based teaching can display all kinds of combat elements with a realistic visual experience, and it provides good human-computer interaction, making it convenient to adjust the various case elements; each experimental element is displayed in the scene to enhance immersion.
4.3 Battle Example Teaching Based on Big Data Deduction
Battle example teaching based on big data deduction (BDD) is a research process in which the opposing parties in a case use big data to deduce, decide and command-counter (CC) military actions in a simulated battlefield environment according to the rules of the battle example. As a bridge between case practice and case theory [8], big data is the key link in improving the effect of battle example teaching. Combining battle example teaching with big data deduction gives full play to the practical
advantages of big data and makes up for the lack of cognitive theory in battle example teaching [9]; this is an innovation in battle example teaching. Battle example teaching based on big data deduction includes an electronic map simulating the battlefield environment, units simulating combat forces, and notes and deduction rules simulating various actions or their results. The deduction rules summarize the experience of past combat cases: they derive from long-term accumulation of case study and experimental data [10] and contain a large amount of operational theoretical knowledge, operational practice experience and distilled laws of combat. Simulating and reproducing battle examples with big data deduction highlights the dominant position of the case researcher. Researchers can study the problems in a case systematically from the decision-maker's perspective and experience the influence of the various complex factors on decision-making at the time. Big data deduction thus becomes a process of re-experiencing history and creating history, achieving the purpose of deeply studying the tactics of a war, summing up experience and drawing lessons.
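A deduction rule of the kind described above, an experience-derived mapping from an action under given conditions to an outcome, could be sketched as a small lookup table. The rule table, action names and adjustment formula below are made-up placeholders, not derived from any real case data:

```python
# Hypothetical deduction-rule table: (action, terrain) -> base success
# probability. On a real platform these values would come from long-term
# accumulation of case study and experimental data.
RULES = {
    ("ambush", "mountain"): 0.7,
    ("ambush", "plain"): 0.3,
    ("frontal_assault", "plain"): 0.5,
}

def deduce(action, terrain, strength_ratio):
    """Combine a base rule with a force-ratio adjustment, clamped to [0, 1]."""
    base = RULES.get((action, terrain), 0.4)   # default for unknown pairs
    adjusted = base * strength_ratio
    return max(0.0, min(1.0, round(adjusted, 2)))
```

For instance, `deduce("ambush", "mountain", 1.2)` yields 0.84, while a large force advantage saturates at 1.0; the clamp keeps the rule's output interpretable as a probability.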
5 Conclusions
The more scientific the research method and the more reasonable the practice mechanism, the more actively people will participate in research. With the promotion of new technologies, future battle example teaching will shift its focus and reorganize its methods, and its applicability and real-time performance will be significantly enhanced. Battle example teaching based on intelligent simulation technology will build an interactive platform for personnel exchange, so that researchers can draw inspiration from the cases. In a word, introducing intelligent simulation technology into battle example teaching will promote the renewal of the concept of war case study and play a key role in further improving the quality of battle example teaching and giving full play to its benefits.
References
1. Engels, F.: Complete Works of Marx and Engels, vol. 20, pp. 70–187. People's Publishing House, Beijing (1964). (in Chinese)
2. Lurch, S., Kopeck, D.: Artificial Intelligence, pp. 14–35. People's Posts and Telecommunications Press, Beijing (2018). (in Chinese)
3. Su, K., Zhao, S.: VR Virtual Reality and AR Augmented Reality, pp. 48–85. People's Posts and Telecommunications Press, Beijing (2017). (in Chinese)
4. Yang, S.: General Design of Warchess, pp. 47–68. China Machine Press, Beijing (2018). (in Chinese)
5. Huang, C.: Strict Self-Evaluation: Wargame Derivation and Application, pp. 12–47. Aviation Industry Press, Beijing (2015). (in Chinese)
6. Wu, M.: Intelligent Wars: AI Military Enjoyment, pp. 11–49. National Defense Industry Press, Beijing (2020). (in Chinese)
7. Li, Q.: Artificial Intelligence and Industrial Change, pp. 9–36. Shanghai University of Finance and Economics Press, Shanghai (2020). (in Chinese)
8. Wei, Q.: Future of Digital Finance: Graphic Big Data + Industrial Convergence, pp. 12–55. Guizhou People's Publishing House, Guizhou (2018). (in Chinese)
9. Tu, Z.: Top of Data, pp. 12–56. CITIC Press, Beijing (2019). (in Chinese)
10. Wang, W.: Principle and Practice of Cloud Computing, pp. 13–67. People's Posts and Telecommunications Press, Beijing (2018). (in Chinese)
Construction of Smart Campus Under the Background of Big Data
Kui Su, Shi Yan, and Xiao-li Wang
Mudanjiang Medical University, Mudanjiang 157011, China
[email protected]
Abstract. Against the background of the continuous promotion of education informatization and the release of the 14th Five-Year Plan, the focus of university informatization is gradually shifting from building digital campuses to building smart campuses, and smart campus construction has become a hot topic. This paper therefore analyzes the connotation of the smart campus and its basic structure from a layered perspective, then discusses the concrete construction of a smart campus at each level. On that basis, the practice of one university campus is discussed and a set of experiences for smart campus construction is summed up. The smart campus is the inevitable trend of the future.
Keywords: Smart campus · Big data and cloud computing · Modern education
1 Introduction
Education is the foundation of national progress [1]. In an era of rapid informatization, information-based education is central to the development of colleges and universities, and promoting informatization policy is the basic strategy for supporting the modernization of education [2]. With the rapid development of smart sensors, big data processing centers and high-speed networks, building a modern smart campus and promoting the integration of smart teaching have become the trend of modern university education. In June 2018, the State Administration for Market Regulation and the Standardization Administration of China published the national standard “Overall Framework of Smart Campus” (GB/T 36342–2018) [3]. Subsequently, to accelerate the modernization of education, China's Ministry of Education explicitly proposed the creation of smart campuses in its 2019 work priorities, with deep integration of information technology and teaching to accelerate education in the information age [4]. “China Education Modernization 2035”, later issued by the State Council, likewise proposed the unified construction of an integrated platform for intelligent education, services and management to accelerate smart campus construction [5]. Hence, this paper discusses how to build a smart campus and what problems must be faced in doing so.
© The Editor(s) (if applicable) and The Author(s), under exclusive license to Springer Nature Singapore Pte Ltd. 2021 M. Atiquzzaman et al. (Eds.): BDCPS 2020, AISC 1303, pp. 142–148, 2021. https://doi.org/10.1007/978-981-33-4572-0_21
2 Concept and Connotation of Smart Campus
A smart campus, as its name implies, adds wisdom to the school. How? What is needed is modern technology: smart sensors, big data and cloud computing centers, high-speed networks, and so on. Different levels of technology in different eras have given the smart campus different meanings. In the 1980s and 1990s, computer technology in its infancy was introduced to some campuses for office work and teaching, forming the embryonic form of the smart campus. Compared with traditional methods, computers obviously improved the efficiency of office work, management and teaching. For example, each teacher could input the scores of students in his class into the computer through WordStar under DOS and store them on a floppy disk; the management staff then aggregated the floppy disks from each class into one computer, analyzed the scores, and quickly obtained the minimum, maximum, mean and score distribution of each class. This way of working may seem outdated now, but more than 30 years ago such use of computers already counted as intelligent. In contrast to today's smart campus based on big data and the Internet of Things, some scholars call the use of computers and the Internet for smart office work and teaching a digital campus [6–8]. It can be seen that, with the development of technology and the progress of the times, the concept of the smart campus has been given different meanings. It is currently understood that a smart campus based on big data and the Internet of Things uses a variety of information technologies and innovative ideas to integrate the campus's systems and services, improve the efficiency of resource utilization, and optimize campus management and services [8, 9].
This in turn can improve students' learning efficiency and quality [10]. On a smart campus platform, smart devices run through classrooms, laboratories, conference rooms, offices, playgrounds, dormitories and every other corner in the form of the Internet of Things. They connect to the big data service center through wired networks or the mobile Internet and are managed uniformly through a software application platform, realizing the digitization and intelligence of the school's teaching, scientific research, daily life and administration.
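The early grade-aggregation workflow described above (collecting per-class score files and computing the minimum, maximum, mean and score distribution) can be sketched in a few lines. The class names, scores and band boundaries here are invented for illustration:

```python
from statistics import mean

# Hypothetical per-class score lists, standing in for the floppy-disk
# files that each teacher once handed to the administrative staff.
class_scores = {
    "Class A": [72, 85, 90, 64, 78],
    "Class B": [88, 91, 70, 66, 83],
}

def summarize(scores):
    """Return the minimum, maximum, mean and a coarse score distribution."""
    bands = {"<60": 0, "60-79": 0, "80-100": 0}
    for s in scores:
        if s < 60:
            bands["<60"] += 1
        elif s < 80:
            bands["60-79"] += 1
        else:
            bands["80-100"] += 1
    return {"min": min(scores), "max": max(scores),
            "mean": round(mean(scores), 1), "distribution": bands}

for name, scores in class_scores.items():
    print(name, summarize(scores))
```

The point of the sketch is only that the statistics named in the text are a few lines of computation once the scores are digitized; the gain over manual tallying is exactly what made early computerization feel "intelligent".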
3 Framework of Smart Campus
The overall goal of current construction is to achieve one-stop digital intelligence in teaching, learning, daily life, management, and industry-education-research cooperation. To divide the information exchange under each system into manageable modules, and to facilitate interaction, expansion and understanding among different facilities, the smart campus architecture shown in Fig. 1 borrows the layered design of the OSI/ISO open network protocol model. Through this hierarchical division, the construction of the smart campus is split into several layers, roughly: the infrastructure layer, the supporting platform layer, the application platform layer and the application terminal layer.
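The four-layer division above can be made concrete as a simple data structure. The example components listed for each layer are illustrative assumptions, not items taken from the standard or from Fig. 1:

```python
# A sketch of the smart-campus stack described above, bottom to top.
# Component names are hypothetical examples, not a normative list.
SMART_CAMPUS_LAYERS = [
    ("infrastructure layer",
     ["campus network", "IoT sensors", "servers and storage"]),
    ("supporting platform layer",
     ["big data center", "cloud computing services", "unified authentication"]),
    ("application platform layer",
     ["teaching management", "scientific research support", "campus life services"]),
    ("application terminal layer",
     ["web portal", "mobile apps", "smart classroom devices"]),
]

def describe(layers):
    """Print each layer with its example components, bottom-up."""
    for i, (name, components) in enumerate(layers, start=1):
        print(f"Layer {i}: {name} -> {', '.join(components)}")

describe(SMART_CAMPUS_LAYERS)
```

As with the OSI model the division borrows from, each layer only consumes services from the layer below it, which is what keeps the modules independently manageable and expandable.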
Fig. 1. Overall structure of smart campus
4 Construction and Implementation of Smart Campus
4.1 Infrastructure Layer
This layer is the foundation of campus construction, including the construction of campus infrastructure and the construction of hardware resources such as server storage and related physical security maintenance specifications. Specifically: 1. The construction plan of the basic campus environment and the transformation of infrastructure: This involves the renovation of the ceiling, the renovation of the wall, the installation of doors and windows, the installation of raised floors, the installation of partition walls and other decoration-related projects; the placement and installation of electrical wiring and various slots are related to electrical appliances and weak current Relevant projects; security access control, automatic alarm, intercom control, emergency exit, fire extinguishing system and other projects related to fire safety. 2. The introduction of smart electronic devices: including intelligent equipment such as display, patrol, sound reinforcement, intercom, lighting, broadcasting, air conditioning, all-in-one card, etc. The core of the smart device is the smart sensor, and the smart sensor inputs the acquired information to the data processing center or edge controller through the Internet of Things routing. 3. Hardware storage and server construction: including the construction of private databases, media databases, public management databases and other databases and the basic guarantee of storage server capacity. 4. Campus network construction and Internet of Things coverage: Campus network construction is divided into wired network construction and wireless network construction. The smart campus network has a large number of connected devices and a complex structure. Therefore, it needs not only high reliability and safety, but also redundancy and ease of disassembly and assembly. 
In the network construction, we adopted a three-level architecture: Gigabit to the node, 10 Gigabit to the building, and 100 Gigabit at the server core. Various devices on the smart campus can be interconnected through the network, and data are automatically collected by smart sensors and uploaded to the smart campus big data platform, thereby achieving Internet of Things coverage. The smart campus network must be free of bottlenecks: the core switch needs sufficient bandwidth and efficient performance, data exchange across the entire network must not be throttled, and core and aggregation switches must have strong expansion capability. In addition, enough room for expansion should be kept as demand changes, to meet the needs of accurate, safe, reliable, and high-quality exchange and transmission of information.
5. Security protection for the operation of the Internet of Things and data centers: data-center security includes the storage security of the cloud storage service center software (such as management of passwords and system backups) and the security of the storage servers. To prevent damage to the big data center from disasters such as long-term power outages or fires, a local disaster backup and recovery center needs to be built, which can be deployed at the computing centers of other campus building nodes or at other secure points. To prevent the data center servers from being affected by a power outage, not only
146
K. Su et al.
a large-scale UPS is required as a backup power supply, but direct storage replication technology should also be used to synchronize the data of the big data center and the disaster backup and recovery center. Security related to the Internet of Things includes backup of network nodes, avoidance of single points of failure, line protection, and so on. Network lines and smart devices need the necessary dust and water protection. If a link fails, the redundant link can forward the corresponding data to ensure uninterrupted transmission.
4.2 Supporting Platform Layer
The supporting platform layer is the core layer that embodies the smart campus big data and its computing capabilities; it drives and supports the various application services of the smart campus. Since the storage system in the data center has to support the complex applications of the university and provide intelligent services for teachers and students, we adopt a modular layout for the big data center that makes it highly flexible and expandable. The data center mainly comprises data processing, data exchange, data services, and a unified interface and support platform. Data processing includes data mining, data analysis, data fusion and data visualization. The data exchange unit expands existing applications on the basis of the infrastructure layer's databases and servers; it covers data storage, data aggregation and classification, data extraction and data promotion. Data service units include data security services, data report services, data sharing services, etc. The unified interface and supporting platform include interfaces and modules for security, openness, portability and manageability, including unified identity authentication, authorization, and various interface services. Multiple business systems are planned in the big data center; each is independent and logically isolated, and each requires independent storage space. Under the traditional construction approach, each business system would be equipped with its own set of storage devices. That scheme has a clear structure and keeps the business systems independent of each other, but it brings a heavier maintenance workload and higher construction costs, and is more likely to waste storage resources and unbalance resource allocation.
With the continuous evolution of storage technology, storage area networks and centralized storage are increasingly used in big data centers, which is also determined by the multi-service character of the big data center. The data platform of the cloud computing center can likewise be realized with storage-area-network architecture and centralized storage.
4.3 Application Platform Layer and Terminal Layer
The application platform layer is the highest layer of the smart campus, and its services are the concrete manifestation of campus intelligence. On the basis of the layers described above, the application platform layer builds the specific resource, environment, service and management applications of the smart campus. As shown in Fig. 2, it provides a wide range of services for teachers, students and managers.
The application platform layer includes the intelligent teaching environment and resources, and the efficient service and management of the smart campus. The application terminal layer is the implementation interface of the platform layer: service objects access the shared platform services and various resources through browsers and mobile terminals.
Fig. 2. Construction framework of Application layer and terminal layer
5 Conclusion
The construction of the smart campus is of key significance to comprehensively improving the level of informatization services, the sense of happiness and gain that teachers and students derive from them, and the modernization of the school's governance system and capabilities. At present, the intelligent construction of campuses is still in an exploratory stage, and many problems remain to be explored and many difficulties to be solved: for example, inadequate information resources, insufficient awareness of them, and a lack of guaranteed continuous funding. The development of science and technology determines how smart the smart campus can be; with continued technological advances, the smart campus will usher in a period of vigorous development.
Acknowledgements. This work was supported by IPIS2012; the Heilongjiang Province Higher Education Teaching Reform Project "Research on the construction of smart campuses in local colleges and universities under the background of big data" (Project No. SJGY20180531); and the Heilongjiang Provincial Department of Education Basic Research Business Expense Project "Research on Application of Electronic Medical Record Data Mining Based on Association Rules" (Project No. 2018-KYYWFMY-0096).
References
1. Pan, D., Ge, S., Tian, J., et al.: Research progress in the field of adsorption and catalytic degradation of sewage by hydrotalcite-derived materials. Chem. Rec. 20(4), 355–369 (2020)
2. Wang, H.M., Chen, J.: The present situation and future of Hainan province with information technology promoting the balanced development of compulsory education. In: 2019 IEEE International Conference on Computer Science and Educational Informatization (CSEI), pp. 290–293. IEEE (2019)
3. Cheng, J.: Research on the informatization management innovation of college student status files under the background of smart campus. J. Jinling Inst. Technol. (Soc. Sci.) 33(03), 64–67 (2019) (in Chinese)
4. Gao, J.: Smart campus - the future can be expected. Young People 05(05), 23 (2019) (in Chinese)
5. Liu, C.: Accelerate the modernization of education - start a new journey of building an education power - interpretation of "China education modernization 2035". Educ. Res. 40(11), 4–16 (2019) (in Chinese)
6. He, X., Lu, X.: Discussion on school teaching in the era of wisdom. Curriculum Textbook Teach. Method 40(02), 43–50 (2020) (in Chinese)
7. Tang, Z.: The concept and development of smart campus. School Adm. 121, 125–140 (2019)
8. O'Leary, D.E.: 'Big data', the 'internet of things' and the 'internet of signs'. Intell. Syst. Account. Finan. Manag. 20(1), 53–65 (2013)
9. Dong, Z.Y., Zhang, Y., Yip, C., et al.: Smart campus: definition, framework, technologies, and services. IET Smart Cities 2(1), 43–54 (2020)
10. Hirsch, B., Ng, J.W.P.: Education beyond the cloud: anytime-anywhere learning in a smart campus environment. In: 2011 International Conference for Internet Technology and Secured Transactions, pp. 718–723. IEEE (2011)
Information Platform for Classroom Teaching Quality Evaluation and Monitoring Based on Artificial Intelligence Technology Guoyong Liu(&) Xi’an Fanyi University, Xi’an 710105, Shaanxi, China [email protected]
Abstract. The purpose of this paper is to study the construction and implementation of an information platform for classroom teaching quality (TQ) evaluation and monitoring based on artificial intelligence (AI) technology. Teachers' classroom teaching is taken as the object of TQ evaluation and control for in-depth, detailed research, in order to find a more reasonable TQ evaluation method and thus provide accurate feedback for the improvement of TQ. Eighteen teachers were taken as test subjects, a team of five experts made an objective evaluation of each teacher's TQ indexes at all levels, and the evaluation scores were taken as the original data. In simulation experiments on this data source, the error of training the TQ evaluation indexes with a BP neural network is no more than 0.002.
Keywords: Artificial intelligence · BP neural network · Classroom TQ evaluation · Monitoring information platform
© The Editor(s) (if applicable) and The Author(s), under exclusive license to Springer Nature Singapore Pte Ltd. 2021. M. Atiquzzaman et al. (Eds.): BDCPS 2020, AISC 1303, pp. 149–155, 2021. https://doi.org/10.1007/978-981-33-4572-0_22

1 Introduction
AI is a cutting-edge subject developed from the interpenetration of computer science, network technology, information theory, neurophysiology, psychology, philosophy and linguistics [1, 2]. It mainly uses machines to simulate and realize human intelligent behaviors [3, 4]. AI, atomic energy and space technology are regarded as three of the most advanced technological achievements [5, 6]. Comprehensive universities are the basis for cultivating outstanding talents in various industries, and college students are the reserve army of national talent [7]. In this context, it is essential to improve the level of student management, accelerate its professionalization and specialization, and build a comprehensive quality evaluation system for college students to guide and assist school management decisions [8]. Many universities have repeatedly modified and gradually improved their methods of comprehensive evaluation. The purpose of evaluation is to promote the shift from exam-oriented education toward quality education, and the specific methods have changed greatly [9]. What used to be primarily qualitative now combines qualitative and quantitative aspects, drawing on documentary records, reviews, questionnaires and related tests. The weights used in comprehensive evaluation have also changed: the shares given to ability, humanistic quality, and physical and mental quality have increased, and the ratio among all aspects has changed greatly [10].
In this paper, teachers' classroom teaching is taken as the object of TQ evaluation and control for in-depth, detailed research, to find a more reasonable TQ evaluation method and thereby provide accurate feedback for the improvement of TQ. Eighteen teachers were taken as test subjects, a team of five experts made an objective evaluation of each teacher's TQ indexes at all levels, and the evaluation scores were taken as the original data. In simulation experiments on this data source, the error of training the TQ evaluation indexes with a BP neural network is no more than 0.002.
2 AI and TQ Evaluation
2.1 TQ Evaluation and Control
TQ control takes the existence of quality fluctuation as its premise: according to TQ management standards and the relevant information acquired and collected, through comparative analysis and monitoring feedback, it reasonably organizes the quality elements and corrects deviations in time, so that quality tends to improve amid fluctuation and the TQ objectives are smoothly achieved.
2.2 BP Neural Network
In fact, judging students' knowledge scores amounts to diagnosing students' knowledge deficiencies. An effective model for such diagnosis is the feedforward multilayer network, called a BP network because the back-propagation (BP) algorithm is used in its learning and training process. It adopts least-mean-square-error learning: in the working stage the input vector propagates forward to the output layer, while in the learning stage the error propagates backward toward the input layer. Adjacent layers of the network are fully interconnected; there are no connections between neurons within the same layer, and no direct connection between the output layer and the input layer. The number of hidden-layer nodes can be set freely as needed, and it has been proved that a three-layer feedforward network can realize an arbitrary mapping. Let the connection weight from neuron j in layer l to neuron i in layer l + 1 be $w_{ji}^{l}$, let P be the current learning sample, let $o_{pi}^{l}$ be the output of neuron i of layer l + 1 for sample P, and let the transfer function be the sigmoid function, i.e.

$$f(x) = \frac{1}{1 + e^{-x}} \tag{1}$$
For sample P, the output error of the network is $E_p$:

$$E_p = \frac{1}{2}\sum_{j=0}^{n-1}\left(t_{pj} - o_{pj}^{(l)}\right)^2 \tag{2}$$

where $t_{pj}$ is the ideal output of the j-th output neuron when sample P is input, and $o_{pj}^{(l)}$ is its actual output. The number of hidden-layer nodes in the three-layer network is not arbitrary; according to the empirical formula, the number of hidden nodes l can be given by:

$$l = \sqrt{m + n} + a, \quad a = 1, 2, \ldots, 10 \tag{3}$$

where m and n are the numbers of input and output nodes, respectively.
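As an illustration (not the authors' code), the three-layer BP procedure built from Eqs. (1)–(3) can be sketched in plain Python. The dimensions, learning rate, epoch count and toy samples below are all assumptions:

```python
import math
import random

def sigmoid(x):
    # Eq. (1): f(x) = 1 / (1 + e^(-x))
    return 1.0 / (1.0 + math.exp(-x))

# Assumed dimensions: m = 5 input indicators, n = 1 output score.
m, n = 5, 1
a = 2                                  # empirical constant, a in 1..10
hidden = round(math.sqrt(m + n)) + a   # Eq. (3): hidden-node count

random.seed(0)
w1 = [[random.uniform(-1, 1) for _ in range(m)] for _ in range(hidden)]
w2 = [[random.uniform(-1, 1) for _ in range(hidden)] for _ in range(n)]

def forward(x):
    h = [sigmoid(sum(w1[i][j] * x[j] for j in range(m))) for i in range(hidden)]
    o = [sigmoid(sum(w2[k][i] * h[i] for i in range(hidden))) for k in range(n)]
    return h, o

def train_step(x, t, lr=0.5):
    # One gradient-descent step on the squared error of Eq. (2).
    h, o = forward(x)
    delta_o = [(t[k] - o[k]) * o[k] * (1 - o[k]) for k in range(n)]
    delta_h = [h[i] * (1 - h[i]) * sum(delta_o[k] * w2[k][i] for k in range(n))
               for i in range(hidden)]
    for k in range(n):
        for i in range(hidden):
            w2[k][i] += lr * delta_o[k] * h[i]
    for i in range(hidden):
        for j in range(m):
            w1[i][j] += lr * delta_h[i] * x[j]

def mean_error(samples):
    # Eq. (2), averaged over the sample set.
    total = 0.0
    for x, t in samples:
        _, o = forward(x)
        total += 0.5 * sum((t[k] - o[k]) ** 2 for k in range(n))
    return total / len(samples)

# Toy training pairs in the spirit of Table 1 (targets are hypothetical).
samples = [([0.97, 0.95, 0.93, 0.98, 0.95], [0.92]),
           ([0.85, 0.90, 0.82, 0.85, 0.81], [0.66])]
before = mean_error(samples)
for _ in range(2000):
    for x, t in samples:
        train_step(x, t)
assert mean_error(samples) < before  # training reduces the Eq. (2) error
```

The paper's reported training error bound of 0.002 refers to its own data and MATLAB setup; this sketch only shows the update rule structure.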
3 Experimental Design of TQ Evaluation System Based on AI
3.1 Data Acquisition
This paper selected 18 teachers as test subjects, including 3 with senior professional titles, 8 with intermediate titles and 7 with junior titles. A team of 5 experts made an objective evaluation of each teacher's TQ indexes at all levels. The input samples of the neural network are normalized: since the sigmoid function is used, the inputs should be normalized into [0, 1]. The normalized data are shown in Table 1.
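The scaling into [0, 1] can be sketched as follows; the raw 100-point scale assumed here is an illustration, since the paper does not state the raw score range:

```python
def min_max_normalize(scores, lo=0.0, hi=100.0):
    # Map raw expert scores from [lo, hi] into [0, 1] for the sigmoid network.
    return [(s - lo) / (hi - lo) for s in scores]

raw_x1 = [97, 85, 88, 98, 93, 85]  # hypothetical raw scores behind column X1
print(min_max_normalize(raw_x1))   # → [0.97, 0.85, 0.88, 0.98, 0.93, 0.85]
```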
Table 1. Data after normalization processing

Sample | X1   | X2   | X3   | X4   | X5
1      | 0.97 | 0.95 | 0.93 | 0.98 | 0.95
2      | 0.85 | 0.90 | 0.82 | 0.85 | 0.81
3      | 0.88 | 0.94 | 0.85 | 0.88 | 0.97
4      | 0.98 | 0.96 | 0.94 | 0.96 | 0.96
5      | 0.93 | 0.94 | 0.85 | 0.87 | 0.86
6      | 0.85 | 0.93 | 0.88 | 0.85 | 0.97

3.2 System Development Environment
The system designed in this paper is built on the Windows platform, with VC 6.0 as the main programming tool to realize the basic framework of the system and SQL Server for the database programming. The student-side and teacher-side applications are connected to the network using a C/S (client/server) structure. MATLAB is used to carry out the simulation experiments on TQ evaluation.
4 Discussion of Experimental Results of TQ Evaluation System Based on AI
4.1 Simulation Results and Discussion
The last three sets of the original data were used to simulate the established model: the three sets were input to obtain the network output values, and the network outputs were compared with the expert evaluations. The simulation results are shown in Table 2 and Fig. 1.

Table 2. Simulation results

Sample            | 4      | 5       | 6
Expert evaluation | 0.916  | 0.856   | 0.668
Network output    | 0.9377 | 0.5492  | 0.9317
Error             | 0.0073 | −0.0072 | −0.00157
Fig. 1. Simulation results (expert evaluation, network output and error for test samples 4–6)
The BP network model can basically replace the expert's grasp of each weight and judges the score of each indicator with accurate "expert thinking". A TQ evaluation model for higher vocational colleges (HVC) based on a BP neural network has thus been successfully established. This model overcomes the complexity of traditional evaluation of teachers' work; it is convenient, accurate, reliable and fast, and its simulation results and identification accuracy fit reality better. In addition, the model makes it easy to update the weights dynamically, reflecting changes in the weight of each evaluation standard in time. If you are not satisfied with the given learning samples, or find that the test results differ too much, you can simply delete the unsatisfactory samples or input
new samples to train again and evaluate with the newly obtained weights. Moreover, it can evaluate not only a teacher's comprehensive situation but also a single factor, which improves the scientific rigor, validity and efficiency of the evaluation. The empirical research shows that the BP neural network can effectively resolve the dilemma of TQ evaluation in HVC, meet the requirements of TQ evaluation, and make up for the deficiencies of traditional TQ evaluation. It is a method that can reasonably predict and effectively control TQ.
4.2 Analysis and Discussion of System Test Results
At present, the design and implementation of the student-side system has been completed. It is intended as an intelligent teaching system that can run on a single machine, providing a friendly and convenient interface to promote students' independent learning. Meanwhile, experimental data on students' cognitive ability can be obtained through exercises, so as to understand students' different learning situations and provide personalized guidance. Test subjects were invited for speech training to help the system recognize defined phrases. After many rounds of training, the test results are shown in Table 3 and Fig. 2:
Table 3. Test results
Command type              | Number of commands | Correctly recognized | Accuracy
Single command            | 30                 | 27                   | 0.90
Compound command          | 25                 | 21                   | 0.84
Multiple compound command | 30                 | 22                   | 0.73
Total                     | 85                 | 70                   | 0.82

Fig. 2. System test results (number of commands and correct recognition count by command type)
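The accuracy figures in Table 3 follow directly from the counts; a small reproduction sketch:

```python
# Command counts from Table 3: (issued, correctly recognized).
results = {
    "single": (30, 27),
    "compound": (25, 21),
    "multiple compound": (30, 22),
}

def accuracy(issued, correct):
    # Fraction of issued commands that were recognized, to two decimals.
    return round(correct / issued, 2)

rates = {name: accuracy(n, c) for name, (n, c) in results.items()}
total_issued = sum(n for n, _ in results.values())
total_correct = sum(c for _, c in results.values())

print(rates)                                  # {'single': 0.9, 'compound': 0.84, 'multiple compound': 0.73}
print(accuracy(total_issued, total_correct))  # 0.82
```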
As can be seen from the test results, after training, recognition of single commands is relatively high because the phrases are predefined. Recognition of multiple compound commands relies on combining recognized defined phrases, so its recognition rate decreases. For this system, defining commands in terms of phrases is feasible. In short, the student-side system is basically implemented and reasonably stable.
5 Conclusion
Education is the foundation of any hundred-year plan. Through empirical and theoretical research, this paper analyzes the current situation of TQ evaluation in HVC, puts forward an index system for TQ evaluation in HVC, and introduces the relevant characteristics of the BP neural network and the MATLAB toolbox, identifying how the BP neural network addresses the plight of TQ evaluation. This has theoretical and applied value for improving the evaluation and control of school TQ. As this paper shows, applying a BP network based on the MATLAB neural network toolbox to modeling TQ evaluation in HVC can largely solve the subjectivity of index weights and make the evaluation process better reflect "expert thinking". In particular, when students evaluate TQ, deviations in the evaluation process can be better corrected. The HVC TQ evaluation model based on a neural network gives full play to the advantages of neural networks and is a new method of TQ evaluation and control.
Acknowledgement. Information Assurance Scientific Research Project of the Shaanxi Provincial Department of Education in 2020: Research and Practice of Education Statistics Platform Construction Based on a Teaching Quality Assurance Construction System, Item No. 20JX002.
References
1. Zhang, X., Wang, J., Zhang, H., et al.: A heterogeneous linguistic MAGDM framework to classroom teaching quality evaluation. Eurasia J. Math. Technol. Educ. 13(8), 4929–4956 (2017)
2. Mao-Hua, S., Yuan-Gang, L., Bing, H.: Study on a quality evaluation method for college English classroom teaching. Fut. Internet 9(3), 41 (2017)
3. Wang, B., Wang, J., Hu, G.: College English classroom teaching evaluation based on particle swarm optimization - extreme learning machine model. Int. J. Emerg. Technol. Learn. 12(5), 82 (2017)
4. Esmael, S.: Teaching quality evaluation: online vs. manually, facts and myths. J. Inf. Technol. Educ. Innov. Pract. 16(1), 277–290 (2017)
5. Peng, Q.: Optimization of physical education and teaching quality management based on BP neural network. Boletin Tecnico/Tech. Bull. 55(7), 643–649 (2017)
6. Jeavons, A.: What is artificial intelligence? Research World 2017(65), 75 (2017)
7. Bundy, A.: Preparing for the future of artificial intelligence. AI Soc. 32(2), 285–287 (2017)
8. Lu, H., Li, Y., Chen, M., et al.: Brain intelligence: go beyond artificial intelligence. Mob. Netw. Appl. 23(2), 368–375 (2017)
9. Moravčík, M., Schmid, M., Burch, N., et al.: DeepStack: expert-level artificial intelligence in heads-up no-limit poker. Science 356(6337), 508 (2017)
10. Makridakis, S.: The forthcoming artificial intelligence (AI) revolution: its impact on society and firms. Futures 90, 46–60 (2017)
University Professional Talents Training on Student Employment and School Quality in the Big Data Era Zhaojun Pang(&) Xi’an Fanyi University, Xi’an 710105, Shaanxi, China [email protected]
Abstract. With the expansion of higher education in China, the employment situation of college students is becoming increasingly severe, and the basic way to improve both the employment situation and the quality of employment is to further improve the quality of personnel training. Against the background of the big data era, college teaching in China has also undergone some reforms. The purpose of this article is to study the impact of professional talent training in colleges and universities on student employment and school quality in the era of big data. To this end, the article takes the university undergraduate talent training model as the comparative research object and uses literature research, questionnaires, comparative analysis, mathematical statistics and other methods to grasp the current situation, similarities and differences of the undergraduate talent training model in ordinary universities under the background of big data, as well as the advantages and disadvantages of the current university teaching model. Taking the current situation of university professional education reform as the starting point, and arguing that the application of modern educational technology is necessary to deepen that reform, the article proposes several countermeasures and suggestions for applying modern educational technology to deepen the reform of university professional education. The experimental results show that, in the context of the rapid development of the big data era, the popularization and improvement of college education needs new ideas and innovation, and requires every teacher and student to think actively and take on challenges.
Keywords: Big data · Professional talent training in colleges and universities · Student employment · School quality
© The Editor(s) (if applicable) and The Author(s), under exclusive license to Springer Nature Singapore Pte Ltd. 2021. M. Atiquzzaman et al. (Eds.): BDCPS 2020, AISC 1303, pp. 156–163, 2021. https://doi.org/10.1007/978-981-33-4572-0_23

1 Introduction
Big data has changed society and the times, and it is destined to change the current overall situation of education development and teaching modes; college education in particular must respond positively to the development of the times [1]. Especially for professional education and teaching in colleges and universities, the cultivation of relevant talents directly affects the development of the times. In order to further meet the comprehensive and specific
needs of the development of the times and social progress, it is necessary to recognize objectively the role and value of big data in this process, and then actively apply relevant measures to reform teaching, comprehensively strengthen and improve teaching quality, teaching effectiveness and talent training effectiveness, and optimize the final teaching effect. College professional education needs to make active and effective changes in the context of big data [2, 3]. At this stage especially, "big data" is developing very rapidly and its role and value are relatively obvious; in concrete teaching and related operations, it is necessary to actively construct an information platform and form a good online teaching management mode, improving the actual and final effect of management and teaching work [4, 5]. The management of teaching activities now needs to become gradually diversified and comprehensive, especially in practical teaching; the value and significance of related technologies must be fully recognized and effectively put into practice, thereby improving the final management and teaching effects [6]. The training of professional talents in colleges and universities is an important part of China's education and teaching and an important path to talent [7]. Many scholars currently engaged in this area have put forward views on this series of issues.
To address the impact of professional talent training in colleges and universities on student employment and school quality in the era of big data, several measures should be taken in parallel: build a system of professional program setting and talent demand forecasting; reform program setting to coordinate the relationship between professional education and general education; implement a diversified talent training strategy; accelerate the marketization of talent training; and build and improve a university self-discipline mechanism [8, 9]. In the reform of the education and teaching model of colleges and universities, the work of training professional talents must be actively integrated into the overall situation of education reform and development, together with the development goals, paths and power structure of professional education reform; it can play a supporting role and does not necessarily need to be a separate project. The overall goal is to promote the all-round development of human beings and the overall progress of society [10].
2 Method
2.1 Use Big Data as a Teaching Background to Speed up the Construction of a Professional Education Network Platform in Colleges and Universities
Internet technologies such as big data and cloud computing are used to continuously enrich the professional education network teaching platform, thereby changing the backward state of professional education and teaching. On this basis, teaching shifts from teacher-centered to student-centered, giving full play to students' own learning. At the same time, computers and networked multimedia are used to create learning mechanisms, and the platform's feedback functions enable real-time interactive communication
between teachers and students, accelerating the transformation of traditional professional teaching methods. The multimedia network is further used to speed up resource construction, enrich professional teaching content, and interconnect with other schools to share high-quality resources. When designing professional online teaching content, a wealth of templates and teaching resources should be created and website construction made easy, so as to cover most of the professional subject databases, courseware libraries and other content. Second, the network should be fully used for dynamic management of professional network teaching. On this basis, professional online teaching courseware should be fully developed, high-quality professional online teaching courses established, and professional materials and network management well organized.
2.2 Change the Traditional Teaching Mode and Skillfully Use the New Era Network Teaching Methods
In response to the rapid development of big data technology and the needs of professional teaching in colleges and universities, current professional online teaching methods should be fully transformed and the rich network used to enable students' independent learning. Specifically, everyone from network media managers to class teachers should give full play to the reforms that network technology brings to professional teaching and change their teaching concepts. At the same time, professional teaching managers and teachers should fully promote the use of the network platform and design targeted professional education and teaching methods for it, promoting students' independent learning and thereby changing the way they learn. In addition, attention should be paid to cultivating students' interactive practical ability, giving full play to the role of the network platform and promoting the effective development of students' professional activities. In many schools the construction of professional online teaching platforms is driven by teaching evaluation and declaration requirements; as a result, the platforms do not suit students, and the effect of professional teaching has not improved further. Furthermore, real-time resource exchange between universities can enable high-quality professional teaching resources to be shared through a unified platform.
3 Experiment
3.1 Experimental Research Objects
To analyze in depth the impact of reforming the professional teaching model under the background of big data, this article chooses two classes in a university for the experiment: an online class and a control class, each with 40 students, 80 students in total. After a stage of teaching and learning, knowledge tests are given to both classes, and online teaching is conducted for the students in the online class. Questionnaires on the impact of big data in teaching are administered in order to solve some problems
existing in the current practical teaching mode. This research surveys and studies junior college students.
3.2 Experimental Research Design
This research focuses on practical teaching in the two classes. The online class adopts the new method of reforming the current professional teaching model under the background of big data, while the control class adopts the traditional professional teaching model. After the practical teaching is completed, the knowledge mastery of the two classes is compared and analyzed. Then the "Questionnaire on the Effect of College Students' Practice Teaching" is issued to the students; this survey targets the students' practical teaching courses and related links, and investigates the status quo of the college professional teaching mode. Finally, three months after the students graduate from the senior year, a questionnaire survey on the employment rate is conducted for comprehensive analysis.
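As a supplementary sketch (not part of the authors' procedure), a pooled two-proportion z statistic is one way an employment-rate gap between two classes of 40 could be checked; the counts below are hypothetical, back-computed from rates of 80% and 92.5%:

```python
import math

def two_proportion_z(success_a, n_a, success_b, n_b):
    # Pooled two-proportion z statistic for comparing two rates.
    p_a, p_b = success_a / n_a, success_b / n_b
    pooled = (success_a + success_b) / (n_a + n_b)
    se = math.sqrt(pooled * (1 - pooled) * (1 / n_a + 1 / n_b))
    return (p_b - p_a) / se

# Hypothetical counts: 32/40 employed (80%) vs. 37/40 employed (92.5%).
z = two_proportion_z(32, 40, 37, 40)
print(round(z, 2))  # 1.62 (below the 1.96 two-sided threshold at the 5% level)
```

With samples this small, a gap of this size would not by itself be statistically conclusive, which is why the questionnaire evidence matters alongside the rates.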
4 Results

4.1 Experimental Investigation and Analysis Results
Table 1. The impact of professional education in colleges and universities on talent cultivation, employment and school quality

                                 Class quality   After-school quality   Learning interest   Education quality   Employment rate
Control class                    General         General                General             General             80%
Network class                    Better          Better                 Good                Good                92.5%
Online class's view on reform    Good            Good                   Good                Excellent           Good
It can be seen from Table 1 that we visited and surveyed the students of the two classes separately and then classified the results. We found that the classroom atmosphere and after-school tutoring of the online class are much better, so its overall school-running quality is high and its employment rate is high. Finally, we separately asked the online class for its opinion on the reform and concluded that the reform arouses students' enthusiasm, producing twice the result with half the effort. The classroom atmosphere of teaching refers to a specific situation that develops during the teaching process and is a concrete manifestation of the teaching effect. Comparing the data in Fig. 1 and Fig. 2, the survey results on the professional education classroom atmosphere show that the enthusiasm of the control class is far less than
Z. Pang
Fig. 1. Evaluation of the active atmosphere of physical education in online classes (pie chart with categories Very active, Active, General, Not active; shares 66.31%, 25.41%, 6.30%, 1.98%)

Fig. 2. Evaluation of the active atmosphere of physical education in the control class (pie chart with categories Very active, Active, General, Not active; shares 36.10%, 31.70%, 21.20%, 11%)
that of the online class. We visited and surveyed the students of the two classes separately and then classified the findings. We found that the networked teaching mode adopted by the online class can quickly integrate students into the current teaching mode, performing much better than the traditional professional learning model.

4.2 Prospects for the Cultivation of Professional Talents in Colleges and Universities in the Era of Big Data
Should universities implement general education or professional education? This has been a topic of constant debate since the establishment of the modern university. In our colleges and universities, this is reflected more directly in education oriented to social
needs and career orientation. Although the educational philosophy of cultivating ‘‘general talents’’ is more in line with the most fundamental value goals of universities, in the modern era the division of labor in disciplines and society has become highly refined, and graduates face huge employment pressure. Colleges and universities recruit students by major, and industry and society select and use talents by major, so the replacement of general education by professional education is an indisputable fact. In essence, the main structural contradiction in the employment of college students is the contradiction between the supply of talents by universities and the demand for talents in society. Because of the dialectical relationship between the two, they are opposed, unified, and interactive at the same time. Therefore, we believe this contradiction is objectively inevitable and, in a sense, difficult to avoid completely at any time; yet it is also controllable and can be reduced through effort. The rich and complex connotations of university talent supply (training) and social talent demand are, within the scope of this article, examined mainly from the ‘‘professional’’ viewpoint of talent supply and demand. In the real social situation, the professional structural contradiction in college students' employment mainly manifests as ‘‘jobs waiting for people’’ alongside ‘‘people waiting for jobs’’. It is also reflected in the large differences in employment rates and rates of major-matched employment among graduates of different majors. When the supply of a major exceeds demand, it is difficult for its graduates to find jobs, and both the employment rate and the major-matched rate are low; when a major is in short supply, the employment situation of its graduates is the opposite.
Under the huge employment pressure, the government and colleges and universities hope that graduates will ‘‘get jobs first, then choose jobs.’’

4.3 Research on the Teaching Process
In addition, with the continuous development of science and technology and the Internet, students can now access more and more material, so the demand for learning about computers and related content is growing stronger. A problem that deserves attention in today's teaching is how to stimulate students' thinking. The actual tasks of teaching and learning require choosing a case according to the situation and finally completing the task of learning the knowledge. When enlightening students, they should be asked to compare different methods and choose the one that suits the actual application. For example, as the Internet develops, networks must be upgraded and quality-checked to meet the needs of different users. Although a database can meet many needs, certain problems arise when the number of online users is large, so databases must be judged by their response speed. Different databases have their own advantages, so in actual applications one should compare them and choose the more suitable one according to the actual situation. For example, Weibo holds a great deal of information; keyword search can be used to retrieve and store this information. This approach mainly uses a relational database, and users' queries are likewise realized through keywords. In this process, relational databases can make full use of their own advantages.
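The keyword-based retrieval described above can be sketched with a relational database. Below is a minimal illustration; the `posts` table, its schema, and the sample rows are invented for demonstration and are not part of this study:

```python
import sqlite3

# Hypothetical schema: a table of microblog-style posts queried by keyword,
# as described above for relational keyword search and storage.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE posts (id INTEGER PRIMARY KEY, content TEXT)")
conn.executemany(
    "INSERT INTO posts (content) VALUES (?)",
    [("big data teaching reform",),
     ("campus sports meeting notice",),
     ("big data and student employment",)],
)
# Keyword query: retrieve every post whose text contains the search term
rows = conn.execute(
    "SELECT id, content FROM posts WHERE content LIKE ?", ("%big data%",)
).fetchall()
```

Here the `LIKE` pattern plays the role of the keyword; a production system handling large volumes of posts would typically add a full-text index rather than scanning with `LIKE`.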
4.4 Thinking About Changes in the Era of Big Data
Big data processing tools are very powerful: they can process the most data in the shortest time while managing data reasonably. Generally, big data is characterized by large volume, high velocity, and many types, and can be divided into structured, semi-structured, and unstructured data. These characteristics are precisely what distinguish big data from traditional data. With the continuous development of Internet technology, the application scope of big data keeps widening. Its reasonable application can further improve the efficiency and quality of data processing, bringing great convenience to people's life and work. In the era of big data, the use of data has played a huge role in transforming social structures, lifestyles, and ways of thinking, and the value of data is increasingly recognized by the public. However, the widespread use of data inevitably brings some negative impacts, causing huge changes in traditional industries, that is, in the ways of thinking within those industries. At the same time, this is a huge opportunity and challenge: to seize this opportunity is to grasp this era. So what should we do? This is a question worth pondering. We stand at the forefront of the next era, where a single stumble could end in disaster, so we should change gradually. First of all, change is inevitable: those who conform to the times will thrive, and those who cling to old rules will fall behind. But reforms must not be too drastic. We are still in the transition between old and new eras, and excessive reform would capsize this swaying ship in the waves. We should first run pilot programs and then observe the results; the best way is to open controlled experimental classes for some majors in the school.
It is best to start from the freshman year and continue observation after graduation and employment, and then gradually extend the reform. In the early days of the big data era, the collision with traditional industries was not serious; as the era progresses, huge changes will inevitably occur. Therefore, after the first batch of pilots ends, we should learn from them and carry out reforms that benefit the entire school. Because professional education still has advantages over general education, professionalism should be strengthened; with the development of the times, all walks of life will see new developments. When new industries appear, old ones will leave: this is normal, and the majors of colleges and universities will also change in the future. All this is expected, and as technology develops, big data will connect ever more deeply with every industry until, finally, each is embedded in the other.
5 Conclusion

In the era of big data, teaching in colleges and universities not only gains richer content but also sees its traditional methods and concepts change, which is significant for both teachers and students. Therefore, teaching should be people-oriented, and personalized employment guidance should be strengthened. If employment guidance is identical for everyone, its results are often unsatisfactory. Employment guidance should be
‘‘people-oriented’’ and follow the principles of universality and individualization: help students design suitable career goals, provide graduates with detailed employment consultation, and help them find employment. All in all, over recent years big data teaching has received more and more attention from schools. However, because this method has been in use for a relatively short time and students in different colleges have different foundations, some problems remain in practical application, which affects actual teaching. Therefore, to further improve the efficiency and quality of big data teaching, universities must clarify their own talent training goals and use these methods to carry out teaching work, so as to cultivate more talents for the development of society and the country.

Acknowledgement. This work has been funded by Xi'an Translation Institute and the Shaanxi higher education teaching reform research project: research and practice of the ‘‘enrollment training employment’’ linkage mechanism of private colleges and universities, Project No.: 19bz063.
References

1. Xu, W., Zhou, H., Cheng, N., et al.: Internet of vehicles in big data era. IEEE/CAA J. Automatica Sinica 5(1), 19–35 (2018)
2. Wang, Y., Kung, L.A., Byrd, T.A.: Big data analytics: understanding its capabilities and potential benefits for healthcare organizations. Technol. Forecast. Soc. Change 126, 3–13 (2018)
3. Wang, X., Zhang, Y., Leung, V.C.M., et al.: D2D big data: content deliveries over wireless device-to-device sharing in large scale mobile networks. IEEE Wirel. Commun. 25(1), 32–38 (2018)
4. Liu, B.Y., Xu, S.W.: Research on the integration of national defense education, college enrollment and talent training in colleges and universities. Educ. Teach. Forum (19), 47–49 (2018)
5. Rosan, M., Erin, R., Emily, P.: Student employment as a high-impact practice in academic libraries: a systematic review. J. Acad. Librarianship 44(3), 352–373 (2018)
6. Grimm, K.L.: Prelicensure employment and student nurse self-efficacy. J. Nurses Prof. Dev. 34(2), 60–66 (2018)
7. Kocsis, Z., Pusztai, G.: Student employment as a possible factor of dropout. Acta Polytechnica Hungarica 17(4), 183–199 (2020)
8. Ramamurthy, S., Sedgley, N.: A note on school quality, educational attainment and the wage gap. East. Econ. J. 45(3), 415–421 (2019)
9. Breazeale, G., Webb, M.D., Rohe, W.M.: Evaluating school quality of housing choice voucher recipients. Southeast. Geogr. 60(2), 100–120 (2020)
10. Mishura, A.V., Shiltsin, E.A., Busygin, S.V.: Social aspects of impact of school quality on housing prices in regional centre of Russia. Voprosy Ekonomiki 7, 52–72 (2019)
The Security Early Warning System of College Students' Network Ideology in the Big Data Era

Wei Han(&)

Xi'an Fanyi University, Xi'an 710105, Shaanxi, China
[email protected]
Abstract. Under the rapid development of the big data era, the ideological work of college students, as an extremely important task, has always been a top priority and has received great attention from the Party. Against the background of the big data era, this paper takes as its subject in-depth research and discussion of the security early warning system for college students' network ideology. The paper first sorts out related concepts such as network ideology, network ideological security, and network security early warning systems as they concern college students in the big data era, and establishes an understanding of the important role college students play in maintaining network ideological security. Secondly, by investigating the current situation of the security early warning system for Chinese college students' network ideology, it identifies the problems faced, analyzes their causes, and optimizes the system. Through this research, the ideological security of college students can be better maintained. The results show that this research is of great significance for maintaining the ideological security of college students and developing network security ideological work in universities in the current big data era. Optimizing the security early warning system of college students' network ideology is a long-term process that requires continuous innovation in practice.

Keywords: Big data · Network security · Network awareness · Security awareness · Early warning
1 Introduction

As an arduous task of the new era, maintaining the security of college students' network ideology plays a very important role. How to strengthen personal ideals and beliefs under the malicious infiltration of Western culture and the continuous interweaving of multiple cultures, and how to enhance college students' own concept of network ideological security [1, 2], is of great significance for maintaining social stability and unity and leading the correct value orientation [3]. From the important role of college students' cyber ideological security described above, and from a partial account of the current threats, it is not difficult to find that as the country grows stronger, the cyber ideological threats faced by college students have become more and more intense, and their forms and methods more diverse [4]. From a certain point of view, cyber ideological security issues are more likely than military security issues to cause continuous and far-reaching damage. With the passage of time and the expanding influence of globalization, the connotation of college students' network ideological security will become ever more complex and diverse [5]. The early warning system plays a very important role in actual operation. The academic community mainly divides the functions of an early warning system into two parts: early warning analysis and pre-control countermeasures [6]. The early warning analysis function is realized through early warning monitoring, early warning information collection and management, construction of an early warning evaluation system, and early warning evaluation. Early warning monitoring conducts comprehensive and systematic monitoring of the environment and dynamic behavior of the target event. The collection and management of early warning information mainly sorts, filters, identifies, and stores the collected information [7]. The early warning evaluation system conducts comprehensive early warning through the determination of early warning rules and methods. The early warning signal in early warning evaluation is also a common form of warning in daily life; people usually judge the severity of the target event according to the warning level. If early warning analysis is the basis on which the early warning system realizes its functions, then pre-control countermeasures are the goal the system should achieve [8].

© The Editor(s) (if applicable) and The Author(s), under exclusive license to Springer Nature Singapore Pte Ltd. 2021 M. Atiquzzaman et al. (Eds.): BDCPS 2020, AISC 1303, pp. 164–171, 2021. https://doi.org/10.1007/978-981-33-4572-0_24
Therefore, ensuring the security of network ideology plays an important role in maintaining the stability of state power [9]. By analyzing influencing factors such as the cause, course, and result of cyber ideological security incidents, together with the characteristics of such incidents, and combining a semantic knowledge base for cyber ideological security with objective network data, evaluation indicators can be determined and a more scientific and reasonable early warning indicator system constructed, providing early warning of the level of network ideological security events. The indicator system is divided into two levels. The first-level indicators are event heat, media, participants, geographic location, scope of influence, and degree of influence. The selection of second-level indicators fully considers observable objective data characteristics, reflecting more comprehensively the evolution trend and scope of influence of cyber ideological security incidents. With the popularization and development of the Internet, hostile forces use the Internet platform to instill the values of Western capitalist countries in our people, and cyberspace has become the main battlefield of ideological struggle [10].
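To make the two-level idea concrete, the six first-level indicators named above could be aggregated into a composite warning score. The weights, sample readings, and level thresholds below are hypothetical placeholders for illustration only, not values determined in this study:

```python
# Hypothetical first-level indicator readings, each normalized to [0, 1]
indicators = {"event_heat": 0.8, "media": 0.6, "participants": 0.9,
              "geography": 0.4, "scope": 0.5, "influence": 0.9}
# Placeholder weights (in practice determined by, e.g., AHP); they sum to 1
weights = {"event_heat": 0.25, "media": 0.15, "participants": 0.20,
           "geography": 0.10, "scope": 0.10, "influence": 0.20}
# Weighted sum gives the composite score; thresholds map it to a warning level
score = sum(weights[k] * indicators[k] for k in indicators)
level = "heavy" if score >= 0.7 else ("medium" if score >= 0.4 else "light")
```

The design choice here is the usual one for multi-indicator early warning: normalize each indicator, weight it, and compare the weighted sum against graded thresholds to emit a warning level.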
2 Method

2.1 Early Warning Indicator System
There are m early warning indicators, each supplying n original data points; the early warning indicator system processes these data to obtain a standard matrix:

T = (r_{ij})_{m \times n}  (1)

Among the m indicators, the entropy of the i-th indicator is defined as:

E_i = -\frac{1}{\ln n} \sum_{j=1}^{n} l_{ij} \ln l_{ij}, \quad i = 1, 2, \ldots, m  (2)

where:

l_{ij} = \frac{r_{ij}}{\sum_{j=1}^{n} r_{ij}}  (3)

W_i = \frac{1 - E_i}{\sum_{i=1}^{m} (1 - E_i)}, \quad 0 \le W_i \le 1  (4)

For the safety accident early warning indicators corresponding to the heavy, medium, and light alarm states, dimensionless processing is carried out via Eq. (1) to obtain the dimensionless value x_i; the composite value is then obtained by weighting each indicator's quantitative value and summing:

D = \sum_{i=1}^{m} W_i x_i  (5)

R(t) = P\{x(t) < [x]\}  (6)

P_{FR}(t) = \frac{u(t)}{[u]}, \qquad P_{RSD}(t) = 1 - \frac{u(t)}{[u]}  (7)

t_r - t_1 = \frac{\frac{1}{2}(t_2 - t_1)^2}{(t_2 - t_1) - \frac{1}{2}(t_3 - t_1)}  (8)
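As a concrete illustration, the entropy weighting of Eqs. (2)–(5) can be sketched in Python. The indicator matrix and alarm-state values below are made-up placeholders, not data from this study:

```python
import numpy as np

def entropy_weights(R):
    """Entropy weights W_i for m indicators from an (n, m) data matrix R with entries > 0."""
    n, m = R.shape
    L = R / R.sum(axis=0)                          # shares l_ij, Eq. (3)
    E = -(L * np.log(L)).sum(axis=0) / np.log(n)   # entropy E_i per indicator, Eq. (2)
    return (1 - E) / (1 - E).sum()                 # weights W_i, Eq. (4)

# Made-up example: n = 4 observations of m = 3 early warning indicators
R = np.array([[0.2, 0.9, 0.4],
              [0.4, 0.8, 0.5],
              [0.6, 0.7, 0.9],
              [0.8, 0.6, 0.3]])
W = entropy_weights(R)
x = np.array([0.5, 0.3, 0.7])   # dimensionless alarm-state values x_i
D = float(W @ x)                # composite warning value D, Eq. (5)
```

An indicator whose values are nearly uniform across observations has high entropy and therefore receives a small weight, which matches the intent of Eq. (4).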
2.2 Enhancing the Cognition of Contemporary College Students' Ideological Safety Early Warning System
Under the big data environment, college students' ideological security faces various challenges. Ideological discourse power is not mastered in a calm environment; rather, it is won through continuous innovation and development in the struggle with various wrong ideas, thereby strengthening college students' awareness of ideological safety. The ability to conduct ideological safety education for college students is the core element in maintaining mainstream ideological security. Only by establishing a scientific and innovative education system and continuously improving this ability can colleges hold the right to speak in the ideological safety struggle. Innovating the methods of college students' ideological safety education also enhances this discourse power: discourse is the carrier of ideology, and the effective expression of discourse requires a suitable carrier.

2.3 Relying on Big Data to Realize Safety Education of College Students' Safety Ideology
In the face of endless cyber ideological security crises, people should seize the opportunities brought by the development of science and technology and rely on big data technology to promote precise governance of cyber ideological security. The key to improving the precision of network ideological security governance with big data is to implement a big data strategy, establish big data thinking, and make good use of big data technology. Only through innovation in ideological discourse expression and a close connection between ideological safety education and real life can such education be understood and accepted by college students more widely. On the basis of ensuring the dominant position of mainstream ideology, ideological safety education for college students must grasp the profound changes in the public opinion environment and communication methods of the new era, accelerate the integration of traditional media and new online media, and guide college students to recognize and accept mainstream ideology.
3 Experiment

3.1 Experimental Investigation Objects
In order to analyze in more depth the security early warning system of college students' network ideology in the current big data era, this article selected 100 students from the University of Political Science and Law, divided into two teaching classes, for a two-hour themed lecture. After the lecture, a questionnaire was administered to them to further understand contemporary college students' awareness of network security. On the basis of comprehensively combing the participants' responses on the research theme of this article, the article analyzes the problems and causes in network ideological security education based on big data technology, and then attempts to use some characteristics of big data to explore countermeasures for cyber ideological security education. The research objects of this article are 100 sophomores from the University of Political Science and Law.
Table 1. Questionnaire statistics

              Valid questionnaires   Missing questionnaires   Total
Number        97                     3                        100
Proportion    97%                    3%
3.2 Experimental Research Design
In this survey, 100 questionnaires were distributed to the students after the lecture; the questionnaires were compiled around the themes of school community building and professional practice skills building. Of the 100 questionnaires distributed, 97 valid questionnaires were returned, a recovery rate of 97%. Secondly, from the perspective of university ideological work, we examine the current status of the construction and function of the college students' network ideology early warning system, objectively analyze its existing problems, explore their causes, draw on the experience accumulated in constructing such systems since the start of the big data era and on related foreign research, and put forward countermeasures to strengthen the construction and function of the system from the perspective of college ideological work (Table 1).
4 Results

4.1 Analysis of Experimental Research Findings
Fig. 1. Educational approaches to bring students' online ideology in the era of big data (pie chart with categories Always, Often, Occasionally, Rarely, Never; Often 36.59%, Never 11.30%, with the remaining shares 13.70%, 17.21%, and 22.20% across Always, Occasionally, and Rarely)
As shown in Fig. 1, the rapid development of the big data era has expanded the important ways for college students to form their network security ideology, and brought important changes to the construction of college students’ network security
early warning system. In the survey question ‘‘Do you often learn about current domestic network security construction methods?’’, the 36.59% of respondents who often learn about it rank first. We can clearly see from the figure that, except for the 11.3% of students who never pay attention, the other 88.7% of students learn about the current security system of college students' network ideology, which can also effectively help them form their own network security awareness. From face-to-face interviews with them, we learned that decisions should be made on the basis of all data information, not sample data alone. To strengthen holistic thinking in cyber ideological security governance, we should start from the following aspects: first, collect data related to cyber ideological security through multiple channels, establish a large-scale information database, and on this basis accurately study and judge the cyber ideological security situation; second, conduct extensive investigation and research to understand the ideological status and psychological needs of netizens, and provide targeted ideological education to achieve a precise supply of ideological and political education (Fig. 2).
Fig. 2. Will you actively learn network security awareness and establish a safety early warning system (bar chart: 72.13%, 8.15%, 9.82%, 10%)
As can be seen from the data shown in Fig. 2, network security awareness, as a vivid new cultural form, has subtly affected the way college students live and learn, and has a certain influence on their mainstream ideology: 72.13% of the students always browse such content, 8.15% browse frequently, and 9.82% browse occasionally. The results of this survey show that red culture, a cultural form long ignored by college students, has gradually begun to be valued by them after adopting the carrier of Internet culture. Strengthening the governance of ideology can sustain and consolidate the guiding position of Marxism. Big data has brought great changes to people's daily work, life, and ways of thinking. Especially with the popularization and application of the Internet, although mainstream ideology is constantly promoted, people pay more attention to the various social currents of thought online. Therefore, it is necessary to actively optimize and innovate the dissemination and expression mechanisms of mainstream ideology. In actual work, ideological workers need to pay attention to the application of big data technology in order to analyze people's
ideological changes and development trends, and to adjust and update communication mechanisms in a timely manner, so that mainstream ideological communication can penetrate all aspects of people's work and life. In addition, big data technology should be used to conduct scientific analysis and accurate judgment of the web search preferences and personal habits of different groups, to better understand the objects of ideological governance, and to take targeted measures according to the situation of each group.
5 Conclusion

In the current big data era, college students' ideological safety education is ultimately a competition for discourse power. Through innovative discourse expression, optimized discourse dissemination, an improved discourse system, consolidated discourse production, and enriched discourse connotation, the effectiveness of college students' ideological safety education can be brought into full play, and the right to speak in that education can be effectively held, ensuring that it takes the pulse of social reality, reflects the characteristics of the new era, and stays close to the demands of college students. This is the focus and goal of ideological safety education for college students in the era of big data, and formulating response strategies has become an urgent problem. Based on a semantic knowledge base for network ideological security and objective network data, a network ideological security event early warning indicator system is constructed; the analytic hierarchy process is used to determine the weights of the indicator parameters and the warning level corresponding to an event. In the future, we will further improve the early warning indicator system and assist relevant departments in establishing effective event monitoring and early warning mechanisms. Cybersecurity plays a very important role in ideological security. In the Internet era, profound changes have taken place in the traditional field of ideological struggle: the main battlefield of public opinion has shifted from traditional media to the Internet, which has become the main battlefield of an ideological struggle without gunpowder. Strengthening the construction of ideological security in colleges and universities is therefore imperative.

Acknowledgement.
This work was financially supported by Xi’an Fanyi University, Construction project of counselor’s studio of Xi’an Fanyi University “IPE and guidance studio for college students – ideological and theoretical education and value guidance” and University-level research project. Item: “Research on the Early-Warning System of College Students’ Ideology Security”, Project No.: 20B26.
References

1. Xu, W., Zhou, H., Cheng, N., et al.: Internet of vehicles in big data era. IEEE/CAA J. Automatica Sinica 5(1), 19–35 (2018)
2. Shen, B., Choi, T.M., Chan, H.L.: Selling green first or not? A Bayesian analysis with service levels and environmental impact considerations in the big data era. Technol. Forecast. Soc. Change 144, 412–420 (2019)
3. Byeon, G., Hentenryck, P.V.: Unit commitment with gas network awareness. IEEE Trans. Power Syst. 35(2), 1327–1339 (2020)
4. Bai, H., Chen, W., Wang, L., et al.: Naive echo-state-network based services awareness algorithm of software defined optical networks. China Commun. 17(4), 11–18 (2020)
5. Bo, Y., Wang, Y., Wan, Z.: Optimizing the WEEE recovery network associated with environmental protection awareness and government subsidy by nonlinear mixed integer programming. J. Adv. Transp. 2019(12), 1–21 (2019)
6. Lewandowska, I., Drzewicki, A., Wendt, J.A.: Awareness of the Cittaslow network among students in Olsztyn and Gdańsk cities. Polish J. Nat. Sci. 34(4), 559–573 (2019)
7. Manirabona, A., Boudjit, S., Fourati, L.C.: NetBAN, a concept of network of BANs for cooperative communication: energy awareness routing solution. Int. J. Ad Hoc Ubiq. Comput. 28(2), 120–130 (2018)
8. Dagvadorj, A., Kim, H.S.: A study on dessert awareness through semantic network analysis. Culinary Sci. Hosp. Res. 25(8), 62–70 (2019)
9. Hanus, B., Windsor, J.C., Wu, Y.: Definition and multidimensionality of security awareness: close encounters of the second order. ACM SIGMIS Database 49(1), 103–133 (2018)
10. Hanus, B., Windsor, J.C., Wu, Y.: Definition and multidimensionality of security awareness: close encounters of the second order. Data Base Adv. Inf. Syst. 49, 103–132 (2018)
Investigation and Research on the Potential of Resident User Demand Response Based on Big Data

Xiangxiang Liu1, Jie Lu1, Qin Yan2, Zhifu Fan2, and Zhiqiang Hu2

1 Power Supply Service Management Center of State Grid Jiangxi Electric Power Co., Ltd., Nanchang 330001, Jiangxi, China
[email protected]
2 State Grid Jiangxi Electric Power Co., Ltd., Nanchang 330077, Jiangxi, China
Abstract. With the development of the times and the progress of society, the demand response potential of China’s residential users faces unprecedented challenges. In today’s big data era, combining big data technology with the analysis of residential users’ demand response potential has become an inevitable requirement of the times. Therefore, to better align residents’ demand potential with the development trend of the times, this paper uses Internet and big data technology to study the development trend and status quo of Internet business in residential demand analysis and response in recent years. A large amount of information on residential users’ demand analysis and response in the new Internet era is sorted out, and the business fields of residential demand analysis and response are reclassified. An evaluation model of the factors influencing user demand response behavior is established and studied with the Monte Carlo simulation method. It is found that time and price are the main factors influencing the demand response behavior of typical industries. The analysis shows that the accuracy of the big data analysis method proposed in this paper reaches 97.3% in studying residents’ demand response potential.

Keywords: Big data · Residential users · Development of the times · Demand response
1 Introduction

With the gradual spread and application of the concept of big data [1, 2], the technical connotation of big data is constantly changing and expanding. Large-scale data is not only a complex technology in the usual sense, but also a scientific and theoretical capability: the ability to find meaningful correlations among the complex things of the world, to mine and analyze the changing laws of the world and of things, and to accurately judge and predict their future development trends. Data analysis is a way to change human thinking, letting data speak for human beings and making data a basic starting point for influencing human thinking and social behavior decision-making.

© The Editor(s) (if applicable) and The Author(s), under exclusive license to Springer Nature Singapore Pte Ltd. 2021
M. Atiquzzaman et al. (Eds.): BDCPS 2020, AISC 1303, pp. 172–179, 2021. https://doi.org/10.1007/978-981-33-4572-0_25

Involving more users in demand response [3–5] and improving energy efficiency measures is an important step in building the future energy system. It can be predicted that demand response will be a cost-effective measure to balance the output of large-scale intermittent power supplies in the power grid. Therefore, it is very important to understand why consumers are willing to contribute to the electricity market of the future. At this stage, it is essential to use updated and broader knowledge about user load elasticity and willingness to participate in demand response as technical guidance for small and medium-sized demand response. Attention to energy systems [6–8] and utilities in the field of demand response is now common, and there are also many studies on the demand response potential of the industrial sector and of ordinary residential areas. There are many methods of demand response, such as using the load distribution information of different residential users, or conducting experiments on participating households that implement new tariffs and structures for these users. The relationship between household users’ energy use, income and participation in demand response has been discussed in the literature, which points out that price level, house type and climate region affect users’ willingness to participate in demand response. The demand response and preferences of smart grid end users have also been studied previously. This paper mainly studies the survey of residents’ demand response potential based on big data technology [9, 10].
With the development of the times and the progress of science and technology, big data technology has been widely used in all aspects of people’s production and life. In order to better study the potential of residents’ demand response in this era, this paper combines big data technology to analyze the demand response potential of a set of residential users.
2 Evaluation Model of Influencing Factors of User Demand Response Behavior

2.1 Big Data Technology
Big data technology has the 4V characteristics and emphasizes the integration and utilization of data across technical fields and data types. It is based on the new generation of cloud computing and big data technology, involving a series of breakthroughs in software–hardware fusion, from basic theoretical research to practical application. The technological development of China’s power big data is a comprehensive application and innovative development in the new generation of the power industry, including a series of core technical components such as distributed data acquisition and storage, parallel computing, and various new data analysis methods and algorithms. It fully utilizes and absorbs the advanced concepts and technical achievements of the new generation of domestic power system big data and power system cloud computing.
2.2 Methods of Impact Assessment
Suppose the sample observation result C is composed of N independent experiments, with the i-th result denoted c_i, i = 1, 2, ..., N. The distribution of c_i is represented by the random variable θ, and f(θ) is its probability distribution function. Define the event B: when B occurs, the sample observation result is C_B, composed of N_B independent experiments, in which the j-th observation is denoted c_{Bj}, j = 1, 2, ..., N_B. The conditional random variable obeyed by c_{Bj} is denoted θ | B, and f(θ | B) is its probability distribution function. Define the proposition H: B is independent of the random variable θ. If H holds:

$$f(\theta \mid B) = f(\theta) \qquad (1)$$

From Eq. (1), it follows that under H the observations c_{Bj} obey the distribution f(θ). A distance between the empirical distribution of C_B and f(θ) can therefore be selected to represent the credibility of the proposition H; equivalently, the conditional probability f(C_B | θ) characterizes the probabilistic effect of the condition B on θ. We define the distance d_B between the empirical distribution of C_B and f(θ) as

$$d_B = \left\| \frac{1}{N_B} \sum_{j=1}^{N_B} g(y - c_{Bj}) - P(\theta \le y) \right\|_2 \qquad (2)$$

where d_B is the norm of the difference between two functions of the variable y; g(y) = (sgn(y) + 1)/2, with sgn(y) the sign function; and P(θ ≤ y) is the cumulative distribution function of θ. The value of d_B depends on the values of c_{Bj}, so d_B is itself a random variable derived from c_{Bj}.
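As an illustration, Eq. (2) can be approximated numerically by comparing the empirical distribution of a conditional sample with samples of θ. The sketch below is a minimal Monte Carlo check, not the paper’s implementation; the distributions, sample sizes, grid, and the discrete root-mean-square approximation of the norm are all assumptions made for illustration.

```python
import numpy as np

def g(y):
    # g(y) = (sgn(y) + 1) / 2, the step function used in Eq. (2)
    return (np.sign(y) + 1.0) / 2.0

def d_B(c_B, theta_samples, y_grid):
    # Empirical CDF of the conditional sample: (1/N_B) * sum_j g(y - c_Bj)
    emp = g(y_grid[:, None] - c_B[None, :]).mean(axis=1)
    # P(theta <= y), estimated here from unconditional samples of theta
    cdf = g(y_grid[:, None] - theta_samples[None, :]).mean(axis=1)
    # Discrete root-mean-square approximation of the L2 norm in Eq. (2)
    return float(np.sqrt(np.mean((emp - cdf) ** 2)))

rng = np.random.default_rng(0)
theta = rng.normal(0.0, 1.0, 10_000)    # samples of theta
c_same = rng.normal(0.0, 1.0, 500)      # conditional sample with the same law
c_shift = rng.normal(1.0, 1.0, 500)     # conditional sample with a shifted law
y = np.linspace(-5.0, 5.0, 401)

# When B is independent of theta (proposition H holds), d_B stays small;
# a shifted conditional distribution yields a clearly larger d_B.
print(d_B(c_same, theta, y), d_B(c_shift, theta, y))
```

A value of d_B that is large relative to its Monte Carlo distribution under H is then evidence that the condition B does influence θ.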
3 Experimental Background and Design

3.1 Experimental Background
Using the above methods, this paper analyzes the factors influencing residents’ demand response behavior. From July 2018 to December 2019, an analysis and demonstration experiment on power user behavior was carried out in a region of China. The whole experiment is divided into two independent, parallel parts, covering large commercial power users and small residential power users; this paper focuses on the analysis of the experimental data of residential users.

3.2 Experimental Design
The experiment collected statistical data on residential users, comprising two main parts: the results of a household questionnaire survey and energy-use statistics. The questionnaire survey of residential users covers household population structure, household appliances, living habits, income distribution, and attitudes toward energy conservation, emission reduction and environmental protection. According to the questionnaire results, residential users were divided without bias into a control group and an experimental group. The experimental group is further divided into four groups (A, B, C, D), each implementing a different peak–valley time-of-use (TOU) price. Table 1 shows the pricing mechanisms implemented for the control group and the four experimental groups.

Table 1. Participation and tariff

Bill type                 Participants  Night  Daytime  Peak
Stable electricity price  0             12     12       12
Control group             750           12     12       12
Group A                   1523          10     11.5     18
Group B                   566           9      11       24
Group C                   1103          8      10.5     30
Group D                   766           7      10       36
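To make the tariff structure concrete, the sketch below computes a household bill under prices shaped like Table 1 and shows why shifting load out of the peak band pays off more in the steeper experimental groups. The per-kWh price unit and the usage figures are hypothetical; the source does not state the currency or unit.

```python
# Hypothetical tariff table mirroring Table 1 (price units per kWh are
# illustrative assumptions, not values confirmed by the source).
TARIFFS = {
    "control": {"night": 12, "daytime": 12,   "peak": 12},
    "A":       {"night": 10, "daytime": 11.5, "peak": 18},
    "B":       {"night": 9,  "daytime": 11,   "peak": 24},
    "C":       {"night": 8,  "daytime": 10.5, "peak": 30},
    "D":       {"night": 7,  "daytime": 10,   "peak": 36},
}

def bill(group, usage_kwh):
    """Total cost for a household given kWh used in each time band."""
    prices = TARIFFS[group]
    return sum(prices[band] * kwh for band, kwh in usage_kwh.items())

# Shifting 2 kWh from peak to night saves more under group D than group A
usage = {"night": 3.0, "daytime": 5.0, "peak": 2.0}
shifted = {"night": 5.0, "daytime": 5.0, "peak": 0.0}
print(bill("A", usage) - bill("A", shifted))  # 16.0
print(bill("D", usage) - bill("D", shifted))  # 58.0
```

The widening night/peak spread from group A to group D is exactly the incentive the experiment varies between groups.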
4 Discussion

4.1 Analysis of Influencing Factors of Demand Response Behavior
To identify the main direct factors behind changes in the frequency of customers’ electricity consumption behavior, this paper first analyzes the frequency change of each customer’s electricity use behavior in the questionnaire, comprehensively analyzes its direct impact on the customer’s electricity consumption behavior together with climate, and calculates the corresponding influence measure d_B. Finally, by comparing all the direct frequency factors and their corresponding d_B values, the main factors driving the frequency change of electricity consumption behavior in daily life are obtained. The following takes the influence of washing machine use frequency in the first questionnaire as a detailed example. In the first time period of 2019 (17:00–19:00), the experimental groups implemented a peak-hour energy price. For each experimental group, the difference in electricity consumption between 2018 and 2019 in the first time period is used to construct a random variable for that group. According to the questionnaire answers, the users of each experimental group are classified and the conditional event B is selected. At the same time, for the daily maintenance and
Fig. 1. Use the washing machine at least once a day (energy saving in kW·h vs. percentage of resident users (%); curves: θ and θ|B1)
frequency control of household washing machines, users can be divided into two categories; B1 denotes using the machine more than once a day. The experimental results are shown in Fig. 1. They show that, compared with 2018, households who used washing machines more often in 2019 (the experimental stage) saved more on electricity bills during high-price periods than households that did not. It can reasonably be inferred that after the implementation of the TOU tariff mechanism, consumers successfully reduced peak electricity consumption by delaying washing and avoiding the peak-price period. Compared with households that use washing machines infrequently, households that use them often have a stronger demand response capability and reduce power consumption more during peak hours. The analysis shows that the accuracy of the big data analysis method proposed in this paper for the demand response potential of residential users reaches 97.3%. Figure 2 lists the five factors with the greatest impact on residents’ electricity consumption behavior during the peak price period. Similar to the washing machine analysis, other factors covered by the questionnaire can be analyzed further. The results show that households using washing machines, dishwashers, the Internet, laptops and drum dryers are more responsive to demand; after the implementation of the TOU pricing mechanism, their peak-period electricity consumption decreases more. During peak periods, in order to reduce their electricity bills, residents will
Fig. 2. Major factors influencing electricity consumption behavior during peak periods (pie chart over dishwasher, washing machine, Internet, drum dryer and notebook computer, with shares of 29%, 26%, 25%, 15% and 5%)
choose to reduce electricity consumption. This shows that time and price are the main factors affecting the demand response behavior of typical industry users.

4.2 Suggestions on the Potential Survey of Residents’ Demand Response Based on Big Data
With the continuing maturity of big data and cloud computing technology, the collection and acquisition of scientific research data will become more convenient, and the complexity of traditional basic scientific research will be greatly reduced. To a certain extent, this effectively solves complex problems such as research funding, data analysis and research management; it provides researchers and enterprises with more convenient technical support and consultation services, and greatly improves the efficiency and credibility of research results. In the traditional sense, demand response guides power users to change their electricity consumption habits, reduces the peak load of the power grid, and improves its safe and stable operation; it is one of the effective means of demand-side management. With the development of automatic response systems, automatic demand response can make the response process completely independent of manual operation: after receiving the automatic demand response signal, the user’s response program triggers automatically, which greatly improves the real-time performance of automatic demand response and the reliability of the system. The concept of real-time automatic response to client demand was first proposed in the United States. In China, attention to residential demand response has so far been limited for two reasons: on the one hand, the proportion of residential electricity consumption in society as a whole is not high (taking 2014 as an example, only 13.45%); on the other hand, residential electricity prices in China are low. Big data is therefore of great help to research on surveying residents’ demand response potential. Through big data technology, this paper analyzes the demand response potential of residential users, establishes an evaluation model of the factors influencing user demand response behavior, and studies it with the Monte Carlo simulation method. In the future, the proportion of electricity expenditure in household consumption will increase significantly, and residents’ enthusiasm for participating in demand response will be greatly mobilized. With the popularization of electric vehicles combined with distributed generation and energy storage technology, residents can become not only direct consumers of grid power but also direct suppliers: on the premise of meeting their own electricity demand, they can supply power to other grid users and participate in the electricity market. DR-type distributed generation refers to the combination of distributed generation and distributed energy storage technology. At present, the mature distributed generation technologies mainly include distributed wind power and photovoltaic generation.
In countries and regions with abundant wind resources, small wind turbines can be widely installed on the user side of large residential buildings, while photovoltaic generation is largely free from national and regional restrictions and can therefore be widely used on the user side of small residential buildings. When the grid price or the operators’ incentive compensation is higher than the cost of distributed generation, residential user-operators can freely choose to supply power to grid operators, effectively alleviating grid strain and the shortage of power resources for residential users; that is, users can benefit from participating in the power resource market. Distributed generation combined with demand response can effectively smooth the price volatility and intermittency of renewable generation and promote the local utilization and consumption of renewable energy.
5 Conclusions

Based on big data analysis technology, this paper investigates the potential of residents’ demand response. In the big data era, big data technology is widely used in daily production and life. To investigate residents’ demand response potential more efficiently, this paper combines big data technology to analyze the demand response potential of residential users, establishes an evaluation model of the factors influencing user demand response behavior, and studies it with the Monte Carlo simulation method. It is found that time and price are the main factors influencing the demand response behavior of typical industries, which provides a research direction for studying residents’ demand response potential.

Acknowledgements. This work is supported by the Science and Technology Project of State Grid (No. 52182019000J).
References

1. Hashem, I.A.T., Yaqoob, I., Anuar, N.B., et al.: The rise of “big data” on cloud computing: review and open research issues. Inf. Syst. 47, 98–115 (2015)
2. Lv, Y., Duan, Y., Kang, W., et al.: Traffic flow prediction with big data: a deep learning approach. IEEE Trans. Intell. Transp. Syst. 16(2), 865–873 (2015)
3. Ma, K., Yao, T., Yang, J., et al.: Residential power scheduling for demand response in smart grid. Int. J. Electr. Power Energy Syst. 78, 320–325 (2016)
4. Vardakas, J.S., Zorba, N., Verikoukis, C.V.: A survey on demand response programs in smart grids: pricing methods and optimization algorithms. IEEE Commun. Surv. Tutor. 17(1), 152–178 (2015)
5. Wang, Y., Chen, Q., Kang, C., et al.: Load profiling and its application to demand response: a review. Tsinghua Sci. Technol. 20(2), 117–129 (2015)
6. Rogelj, J., Luderer, G., Pietzcker, R.C., et al.: Energy system transformations for limiting end-of-century warming to below 1.5 °C. Nat. Clim. Change 5(6), 519–527 (2015)
7. Seljom, P., Tomasgard, A.: Short-term uncertainty in long-term energy system models—a case study of wind power in Denmark. Energy Econ. 49, 157–167 (2015)
8. Andresen, G.B., Sondergaard, A.A., Greiner, M.: Validation of Danish wind time series from a new global renewable energy atlas for energy system analysis. Energy 93, 1074–1088 (2015)
9. Musso, D., Roche, C., Robin, E., et al.: Potential sexual transmission of Zika virus. Emerg. Infect. Dis. 21(2), 359–361 (2015)
10. Magnuson, J.J., Webster, K.E., Assel, R.A., et al.: Potential effects of climate changes on aquatic systems: Laurentian Great Lakes and Precambrian Shield region. Hydrol. Process. 11(8), 825–871 (1997)
Design and Implementation of Intelligent Control Program for Six Axis Joint Robot

Shuo Ye1 and Lingzhen Sun2

1 Huali College, Guangdong University of Technology, Guangzhou, China
2 Guangzhou Huali Science and Technology Vocational College, Guangzhou, China
[email protected]
Abstract. With the increasingly modern development of industrial manufacturing, the industrial robot has become an indispensable part of the modern automated factory. Intelligent control of robots has become the core of modern industrial robot research, providing technical support for stable robot operation and improved work efficiency. The purpose of this study is to explore the design and implementation of an intelligent control program for a six axis joint robot.

Keywords: Joint robot · Intelligent control · Matlab/Adams joint simulation
1 Introduction

With the development of industry, robots play an important role, so it is important to find effective artificial intelligence control methods for industrial robots, given the importance of intelligent control programs in optimizing the six axis joint robot. Yu proposed fast terminal sliding mode control [1]. Lu designed a global sliding mode controller to ensure the sliding behavior of the motor drive [2]. Liu’s reaching law also plays an important role in eliminating chattering [3]. Kawamura designed a high-gain observer [4]. Ham designed an adaptive controller based on the T-S fuzzy model and used it to control a manipulator with constant load; in addition, he proposed controlling an unknown time-varying load in the manipulator [5]. Although existing research is relatively rich, there are still shortcomings. In this experiment, trajectory tracking control is analyzed by combining two intelligent control methods with sliding mode control, in order to observe the effect of the intelligent control program of the six axis joint robot.
2 Intelligent Control Method of Six Axis Robot

2.1 Control of Six Axis Robot
There are two kinds of joint axes in a six axis industrial robot: rotating joints and translational joints [6]. When a reference coordinate system is determined in space, the position of a
© The Editor(s) (if applicable) and The Author(s), under exclusive license to Springer Nature Singapore Pte Ltd. 2021 M. Atiquzzaman et al. (Eds.): BDCPS 2020, AISC 1303, pp. 180–186, 2021. https://doi.org/10.1007/978-981-33-4572-0_26
point in space can be represented by a position vector. In the reference coordinate system o-xyz, the position of point P can be expressed as follows:

$$p = p_x i + p_y j + p_z k \qquad (1)$$
To describe the posture of a rigid body is to describe its orientation in space. In the global coordinate system Fxyz, the moving coordinate system Fnoa coincides with the origin of the reference coordinate system. The attitude of Fnoa can then be expressed by the direction cosines of its unit vectors n, o, a with respect to the reference coordinate system Fxyz.

2.2 Forward Kinematics Algorithm of Six Axis Robot Based on the D-H Model
The D-H model can be used for any robot configuration, regardless of the order and complexity of the robot structure [7]. When the coordinate systems x_i–z_i and x_{i+1}–z_{i+1} on adjacent joints are determined, a four-step standard transformation takes the coordinate system x_i–z_i to x_{i+1}–z_{i+1}:

(1) Rotate the coordinate system x_i–z_i by θ_{i+1} about the z_i axis so that x_i and x_{i+1} are parallel. Because x_i and x_{i+1} are both perpendicular to the z_i axis, rotating x_i about z_i makes it parallel to x_{i+1}.
(2) Translate by the distance d_{i+1} along the z_i axis so that x_i and x_{i+1} are collinear. Since the previous step made x_i parallel to x_{i+1} and both are perpendicular to z_i, they are collinear after the translation.
(3) Translate by the distance a_{i+1} along the x_i axis so that the origins of x_i and x_{i+1} coincide.
(4) Rotate by the angle α_{i+1} about the x_i axis so that the z_i axis coincides with the z_{i+1} axis.

The transformation from frame i to frame i+1 is the product of these four motion transformation matrices, right-multiplied in the order of transformation:

$${}^{i}T_{i+1} = \mathrm{Rot}(z, \theta_{i+1}) \, \mathrm{Trans}(0, 0, d_{i+1}) \, \mathrm{Trans}(a_{i+1}, 0, 0) \, \mathrm{Rot}(x, \alpha_{i+1}) \qquad (2)$$
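The four-step transformation can be sketched numerically as a product of elementary homogeneous matrices, and chaining the per-joint matrices then gives the base-to-end-effector pose. This is a generic numpy illustration, not the paper’s code, and the two-link D-H parameters at the end are hypothetical:

```python
import numpy as np

def rot_z(theta):
    c, s = np.cos(theta), np.sin(theta)
    return np.array([[c, -s, 0, 0], [s, c, 0, 0], [0, 0, 1, 0], [0, 0, 0, 1]])

def rot_x(alpha):
    c, s = np.cos(alpha), np.sin(alpha)
    return np.array([[1, 0, 0, 0], [0, c, -s, 0], [0, s, c, 0], [0, 0, 0, 1]])

def trans(x, y, z):
    T = np.eye(4)
    T[:3, 3] = [x, y, z]
    return T

def dh_link(theta, d, a, alpha):
    # One joint: Rot(z, theta) Trans(0, 0, d) Trans(a, 0, 0) Rot(x, alpha)
    return rot_z(theta) @ trans(0.0, 0.0, d) @ trans(a, 0.0, 0.0) @ rot_x(alpha)

def forward_kinematics(dh_rows):
    # Chain the per-joint transforms from the base to the end effector
    T = np.eye(4)
    for row in dh_rows:
        T = T @ dh_link(*row)
    return T

# Hypothetical two-link planar arm: link lengths 1, joints at +90 and -90 deg
T = forward_kinematics([(np.pi / 2, 0.0, 1.0, 0.0),
                        (-np.pi / 2, 0.0, 1.0, 0.0)])
print(np.round(T[:3, 3], 6))  # end-effector position
```

For this arm the first link reaches (0, 1, 0) and the second extends one unit along the rotated x axis, so the end effector lands at (1, 1, 0).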
For a serial joint robot, the total transformation between the base and the end effector is:

$${}^{R}T_{H} = {}^{R}T_{1} \, {}^{1}T_{2} \, {}^{2}T_{3} \cdots {}^{n-1}T_{n} \qquad (3)$$
where R is the base coordinate system and H is the end effector coordinate system.

2.3 Transformation Mapping of Coordinate Systems and Transformation Operators of Vectors
For a 6-DOF industrial robot, the coordinate origins of the joints do not coincide, and each joint has its own posture [8]. Consider two coordinate systems {A} and {B}: a point p known in {B} is written {B}P, and we seek its position {A}P in {A}. Three cases are generally distinguished:
First, when {A} and {B} have the same orientation, the vector {A}P_BORG represents the position of the origin of {B} relative to {A}. The expression of P relative to {A} is obtained by vector addition:

$${}^{A}P = {}^{B}P + {}^{A}P_{BORG} \qquad (4)$$
Second, when the origin of {A} coincides with that of {B} but the attitudes differ, P can be expressed in {A} through the rotation matrix {A}{B}R:

$${}^{A}P = {}^{A}_{B}R \, {}^{B}P \qquad (5)$$
Third, when the coordinate origins of {A} and {B} do not coincide and their postures differ, cases 1 and 2 are combined. In general, a transformation operator represents the homogeneous transformation matrix between vectors: a new vector is obtained from the displacement or rotation of a point after translation or rotation:

$${}^{A}P_2 = T \, {}^{A}P_1 \qquad (6)$$
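The three cases collapse into one formula once a rotation and an origin offset are both allowed: {A}P = {A}{B}R · {B}P + {A}P_BORG, which is what the homogeneous operator T in Eq. (6) encodes. A minimal numpy sketch, with hypothetical frame values:

```python
import numpy as np

def map_point(R_AB, p_Borg_in_A, p_in_B):
    """General case: p_A = R_AB @ p_B + origin of {B} expressed in {A}.
    Eq. (4) is the special case R_AB = I; Eq. (5) the case p_Borg = 0."""
    return R_AB @ p_in_B + p_Borg_in_A

# {B} is rotated 90 degrees about z and offset by (1, 0, 0) relative to {A}
R_AB = np.array([[0.0, -1.0, 0.0],
                 [1.0,  0.0, 0.0],
                 [0.0,  0.0, 1.0]])
p_Borg = np.array([1.0, 0.0, 0.0])
p_B = np.array([1.0, 0.0, 0.0])          # point expressed in {B}
print(map_point(R_AB, p_Borg, p_B))      # [1. 1. 0.]
```

Setting R_AB to the identity recovers pure translation (case 1), and setting p_Borg to zero recovers pure rotation (case 2).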
3 Intelligent Control Experiment of Six Axis Robot

3.1 Experimental Parameter Design
This section uses the six axis robot model from before, with η = 0.1, c = 0.05, and the other parameters unchanged. We assume that the six axis joint robot model is the same as before and that its parameters have been obtained. The parameter values of the adaptive law are k = diag[15, 15, 15, 15, 15] and k = diag[20, 20, 20, 20, 20, 20], and Φ_w is a diagonal matrix with diagonal entries 15. We assume that the ideal motion trajectory of each joint is q_d = sin(t), and that the initial angular position of the six joints is q_0 (Table 1).

Table 1. Basic parameters of the six-axis robot

Joint  Length l_i (m)  Mass m_i (kg)  Centroid (x, y, z) (m)   Moment of inertia
1      0.25            32             (−0.03, −0.04, 0.4)      (0.6, 0.4, 0.3)
2      0.65            32             (−0.04, −0.03, 0.7)      (1.4, 1.4, 0.15)
3      0.18            17             (0.16, 0.18, 1.15)       (0.1, 0.09, 0.05)
4      0.65            18             (0.08, 0.09, 1.2)        (1.1, 1.1, 0.06)
5      0.2             3.7            (0.3, 0.3, 1.25)         (0.04, 0.03, 0.03)
6      0.05            1.3            (0.4, 0.4, 1.25)         (4, 3.1, 3.1) × 10⁻⁴
3.2 Experimental Design
The experiment concerns robot trajectory planning: given a starting point P1 and an expected end point P2, find an optimal or suboptimal effective path connecting P1 to P2 [9]. Finally, an optimal-time smooth trajectory planning method based on improved particle swarm optimization is proposed (Fig. 1).
Fig. 1. Specific flow chart of machine control (nodes include: start; set feature points; read the characteristic joint variables; solve the joint trajectory curve with the time-optimal programming method; check whether the constraints are satisfied; assign periods to the corresponding motor interface; joint analysis and motor parameters; update module; end)
4 Discussion

4.1 Analysis of the Optimal Trajectory Planning Method
To verify the feasibility of the time-optimal trajectory planning method, with velocities and accelerations calculated by the improved particle swarm optimization algorithm [10], Matlab was used to simulate the robot ER16. The experimental steps are as follows:

(1) The trajectories of the 6 joints are interpolated by quintic non-uniform B-spline curves, yielding the trajectory planning curve functions. The joint variables of the six joints are given directly in Table 2.
(2) The particle swarm optimization algorithm is used to solve for the optimal solution of the objective function under the constraints in Table 3.
(3) The values obtained in step 2 are substituted into the formula to calculate the result.
(4) The result is compared with quintic B-spline trajectory planning without PSO. The optimized time series obtained with the improved particle swarm optimization algorithm yields trajectory curves that are continuous and smooth, and that meet the kinematic constraints in Table 3.
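As a rough illustration of step (2), the sketch below runs a plain particle swarm (not the paper’s improved variant, and with piecewise-linear motion instead of quintic B-splines) to choose segment durations for one joint that minimize total travel time under a velocity limit. The node angles and limit echo Table 2 and Table 3 for joint 1; the penalty weight and PSO coefficients are assumptions.

```python
import random

random.seed(1)

angles = [-10.0, 60.0, 20.0, 55.0]  # joint 1 angles at the 4 nodes (Table 2)
v_max = 60.0                         # joint 1 velocity limit, deg/s (Table 3)
deltas = [abs(b - a) for a, b in zip(angles, angles[1:])]

def cost(times):
    # Total travel time plus a large penalty for violating the velocity limit
    penalty = sum(max(0.0, dq / t - v_max) for dq, t in zip(deltas, times))
    return sum(times) + 1e3 * penalty

# Plain PSO over the 3 segment durations
n, dim, w, c1, c2 = 30, 3, 0.7, 1.5, 1.5
pos = [[random.uniform(0.1, 5.0) for _ in range(dim)] for _ in range(n)]
vel = [[0.0] * dim for _ in range(n)]
pbest = [p[:] for p in pos]
gbest = min(pbest, key=cost)
for _ in range(200):
    for i in range(n):
        for d in range(dim):
            vel[i][d] = (w * vel[i][d]
                         + c1 * random.random() * (pbest[i][d] - pos[i][d])
                         + c2 * random.random() * (gbest[d] - pos[i][d]))
            pos[i][d] = max(0.01, pos[i][d] + vel[i][d])
        if cost(pos[i]) < cost(pbest[i]):
            pbest[i] = pos[i][:]
    gbest = min(pbest, key=cost)

# The analytic optimum is t_i = dq_i / v_max, a total of about 2.42 s
print([round(t, 3) for t in gbest], round(cost(gbest), 3))
```

The penalty term is what ties the time objective to the Table 3 constraints; the improved algorithm of the paper differs in how particles are updated, not in this overall structure.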
Table 2. Joint angles in joint space

Joint number  Joint angle at node (°)
              1      2      3      4
1             −10    60     20     55
2             20     50     120    35
3             15     100    −10    30
4             150    100    40     10
5             30     110    90     70
6             120    60     100    25

Table 3. Kinematic constraints

Joint number  Angular velocity (°/s)  Angular acceleration (°/s²)  Angular jerk (°/s³)
1             60                      40                           60
2             90                      50                           70
3             90                      50                           70
4             60                      40                           60
5             60                      40                           60
6             120                     60                           70

4.2 Robot Control Based on the Neural Network Sliding Mode Control Algorithm
Firstly, the angle trajectory curve, error curve and input torque curve of the two algorithms are compared with those of conventional sliding mode control.

Fig. 2. Trajectory tracking of the ideal curve by the new neural network sliding mode control algorithm (position of joint 2 in rad vs. time in s; curves: new neural network algorithm and ideal curve)
Fig. 3. Tracking error of the two algorithms (position error of joint 6 in rad vs. time in s; curves: new neural network control algorithm and traditional algorithm)
Figure 2 shows the ideal trajectory curve and the trajectory curve of the new neural network sliding mode control algorithm; the fit between the curves is very close and the curve is relatively smooth. Figure 3 shows the tracking error of the two algorithms. Compared with ordinary sliding mode control, the new neural network algorithm suppresses chattering, and the method achieves a closer fit for motor control.
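For reference, the conventional sliding mode controller that both figures compare against can be sketched for a single joint tracking q_d = sin(t). The inertia, gains, disturbance and boundary-layer width below are all hypothetical; a tanh switching term is used to soften chattering, which is the effect the neural-network variant targets.

```python
import math

# Single-joint plant J*q'' = u + d(t); SMC drives s = e_dot + lam*e to zero
J, lam, k_s, dt, steps = 1.0, 5.0, 8.0, 1e-3, 10000
q, dq = 0.0, 0.0          # initial joint angle and velocity
errs = []
for step in range(steps):
    t = step * dt
    qd, dqd, ddqd = math.sin(t), math.cos(t), -math.sin(t)
    e, de = qd - q, dqd - dq
    s = de + lam * e                       # sliding surface
    # Equivalent control plus a tanh switching term (softens chattering)
    u = J * (ddqd + lam * de) + k_s * math.tanh(s / 0.05)
    dist = 0.5 * math.sin(3.0 * t)         # bounded unknown disturbance
    ddq = (u + dist) / J                   # plant dynamics
    dq += ddq * dt                         # Euler integration
    q += dq * dt
    errs.append(abs(e))

# After the transient, the tracking error stays near zero
print(max(errs[-1000:]))
```

Because the switching gain k_s exceeds the disturbance bound, the state reaches a thin boundary layer around s = 0 and the error decays at the rate set by lam; replacing tanh with a hard sign() reproduces the chattering that the neural-network approach is designed to remove.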
5 Conclusion

The robot control platform developed in this paper lacks I/O and other external interface modules as well as extended functions. It is expected that further research on these aspects can be carried out in future study and work.

Acknowledgements. This work was supported by 2018GkQNCX116, the 2018 Guangdong Provincial Colleges and Universities Youth Innovative Talent Project.
References

1. Liu, H., Huang, Y.: Improved adaptive output feedback controller for flexible-joint robot manipulators. In: 2016 IEEE International Conference (ICIA). IEEE (2017)
2. Zhang, S., Wang, S.: Parameter estimation survey for multi-joint robot dynamic calibration case study. Sci. China Inf. Sci. 62(10), 1–5 (2019)
3. Ju, J., Zhao, Y., Zhang, C., et al.: Vibration suppression of a flexible-joint robot based on parameter identification and fuzzy PID control. Algorithms 11(11), 189 (2018)
4. Xuan, G., Shao, Y.: Reverse-driving trajectory planning and simulation of joint robot. IFAC-PapersOnLine 51(17), 384–388 (2018)
5. Schmidtler, J.: Human perception of inertial mass for joint human-robot object manipulation. ACM Trans. Appl. Percept. 15(3), 15:1–15:20 (2017)
6. Kryukov, A.V.S.K., Kargapol’cev, B.Y.N., et al.: Intelligent control of the regulators adjustment of the distributed generation installation. Far East J. Electron. Commun. 17(5), 1127–1140 (2017)
7. Yassine, R., Makrem, M., Farhat, F.: Intelligent control wheelchair using a new visual joystick. J. Healthc. Eng. 2018, 1–20 (2018)
8. Takase, J.: Care training robot joint load material of basic consideration technical introduction. Trans. Jpn. Soc. Med. Biol. Eng. 56 (2018)
9. Naziha, H., Mansour, S., Abdel, A., et al.: Intelligent control of grid connected AC-DC-AC converters for a WECS based on T-S fuzzy interconnected systems modeling. IET Power Electron. 11(9), 1507–1518 (2018)
10. Wei, H.X., Mao, Q., Guan, Y., et al.: A centroidal Voronoi tessellation based intelligent control algorithm for the self-assembly path planning of swarm robots. Expert Syst. Appl. 85, 261–269 (2017)
Predictive Modeling of Academic Performance of Online Learners Based on Data Mining
Zhi Cheng
Hainan Tropical Ocean University, Sanya 572022, China
[email protected]
Abstract. This paper presents a methodological study of the academic performance of online learners. It applies classification models commonly used in data mining, such as random forests based on decision trees, support vector machines, neural networks, and the K-nearest-neighbor algorithm, combined with the data mining tool S software and statistical analysis tools, to analyze the course scores of online learners in the class of 2019 at a university of finance and economics. It studies the important factors that affect the academic performance of college students who learn online, and uses these factors to predict students' academic performance. Based on distance measures from mathematics, the article separately studies the application of a Euclidean-distance correlation algorithm and a correlation-coefficient algorithm to course relevance, and compares several correlation algorithms. The experimental results show that, in the era of big data, learners accumulate a large amount of structured and unstructured data during online learning. We can explore the influencing factors of online learners' academic performance through data mining technology, and we can also use machine learning to learn an academic performance prediction model automatically from the data.
Keywords: Data mining · Predictive modeling · Online learning · Academic performance
1 Introduction
Educational data mining is the application of data mining technology in the field of education [1]. According to the website of the Data Mining Working Group, educational data mining refers to the use of continuously evolving methods and techniques in a specific educational environment to explore various types of data and to mine valuable information that helps teachers better understand students, improve their learning environment, and provide services for educators, learners, managers and other stakeholders [2]. It focuses on the establishment of models and the discovery of patterns; machine learning and data mining techniques are usually used to predict learners' academic performance, with an emphasis on predictive models. Predictive modeling refers to the establishment of a model based on existing data that can be used to predict future data [3, 4].
© The Editor(s) (if applicable) and The Author(s), under exclusive license to Springer Nature Singapore Pte Ltd. 2021 M. Atiquzzaman et al. (Eds.): BDCPS 2020, AISC 1303, pp. 187–194, 2021. https://doi.org/10.1007/978-981-33-4572-0_27
The main purpose of this research is to train classification functions or classification models (i.e., classifiers) using training data with known academic performance categories, and to evaluate the performance of the models [5]. The purpose of academic performance prediction is to feed the relevant data from the learning process into a predictive model to predict the learner's likely level at the end of learning, so as to provide a basis for academic early warning and the adjustment of teaching strategies. In recent years, the number of students has increased sharply [6]. Studying the trends and problems behind student data through data mining has therefore become particularly important; using the results to guide students can effectively improve the efficiency of school teaching management [7]. Data mining refers to the process of discovering and "mining" a large amount of data from various information databases and extracting the information hidden within it [8]. The data it processes are generally incomplete, vague, noisy, and random daily business data; processing the raw data through data mining yields valuable information and improves the utilization of information. We preprocess the data, integrate data from various sources, filter valid data, and transform the format and content of the required data; data mining, model evaluation and knowledge representation are all parts of the knowledge discovery process [9]. Data mining, one of the basic steps of knowledge discovery, actually obtains valuable knowledge from a mass of seemingly meaningless and unrelated data. This research uses data-driven modeling to mine, from the data, the factors that affect the learning performance of online learners, and uses machine learning to learn classification prediction models automatically from the data [10].
2 Algorithm Optimization
2.1 Euclidean Distance Analysis Algorithm
Euclidean distance is used in many algorithms as a measure of the distance between two variables. For course relevance, a student's scores in a course can be regarded as the coordinates of a point: the more relevant two courses are, the closer their points; conversely, the less relevant two courses are, the farther apart their points. Taking Euclidean distance as the basic algorithm, the correlation of two courses i and j is computed as follows:

\[
c(C_i, C_j) = \frac{1}{1 + d(C_i, C_j)} = \frac{1}{1 + \sqrt{(x_1^i - x_1^j)^2 + (x_2^i - x_2^j)^2 + \cdots + (x_k^i - x_k^j)^2}} \tag{1}
\]
Through the analysis of the related data, the correlation coefficient can be written as follows:

\[
\rho_{X,Y} = \frac{\operatorname{cov}(X, Y)}{\sigma_X \sigma_Y} = \frac{E[(X - \mu_X)(Y - \mu_Y)]}{\sigma_X \sigma_Y} \tag{2}
\]
The correlation coefficient is used to express the degree of relevance of the courses. The formula can be rewritten as follows:

\[
c(C_i, C_j) = \frac{\operatorname{cov}(C_i, C_j)}{\sigma_{C_i} \sigma_{C_j}} = \frac{E[(C_i - \mu_{C_i})(C_j - \mu_{C_j})]}{\sigma_{C_i} \sigma_{C_j}} \tag{3}
\]

2.2 Cosine Correlation Analysis Algorithm
Cosine similarity evaluates the similarity of two vectors by calculating the cosine of the angle between them. Assuming that a and b are two different vectors, the cosine similarity is:

\[
\cos\theta = \frac{a \cdot b}{|a||b|} \tag{4}
\]
If the coordinates of a and b are (x1, y1) and (x2, y2), the formula can be rewritten as:

\[
\cos\theta = \frac{x_1 x_2 + y_1 y_2}{\sqrt{x_1^2 + y_1^2}\,\sqrt{x_2^2 + y_2^2}} \tag{5}
\]
Using the cosine correlation algorithm to express course relevance, the formula becomes:

\[
c(C_i, C_j) = \frac{C_i \cdot C_j}{|C_i||C_j|} \tag{6}
\]
Assuming that the courses Ci and Cj are expressed as points in a coordinate system, it can be further derived that:

\[
c(C_i, C_j) = \frac{\sum_{k} c_i^k c_j^k}{\sqrt{\sum_{k} (c_i^k)^2}\,\sqrt{\sum_{k} (c_j^k)^2}} \tag{7}
\]
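The course-relevance measures of Eqs. (1)–(7) can be sketched in Python as follows; the two score vectors are invented for illustration and are not data from the study.

```python
import math

def euclidean_similarity(ci, cj):
    # Eq. (1): similarity = 1 / (1 + Euclidean distance).
    d = math.sqrt(sum((a - b) ** 2 for a, b in zip(ci, cj)))
    return 1.0 / (1.0 + d)

def pearson_correlation(ci, cj):
    # Eqs. (2)-(3): covariance over the product of standard deviations.
    n = len(ci)
    mi, mj = sum(ci) / n, sum(cj) / n
    cov = sum((a - mi) * (b - mj) for a, b in zip(ci, cj)) / n
    si = math.sqrt(sum((a - mi) ** 2 for a in ci) / n)
    sj = math.sqrt(sum((b - mj) ** 2 for b in cj) / n)
    return cov / (si * sj)

def cosine_similarity(ci, cj):
    # Eqs. (6)-(7): cosine of the angle between the two score vectors.
    dot = sum(a * b for a, b in zip(ci, cj))
    ni = math.sqrt(sum(a * a for a in ci))
    nj = math.sqrt(sum(b * b for b in cj))
    return dot / (ni * nj)

# Hypothetical scores of the same five students in two courses.
course_i = [85, 70, 92, 60, 78]
course_j = [80, 65, 95, 58, 75]
```

All three functions return values that grow with course relevance, so their rankings of course pairs can be compared directly, as the paper does.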
2.3 Decision Tree Algorithm
A decision tree is a common machine learning method: a tree-structured classification prediction model. Each internal node represents a test of an attribute, each branch represents a test outcome, and each leaf node (or end node) stores a class label. The top node of the tree is the root node. There are many algorithms for constructing decision trees, such as the ID3 algorithm. During construction, pruning is needed to detect and remove the noise and outliers in the training data, which improves the accuracy of classifying unknown data.
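The ID3 algorithm mentioned above splits each node on the attribute with the highest information gain. A minimal sketch of that criterion follows; the attendance/homework toy dataset is invented for illustration.

```python
import math
from collections import Counter

def entropy(labels):
    # Shannon entropy of a list of class labels.
    total = len(labels)
    return -sum((c / total) * math.log2(c / total)
                for c in Counter(labels).values())

def information_gain(rows, labels, attr_index):
    # ID3 criterion: entropy reduction from splitting on one attribute.
    base = entropy(labels)
    groups = {}
    for row, label in zip(rows, labels):
        groups.setdefault(row[attr_index], []).append(label)
    remainder = sum(len(g) / len(labels) * entropy(g) for g in groups.values())
    return base - remainder

# Toy data: (attendance, homework) -> pass/fail, purely illustrative.
rows = [("high", "done"), ("high", "missed"), ("low", "done"), ("low", "missed")]
labels = ["pass", "pass", "pass", "fail"]
```

ID3 evaluates this gain for every remaining attribute at each node and recurses on the best one; pruning then removes branches that fit noise.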
3 Modeling Method
3.1 Selection and Design of Online Learners' Academic Performance Prediction Model
The goal of this article is to predict the number of questions that a learner will answer correctly in the final exam of a professional basic course, where the teacher sets the content and number of exam questions. The input of the model is the learner's features; the output is the number of correctly submitted answers. Suppose the course has m learners, each learner has k features, and the number of examination questions is n; then the input can be defined as:

\[
X = \{x_1, \ldots, x_k\} \in \mathbb{R}^{m \times k} \tag{8}
\]

The prediction result of the model is:

\[
Y = f(X), \quad y \in \left\{0, \tfrac{1}{n}, \tfrac{2}{n}, \ldots, \tfrac{n-1}{n}, 1\right\} \tag{9}
\]
Here y = l/n means that, out of n test questions, the learner submitted l correct answers. The number of questions in the test data in this article is 16.
3.2 Evaluation Indicators of the Model
After the model is established, the next step is to evaluate it to determine whether it can be applied effectively. There are many evaluation methods, including classification evaluation, regression evaluation, cluster evaluation and cross-validation; this section mainly introduces evaluation methods for classification models. The recall rate, also known as the recall ratio, is the proportion of the relevant samples in the whole data set that the model retrieves correctly. The following indicators are defined: TP: True Positive, the number of positive samples predicted correctly. FP: False Positive, the number of negative samples incorrectly predicted as positive. TN: True Negative, the number of negative samples predicted correctly. FN: False Negative, the number of positive samples incorrectly predicted as negative. According to these four indicators, the formulas for precision and recall are as follows:
\[
\text{Precision} = \frac{TP}{TP + FP} \tag{10}
\]

\[
\text{Recall} = \frac{TP}{TP + FN} \tag{11}
\]
The values of both lie between 0 and 1; the closer a value is to 1, the higher the precision or recall. Precision and recall are sometimes in tension. To consider both together, a very common measure is the F-Measure, also known as the F-Score, a weighted harmonic mean of precision and recall:

\[
F = \frac{(\alpha^2 + 1) \cdot \text{Precision} \cdot \text{Recall}}{\alpha^2 \cdot \text{Precision} + \text{Recall}}
\]

Here α is a weighting parameter. When α = 1, this is the common F1 value:

\[
F_1 = \frac{2 \cdot \text{Precision} \cdot \text{Recall}}{\text{Precision} + \text{Recall}}
\]
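Precision, recall and the weighted F-measure of the formulas above can be computed directly from the confusion counts; the counts below are hypothetical, for illustration only.

```python
def precision_recall_f(tp, fp, fn, alpha=1.0):
    # Precision (Eq. 10), recall (Eq. 11) and the weighted F-measure.
    precision = tp / (tp + fp)
    recall = tp / (tp + fn)
    f = ((alpha ** 2 + 1) * precision * recall) / (alpha ** 2 * precision + recall)
    return precision, recall, f

# Hypothetical confusion counts for one class of learners.
p, r, f1 = precision_recall_f(tp=40, fp=10, fn=10)
```

With alpha = 1 the function returns the ordinary F1 value; larger alpha weights precision more heavily.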
3.3 Selection of Research Objects
The data source used in the experiments of this paper is the final scores of the main subjects over eight semesters at a university of finance and economics, covering a total of 150 students of the 2018 and 2020 cohorts in three majors: Computer Science and Technology, Electronic Information Engineering, and Communication Engineering. Before the data mining experiment, choosing suitable data sets for testing the algorithms is very important; different algorithms need different data sets to support them so that better classification accuracy can be obtained. To obtain more effective data, the downloaded data must first be preprocessed; appropriate preprocessing can greatly improve the accuracy of data mining. The data studied in this experiment are the course scores, during their time at school, of students of the various computer science majors at a university in Anhui, stored in the form of Excel tables.
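As one example of such preprocessing, raw course scores can be rescaled to a common range before mining. This min-max sketch uses invented scores, not the study's data.

```python
def min_max_normalize(scores):
    # Scale raw course scores to [0, 1], a common preprocessing step
    # before distance-based mining algorithms.
    lo, hi = min(scores), max(scores)
    if hi == lo:
        return [0.0 for _ in scores]
    return [(s - lo) / (hi - lo) for s in scores]

normalized = min_max_normalize([55, 70, 85, 100])
```

Normalization matters here because the Euclidean-distance measure of Eq. (1) is sensitive to the scale of each course's scores.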
4 Data Algorithm Evaluation Results and Research Results
4.1 Application of Data Mining in Predicting Academic Performance
According to the data described in Table 1 and Table 2, the results of the predictive analysis of the final academic performance of the two classes of 2018 are as follows: across all of the above students, and after different data preprocessing
Table 1. Classification results of students' academic performance prediction in Class 1 of Computer Science and Technology

18 Computer Class 1               | Euclid | Cosine classification | Decision tree combination
Accuracy on the training data set | 68.17% | 83.39%                | 96.17%
Calibration accuracy              | 83%    | 99.87%                | 100%
Standard error                    | 0.1511 | 0.0012                | 0.0012
Table 2. Classification results of academic performance prediction for learners in Class 2 of Computer Science and Technology

18 Computer Class 2               | Euclid | Cosine classification | Decision tree combination
Accuracy on the training data set | 73.17% | 86.39%                | 92.17%
Calibration accuracy              | 82.77% | 99.21%                | 96.35%
Standard error                    | 0.1671 | 0.1025                | 0.3561
operations, the Euclid algorithm has the lowest accuracy. When LIBSVM is applied, with the data processed by SMOTE and the optimal parameters computed at the same time, the results are more satisfactory but not stable, while linear regression combined with the prediction of online learners' final academic performance achieves better accuracy and is more stable. Classification, currently a relatively novel family of algorithms, is the most commonly used method in data mining, and is similar to the discriminant analysis of classical multivariate statistics. In classification problems, the dependent variable is generally a categorical variable; if the problem to be solved does not meet this condition, the continuous variable needs to be discretized into a categorical one. Under normal circumstances, discriminant analysis can directly solve common classification problems, but if the independent variables contain many categorical variables, discriminant analysis is no longer applicable, and we can try data mining methods to solve the classification problem. According to the inflection-point graph of the prediction accuracy of the Euclid algorithm in Fig. 1, in the early stage of building the current algorithm, its prediction of online learners' academic performance is not stable. In the seven experiments above, we compared the prediction results of each experiment with the actual results, in order to improve the efficiency with which the model predicts online learners' academic performance and the accuracy of the predicted data.
(Chart comparing the experimental inflection point value with the contrast inflection point value across seven experiments; vertical axis 0–8.)
Fig. 1. The inflection point diagram of the prediction accuracy of the Euclidean algorithm
5 Results
The original data are divided into a training set and a test set, and finally the prediction result is obtained. Through analysis of the random forest algorithm, this experiment identified some of its characteristics: data-driven predictive modeling first needs to filter, from the attribute set of the original data, the main attributes that may affect academic performance, then select those attributes as independent variables and use academic performance as the dependent variable to establish a mathematical model. This research further uses a nested ensemble learning method to learn classification prediction models automatically from the data: the random forest algorithm trains the base classifiers, the bagging algorithm votes on the predictions of the base classifiers, and the performance of the model is then analyzed.
Acknowledgements. [Foundation] Educational Informatization Promote the Research of Educational Precise Poverty Alleviation in Hainan Minority Region (Serial number: RHDXB201703).
References 1. Helma, C., Cramer, T., Kramer, S., et al.: Data mining and machine learning techniques for the identification of mutagenicity inducing substructures and structure activity relationships of noncongeneric compounds. J. Chem. Inf. Comput. 35(4), 1402–1411 (2018) 2. Hong, H., Tsangaratos, P., Ilia, I., et al.: Application of fuzzy weight of evidence and data mining techniques in construction of flood susceptibility map of Poyang County, China. Sci. Total Environ. 625(JUN.1), 575–588 (2018) 3. Yu, C., Li, Y., Xiang, H., et al.: Data mining-assisted short-term wind speed forecasting by wavelet packet decomposition and Elman neural network. J. Wind Eng. Ind. Aerodyn. 175, 136–143 (2018)
4. Jia, Z., Li, C., Fang, T., et al.: Predictive modeling of the effect of e-polylysine hydrochloride on growth and thermal inactivation of Listeria monocytogenes in fish balls. J. Food Sci. 84(1–3), 127–132 (2019) 5. Kim, B.J., Hong, S.C., Egger, D., et al.: Predictive modeling and categorizing likelihoods of quarantine pest introduction of imported propagative commodities from different countries. Risk Anal. 39(6), 1382–1396 (2019) 6. Hunt, N., Carroll, A., Wilson, T.P.: Spatiotemporal analysis and predictive modeling of rabies in Tennessee. J. Geogr. Inf. Syst. 10(1), 89–110 (2018) 7. Ma, S., Steger, D.G., Doolittle, P.E., et al.: Improved academic performance and student perceptions of learning through use of a cell phone-based personal response system. J. Food Sci. Educ. 17(1), 27–32 (2018) 8. Vieira, C., Vieira, I., Raposo, L.: Distance and academic performance in higher education. Spat. Econ. Anal. 13(1), 1–20 (2018) 9. Twilhaar, E.S., de Kieviet, J.F., Aarnoudse-Moens, C.S., et al.: Academic performance of children born preterm: a meta-analysis and meta-regression. Arch. Dis. Child. Fetal Neonatal Ed. 103(4), F322–F330 (2018) 10. Booth, D.E., Ozgur, C.: The use of predictive modeling in the evaluation of technical acquisition performance using survival analysis. J. Data Sci. 17(3), 504–512 (2019)
"IoT Plus" and Intelligent Sports System Under the Background of Artificial Intelligence – Take Swimming as an Example
Shuai Liu
Physical Culture Institute, Hunan University of Humanities, Science and Technology, Loudi 417000, Hunan, China
[email protected]
Abstract. With the continuous development of data technology and the deepening of the "IoT plus" action plan, the Internet of Things, artificial intelligence and other new technologies are being applied in the field of sports. Sports data are collected through intelligent equipment and mobile terminals and grow geometrically in volume, yet connectivity cannot be realized, so a large amount of sports data sits in a rapidly expanding data-island state. In order to better study sports intelligence systems under the background of the Internet of Things and artificial intelligence, this paper takes swimming as its research entry point. First, a swimming posture measurement system combining Internet of Things and artificial intelligence technology is proposed for the daily training of swimmers. Then, a professional coach used the system with the four common strokes (butterfly, backstroke, breaststroke and freestyle) for experimental verification, and the actually measured movement information was processed and analyzed. The experimental results show that the system can be used to monitor the swimming process stably, with a monitoring accuracy as high as 92.12%.
Keywords: Internet of Things · Artificial intelligence · Sports intelligent system · Swimming posture monitoring
1 Introduction
With the rapid development of sensor technology and information and communication technology, especially Internet of Things and artificial intelligence technology, many research units and enterprises at home and abroad have carried out research on sports wearables and obtained rich results and experience [1, 2]. Such devices mainly monitor the movement information of the human body (acceleration, velocity, position, etc.), analyze the data based on relevant theories, and give specific feedback suggestions [3, 4]. Istvan Soos explored the relationship between self-reported measures of emotional intelligence and pre-competitive emotional memory among athletes. In that study, a total of 284 participants completed an emotional-intelligence self-report test and two pre-competition emotional tests, in order to observe the emotions these athletes experienced before their best performances [5]. An analysis of effort metabolism for
© The Editor(s) (if applicable) and The Author(s), under exclusive license to Springer Nature Singapore Pte Ltd. 2021 M. Atiquzzaman et al. (Eds.): BDCPS 2020, AISC 1303, pp. 195–201, 2021. https://doi.org/10.1007/978-981-33-4572-0_28
football games shows that about 80% of the effort in a game is aerobic, which is why the assessment, training and development of aerobic metabolism are so important. Venera-Mihaela Cojocariu compared two methods of developing aerobic metabolism; the results show that developing flight capability and power using training intensities equal to or higher than vVO2max is more effective than training at intensities between 75–85% of vVO2max [6]. Dhar Vasant envisioned a form of training in which players wearing sensors not only readjust their actions, as they do now, but also use their field of vision and communication to explore options such as passing; just as athletes in training often use machines to enhance the body, athletes would find it difficult to refuse a personal cognitive-skill-enhancement assistant [7]. For real-time monitoring of swimming exercise, this study proposes a design scheme for a wearable swimming posture measurement system based on the Internet of Things, which can acquire swimming movement information (including acceleration and body rotation angle) in real time without affecting the swimmer [8, 9], and upload it to an upper-machine monitoring platform, where the coach can view the real-time monitoring data on a PC, analyze the data, and give feedback on the actual situation [10].
2 IoT Plus and Sports Intelligence System
2.1 IoT Plus
The Internet of Things is a network based on the Internet, traditional telecommunications networks and other information carriers, in which all ordinary physical objects can be independently addressed and interconnected. It has three important characteristics: common object devices, autonomous terminal interconnection and universal service intellectualization. Its definition is: through radio frequency identification (RFID), infrared sensors, global positioning systems, laser scanners and other information-sensing equipment, and according to agreed protocols, any article is connected to the Internet for information exchange and communication, so as to achieve intelligent identification, location, tracking, monitoring and management. The concept of the "Internet of Things" extends the clients of the "Internet" to any goods, for information exchange and communication between things.
2.2 Sports Intelligent System
A swimming posture monitoring system is designed and developed in this paper. It is impossible to recognize all movements of the human body, because their variety is infinite; in practice, action recognition systems are application-oriented. The system specifies several actions according to the requirements of the environment, assigns meanings to the actions and then identifies them. In this paper, we first verify the algorithm on a self-made human motion database, which contains the specified actions collected indoors together with their meanings. The smoothed image g(x, y) obtained by the system then undergoes Laplacian edge detection, which reduces the interference of part of the noise. Convolving the original image with the Gaussian function and then computing the Laplacian of the convolution is equivalent to first computing the Laplacian of the Gaussian function and then convolving it with the original image, namely:

\[
\nabla^2 [f(x, y) * G(x, y)] = f(x, y) * \nabla^2 G(x, y) \tag{1}
\]

\[
\nabla^2 G(x, y) = \frac{x^2 + y^2 - 2\sigma^2}{\sigma^4} \, e^{-(x^2 + y^2)/2\sigma^2} \tag{2}
\]
3 Design of the Experiment
3.1 Experimental Background
In the injury prevention and movement evaluation of professional athletes in daily training, real-time monitoring of sports has become a research hotspot at home and abroad. Because motion asymmetry in swimming can reflect many spinal diseases and sports injuries, swimming has been considered a very important sport for rehabilitation, and water therapy has been considered a main treatment in physical therapy. In water-exercise treatment and swimming, the buoyancy of water reduces the influence of gravity on the human body, which protects the spine, knees, ankles and so on. Through the swimming posture monitoring system, swimming posture can be corrected and evaluated.
3.2 Experimental Design
The sensor selected for the system is the MPU-6050 module, which integrates a 16-bit three-axis MEMS accelerometer and a three-axis MEMS gyroscope. Compared with a multi-component scheme, the inter-axis error of combining separate gyroscope and acceleration sensors is eliminated, and the data can be transmitted as a single data stream through the IIC interface. The MPU-6050 chip also has a built-in data processing sub-module with a data filtering algorithm, which makes the measurement data output by the sensor highly accurate. The measuring range of the triaxial accelerometer can be configured as ±2/4/8/16 g, and that of the triaxial gyroscope as ±250/500/1000/2000 dps (degrees/second). Because the signal bandwidth of the swimming motion signal is less than 50 Hz, this design uses a sampling frequency of 100 Hz to measure the motion information. Some experimental results are shown in Table 1.
Table 1. Experimental results

Measuring range (dps) | 250   | 500   | 1000  | 2000
Group1                | 0.221 | 0.214 | 0.435 | 0.325
Group2                | 0.125 | 0.243 | 0.156 | 0.552
Group3                | 0.324 | 0.531 | 0.143 | 0.314
Group4                | 0.187 | 0.218 | 0.322 | 0.121
4 Take Swimming as an Example of the Sports Intelligence System
4.1 Analysis of the Sports Intelligence System in the Context of "IoT Plus" and Artificial Intelligence
As shown in Fig. 1, comparison of the observed data in the simulated intelligent system shows that a swimming facility retrofitted with Internet of Things technology can realize rich automatic, intelligent management functions: the upper computer issues control commands according to the program design to realize remote control. The pH value and ORP value of the swimming pool can be acquired from the data uploaded by the sensors, and automatic remote-control equipment can be configured according to the results of posture monitoring. Under wireless network coverage, the swimming posture measurement device collects the swimmer's movement information in real time; the embedded Wi-Fi module sends the data to the upper-computer monitoring platform through the IEEE 802.11 standard protocol, and the monitoring software displays the received movement information in real time for the instructor to view. In the motion information for the four strokes, only the z-axis acceleration data corresponding to backstroke are negative, which is exactly consistent with the face-up posture of backstroke. Only the Y-axis gyro data corresponding to backstroke and freestyle have large amplitudes, which is consistent with the left-right body rotation that backstroke and freestyle require. As shown in Fig. 2, the X-axis acceleration amplitude and angular velocity frequency of both strokes increase with swimming intensity. In backstroke and freestyle, the movements of the left and right sides of the body are similar and alternate. At the same time, the X-axis of the measuring device is aligned with the central axis of the human body, and the Y-axis motion information (especially the Y-axis angular velocity) is a simple periodic signal with strong regular characteristics, representing the left-right angular velocity of the body during swimming. By integrating it, the rotation angle of the body during swimming can be obtained; in this paper, the rotation angle data for different intensities and strokes were extracted and analyzed. Only the X-axis gyroscope data and z-axis acceleration data corresponding to butterfly stroke and breaststroke have large amplitudes, consistent with the fact that these two strokes mainly rely on motion in the XZ plane. However, the waveform shapes of the X-axis gyro
Fig. 1. Movement recognition rates of different swimming poses under the swimming posture monitoring system
data corresponding to butterfly stroke and breaststroke are obviously different, which is due to the different body movements of these two swimming styles. Therefore, human swimming-style identification can be carried out directly according to the above motion-information characteristics.
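The rotation-angle computation described above (integrating the Y-axis angular velocity at the 100 Hz sampling rate) can be sketched as follows; the constant-rate trace is illustrative, not measured data.

```python
def integrate_rotation(angular_velocity_dps, fs=100.0):
    # Cumulative rotation angle (degrees) from Y-axis gyro samples,
    # using simple rectangular integration at sampling rate fs.
    dt = 1.0 / fs
    angle = 0.0
    angles = []
    for w in angular_velocity_dps:
        angle += w * dt
        angles.append(angle)
    return angles

# 1 s of a constant 90 dps body roll: the swimmer turns through 90 degrees.
trace = integrate_rotation([90.0] * 100)
```

In practice, gyro bias makes such an integral drift over time, which is one reason the MPU-6050's built-in filtering matters for longer recordings.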
Fig. 2. Change in rotation angle in the medium-intensity backstroke and freestyle stroke
In addition, given the inherent danger of swimming, this paper also puts forward a scheme to ensure swimmers' safety using an input-type liquid level sensor. The sensor can measure water depth and is small, accurate and easy to install. On the one hand, the water depth can be controlled automatically by combining the level sensor with the pumping and discharging system. On the other hand, an input-type liquid level sensor can be installed in equipment the swimmer wears, such as a cap or trunks; the sensor sends a depth signal periodically, and when the swimmer dives below a certain depth for a certain time, an alarm signal is transmitted. For example, with the depth threshold set to 2 m and the time to 1 min, the PC sends an alarm signal and alerts the pool safety personnel to carry out a rescue in time; even with a safety officer on site, this provides extra insurance. The disadvantage of this scheme is that it cannot locate the swimmer in real time. A drowning monitoring system based on the Internet of Things can be further built into the architectural design of the swimming pool: RFID tags and surveillance camera images can be combined to judge drowning incidents effectively. The RFID tag can be made into a bracelet worn on the swimmer's wrist, containing the RFID tag, a liquid level sensor and a pulse monitor. The pulse monitor collects physiological information such as the pulse rate, and the liquid level sensor measures the water depth. The RFID tag has a unique identifier that uniquely identifies a user; it can store, read and forward the pulse data and water depth data. More importantly, it is a member of the position detection network, which consists of multiple sensor nodes and multiple RFID readers deployed around the swimming pool; for example, a sensor node and its three nearest RFID readers can form a network.
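The alarm rule described above (deeper than the threshold for longer than the allowed time) can be sketched as follows; the sampling rate and depth samples are illustrative assumptions, not part of the paper's implementation.

```python
def drowning_alarm(depth_samples_m, fs=1.0, depth_threshold=2.0, max_seconds=60):
    # Raise an alarm when the swimmer stays deeper than depth_threshold
    # for at least max_seconds, given depth samples at rate fs (Hz).
    below = 0
    for depth in depth_samples_m:
        below = below + 1 if depth > depth_threshold else 0
        if below / fs >= max_seconds:
            return True
    return False
```

Because the counter resets whenever the swimmer surfaces above the threshold, brief deep dives do not trigger the alarm, matching the "2 m for 1 min" rule in the text.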
4.2 Suggestions on "IoT Plus" and Sports Intelligence System Under the Background of Artificial Intelligence
The system can also describe the characteristics of a scheme by tagging. Labels can be assigned manually, or extracted from the description of the scheme or from user feedback. User preference information is obtained from the user's historical behaviors, such as completion degree, thumbs-up, rating, evaluation and collection; these behaviors represent user preferences. From these preferences a user preference model can be established, and based on this model it can be inferred whether a user will like a certain scheme. The features of the user's preferences are compared with the features of each scheme, and the schemes with a high matching degree are recommended to the user. Content-based recommendation is relatively simple and accurate: schemes newly added to the system database, even without user ratings, can be recommended to users who will like them by using their labels as features, and even unpopular schemes can be recommended. However, content-based recommendation has a cold-start problem for new users, and its novelty is poor. For new users, general recommendations can be used to compensate; for novelty, it can be supplemented by recommendations based on collaborative filtering.
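The tag-matching step of content-based recommendation described here can be sketched with a simple Jaccard overlap between tag sets; the scheme names and tags are invented for illustration.

```python
def jaccard(tags_a, tags_b):
    # Tag-overlap similarity between the user's preferred tags and a scheme's tags.
    a, b = set(tags_a), set(tags_b)
    return len(a & b) / len(a | b)

def recommend(user_tags, schemes, top_n=2):
    # Rank schemes by tag match with the user preference model.
    ranked = sorted(schemes.items(),
                    key=lambda item: jaccard(user_tags, item[1]),
                    reverse=True)
    return [name for name, _ in ranked[:top_n]]

# Hypothetical training schemes and their tags.
schemes = {
    "beginner breaststroke": ["breaststroke", "beginner", "technique"],
    "freestyle endurance": ["freestyle", "endurance"],
    "backstroke drills": ["backstroke", "technique"],
}
picks = recommend(["breaststroke", "technique"], schemes)
```

As the text notes, this works even for schemes that have no ratings yet, since only the tags are needed; cold-start users would instead receive general recommendations.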
‘‘IoT Plus’’ and Intelligent Sports System
5 Conclusions Based on ‘‘IoT plus’’ and artificial intelligence technology, this paper designs an IoT-based swimming attitude measurement system. Without affecting the swimming movement, it acquires in real time the swimmer's acceleration and body-rotation angular velocity information and uploads them to the host-computer monitoring software platform for data analysis. The data analysis method proposed in this paper has reference value for evaluating the swimmer's physical condition and swimming actions. The system can be applied to swimming rehabilitation treatment, to the evaluation of movement symmetry, fatigue degree and movement amplitude in swimming training, and to checking movement standardization in professional swimming teaching and training.
References
1. Ebadi, M., Tabe, H.: The study of relationship between bodily-kinesthetic intelligence and entrepreneurship of sports managers in Azarbayejan. Brain Cogn. 75(3), 211–216 (2015)
2. Cem, S.A.: Comparison of the physical education and sports school students' multiple intelligence areas according to demographic features. Educ. Res. Rev. 11(19), 1823–1830 (2016)
3. Fister, I., Suganthan, P.N., Fister, I.: Computational intelligence in sports. Appl. Math. Comput. 262(C), 178–186 (2015)
4. Unaldi, G., Koc, M.C.: Analysis of the multiple intelligence fields of the school of physical education and sports students (sample of Çukurova University School of Physical Education and Sports). Int. J. Adv. Res. 4(10), 89–97 (2016)
5. Soós, I., Lane, A.M., Hamar, P.: What is the benefit of measuring emotional intelligence and mood states in sports and academic settings? Appl. Psychol. 14(3), 7–31 (2015)
6. Venera-Mihaela, C., Iulia, D.: Some aspects of the relationship between emotional intelligence and optimal sports performance in women's volleyball. Gymnasium 13(1), 210–218 (2017)
7. Dhar, V.: What is the role of artificial intelligence in sports? Big Data 5(3), 173–174 (2017)
8. Daliang, Z.: Sports competitive intelligence and its influence on China competitive sports. Open Cybern. Syst. J. 9(1), 2272–2278 (2015)
9. Campo, M., Laborde, S., Mosley, E.: Emotional intelligence training in team sports: the influence of a season long intervention program on trait emotional intelligence. J. Individ. Differ. 37(3), 152–158 (2016)
10. Galily, Y.: Artificial intelligence and sports journalism: is it a sweeping change? Technol. Soc. 54, 47–51 (2018)
Application Research of Artificial Intelligence in Swimming
Shuai Liu
Physical Culture Institute, Hunan University of Humanities, Science and Technology, Loudi 417000, Hunan, China
[email protected]
Abstract. Artificial intelligence (AI) technology is developing rapidly. At present, the application of intelligent technology in sports, especially swimming, is still being explored, and problems remain such as immature system technology and unclear application scenarios. It is therefore extremely important to find points of combination between AI and swimming and to optimize the relevant intelligent systems. Using literature study, video analysis, comparative research, mathematical statistics and other research methods, this article examines the application of AI in swimming and the ways it can be implemented in swimming training, in order to provide a theoretical basis for the application of AI in swimming and for its wider application in sports training. At the same time, considering that the development of AI in China is still at the stage of perceptual intelligence and machines cannot yet think and act independently, this research holds that the algorithms applied to the data collected by AI need to be further developed and improved.
Keywords: Artificial intelligence · Applied research · Sport of swimming · Assessment and evaluation
1 Introduction In the age of informatization and the drive to build a sports power, AI is playing an increasingly important role in sports [1]. Accelerating the construction of a sports power is an important measure to improve national fitness. As a disruptive and innovative technology of the Internet era, AI has become an indispensable force in promoting the vigorous development of sports undertakings in various countries [2, 3]. Increasing attention has been paid to scientific and technological research results with AI at their core in sports, and the improvement of sports depends to a large extent on the depth and breadth of the combination of sports and science [4]. Among existing studies, much research concerns athlete training guidance systems [5]. Such a system uses AI to transform the knowledge of sports experts into a knowledge base that a computer can recognize; through appropriate programming, the computer can then simulate human thinking and complete intelligent tasks that previously only humans could solve [6]. It comprehensively evaluates athletes' movement information and performs sport-specific analysis and evaluation, helping athletes and coaches in the © The Editor(s) (if applicable) and The Author(s), under exclusive license to Springer Nature Singapore Pte Ltd. 2021 M. Atiquzzaman et al. (Eds.): BDCPS 2020, AISC 1303, pp. 202–207, 2021. https://doi.org/10.1007/978-981-33-4572-0_29
process of training and competition to evaluate the correctness of motor movements reasonably and scientifically, point out deficiencies in real time, and improve the accuracy, scientific level and overall quality of athletes' training and competition [7]. Existing work includes training guidance systems for swimmers and research on intelligent swimming equipment, mainly based on swimming pools and life buoys [8]. These studies have provided a theoretical basis for this study, but the research field also has some limitations, and relevant literature is scarce [9, 10].
2 Conceptual Interpretation and Theoretical Analysis

2.1 AI and Related Technologies
As a mathematical tool, rough set theory deals with inconsistent and uncertain knowledge. It has been widely used in process control, decision support and knowledge discovery. If the attribute set $A$ is composed of a conditional attribute set $C$ and a decision attribute set $D$ such that

$$C \cup D = A, \quad C \cap D = \varnothing$$

then $S$ is called a decision table and denoted

$$S = (U, C \cup D)$$

where $U$ is the universe of objects.
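As a concrete illustration of a decision table $S=(U, C \cup D)$ — the attribute names and values below are invented for illustration, not taken from the paper — the indiscernibility classes induced by the conditional attributes, the basic building blocks of rough-set approximations, can be computed directly:

```python
# Toy decision table: conditional attributes C describe a swimmer's
# test results; decision attribute D is the assigned level.
from collections import defaultdict

U = [
    {"sprint": "fast", "endurance": "high", "level": "A"},
    {"sprint": "fast", "endurance": "high", "level": "A"},
    {"sprint": "slow", "endurance": "high", "level": "B"},
    {"sprint": "slow", "endurance": "low",  "level": "C"},
]
C = ("sprint", "endurance")  # conditional attribute set
D = ("level",)               # decision attribute set

def indiscernibility_classes(objects, attrs):
    """Group objects that are indistinguishable on the given attributes."""
    classes = defaultdict(list)
    for i, obj in enumerate(objects):
        classes[tuple(obj[a] for a in attrs)].append(i)
    return list(classes.values())

print(indiscernibility_classes(U, C))  # [[0, 1], [2], [3]]
```

Objects 0 and 1 fall into one block because they agree on every conditional attribute; the lower and upper approximations of a decision class are then unions of such blocks.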
2.2 Intelligent Motion Information Processing System
Sports biomechanics can help us obtain various parameters of human movement, but making full use of the obtained data for analysis is inseparable from information science. Therefore, it is necessary to combine biomechanics and information science: to interpret the biomechanical significance of the results and to process the data with information processing technology. How to make full use of a comprehensive AI test platform to observe athletes' training is one of the issues of greatest concern. In swimming, the monitoring information can reproduce the whole process of an athlete completing a technical movement; how to establish a training guidance system for athletes based on big data is therefore an important subject. The motion information intelligent processing system needs technical support such as data preprocessing, feature extraction, classification decision and information fusion.
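The stages named above — preprocessing, feature extraction and classification decision — can be sketched as a minimal pipeline. The signal, the feature choices and the class centroids below are illustrative assumptions, not the paper's implementation:

```python
import math

def preprocess(signal):
    """Data preprocessing: zero-mean, unit-variance normalization."""
    mean = sum(signal) / len(signal)
    var = sum((x - mean) ** 2 for x in signal) / len(signal)
    std = math.sqrt(var) or 1.0
    return [(x - mean) / std for x in signal]

def extract_features(signal):
    """Feature extraction: simple time-domain statistics."""
    energy = sum(x * x for x in signal) / len(signal)
    zero_crossings = sum(1 for a, b in zip(signal, signal[1:]) if a * b < 0)
    return (energy, zero_crossings)

def classify(features, centroids):
    """Classification decision: nearest centroid in feature space."""
    def dist(label):
        return sum((x - y) ** 2 for x, y in zip(features, centroids[label]))
    return min(centroids, key=dist)

# Made-up class centroids in (energy, zero-crossings) space.
centroids = {"smooth stroke": (1.0, 2.0), "choppy stroke": (1.0, 12.0)}
features = extract_features(preprocess([3.0, 3.2, 2.9, 3.1, 2.8, 3.0]))
print(classify(features, centroids))  # smooth stroke
```

In a real system, an information-fusion stage would combine such decisions from multiple sensors before reporting a result.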
3 Research Objects and Methods Specifically, this study analyzes the influence of big data, swimmer training guidance systems and intelligent swimming equipment on swimming and on athletes' performance, as well as the feasibility of their promotion and implementation in practice, and finally draws conclusions and gives suggestions. The research methods mainly include literature review, video observation, comparative analysis, data statistics and expert interviews. In practice, relevant literature was consulted through retrieval tools such as the CNKI, VIP and Wanfang databases to obtain a theoretical basis; videos related to ‘‘intelligent swimming’’ were searched on network platforms for observation, and the data were analyzed dialectically and uniformly to identify the advantages and disadvantages of applying AI. In addition, data on specific AI applications in swimming were collected through experiments, collated, analyzed and compared, and relevant AI and swimming training experts were interviewed to discuss the problems, opinions and methods in this direction and to listen to their suggestions.
4 Application Analysis of AI in Swimming

4.1 Application of AI in Swimming
The data analysis process of swimming based on AI is complex, but its application prospects are broad. It mainly relies on analyzing athletes' movement habits to analyze and predict their ability, so as to make corresponding training plans and strategies. In the actual selection of swimmers, big-data talent selection monitored by AI is very important to the team. Before athletes participate in selection, they undergo a series of physical tests covering jumping, sprint bursts, height and other comprehensive data. Combined with data on the athletes' performance in competition and training, the results predict a swimmer's ranking and future upper limit. However, the current forecasts are not entirely accurate.

4.2 Intelligent Outdoor Swimming Ring for Children
The application of AI in children's swimming is represented by an intelligent outdoor children's swimming ring with an alarm function. The swimming ring has several features to ensure children's safety: a multi-balloon structure; a wireless connection to a mobile phone, so that parents can observe the location, pressure and other information of the swimming ring on the phone in real time; and an automatic alarm with a flashing light that simultaneously dials the parent's mobile phone number so that the phone vibrates. The functional swimming ring system comprises the ring body, a sensing module, a positioning device, a power module, a light alarm module, a control module, a wireless transmission module, a display
adjustment module, a waterproof switch and a waterproof USB interface, etc. Taking the filtering algorithm of a single sensor as an example, the basic algorithm is as follows:

$$\bar{F}_n(t_n) = x(t_{n-k})F_n(t_{n-k}) + \cdots + x(t_n)F_n(t_n)$$

$$x(t_{n-k}) = \frac{1}{(k+1)\,(\ln(k)+C)}$$

$$\ln(k) + C \approx 1 + \frac{1}{2} + \cdots + \frac{1}{k}$$
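In code, this weighted-average filter can be sketched as follows. This is a minimal sketch: the generalization of the weight to intermediate lags (weight $1/(j+1)$ for the sample $j$ steps back, normalized by the harmonic sum) is an assumption made for illustration, and the pressure readings are invented:

```python
def weights(k):
    """Weights for the current sample and the k samples before it:
    x for lag j is 1 / ((j+1) * H_k), where H_k = 1 + 1/2 + ... + 1/k
    is the harmonic partial sum approximated by ln(k) + C in the text."""
    harmonic = sum(1.0 / i for i in range(1, k + 1))
    return [1.0 / ((j + 1) * harmonic) for j in range(k + 1)]

def filtered_value(samples, k):
    """Weighted average over a window of k+1 samples; samples[-1]
    is the newest pressure reading F_n(t_n)."""
    w = weights(k)
    window = samples[-(k + 1):]  # oldest ... newest
    # w[j] weights the sample j steps before the newest one
    return sum(w[j] * window[-(j + 1)] for j in range(k + 1))

readings = [101.0, 103.0, 98.0, 100.0, 140.0]  # last value is a noise spike
print(round(filtered_value(readings, k=4), 2))
```

The newest sample gets the largest weight, so the filter tracks the signal while damping a single spiky reading.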
$F_n(t_n)$ represents the pressure value received by the $n$th sensor at the time point $t_n$, and $\bar{F}_n(t_n)$ is the filtered value. $n \in \{1, \ldots, N\}$ indexes the sensors, and $t_n \in \{1, \ldots, T_N\}$ is the time series at the time of collection. $x(t_{n-k})$ is the weight applied to the sample taken $k$ steps before the current one, and $\ln(k) + C$ approximates the $k$th partial sum of the harmonic series, where $C$ is Euler's constant.

4.3
Swimming Pool Intelligent Life-Saving Monitoring System Technology
The intelligent life-saving monitoring system has gained attention in many countries, some of which even require the mandatory installation of life-saving systems in swimming pools. The swimming pool intelligent life-saving monitoring system installs waterproof cameras at specific locations in the pool to collect relevant images and data, providing an intelligent wireless life-saving alarm for drowning accidents as well as video, image and digital information for competition and training. The system consists of a central control host, video capture cards, waterproof cameras, an on-site touch monitor screen, wireless transmission and alarm devices, a drowning-accident video and storage system, a set of intelligent life-saving software with auxiliary competition training software, paging vibration devices, cables, embedded parts, etc. In terms of hardware, the system uses a high-performance computer with a completely waterproof design and combines a variety of audio, video and vibration prompts, so it can give specific guidance in the event of danger.

4.4 Intelligent Identification and Detection of Swimmer's Strength Information
Feature extraction and the classifier are the key points of motion-stage classification and recognition. As a new type of intelligent information processing system, the artificial neural network (ANN) is widely used for its good nonlinear mapping ability, self-learning ability and fault tolerance. Traditional pattern classification is very strict about sample attribution: a sample belongs either to this category or to that one. In this test, the fuzzy min-max neural network is adopted to build a fuzzy neural network (FNN) structure for pattern classification. The results are shown in Table 1 and Table 2. In addition, the measurement results for arm strength and foot strength are compared, as shown in Fig. 1.
Table 1. Identification of swimmer's arm strength information based on FNN

| Phase | Training sample number | Sample recognition rate | Number of test samples | Test recognition rate |
|---|---|---|---|---|
| The run-up | 30 | 89% | 20 | 83.4% |
| Pedal swing phase | 30 | 90% | 20 | 77.6% |
| The sliding phase | 30 | 86% | 20 | 70% |
| Transition stage | 30 | 91% | 20 | 76% |
| Final exertion stage | 30 | 82% | 20 | 81% |
Table 2. Identification of swimmer's foot force based on FNN

| Phase | Training sample number | Sample recognition rate | Number of test samples | Test recognition rate |
|---|---|---|---|---|
| The run-up | 30 | 85% | 20 | 84% |
| Pedal swing phase | 30 | 92% | 20 | 75.7% |
| The sliding phase | 30 | 94% | 20 | 76.3% |
| Transition stage | 30 | 88% | 20 | 86% |
| Final exertion stage | 30 | 91% | 20 | 89% |
[Figure: grouped bar chart, y-axis ‘‘The percentage’’ (0–100%), x-axis ‘‘Testing phase’’ (the run-up, pedal swing phase, the sliding phase, transition stage, final exertion stage), comparing the test recognition rates for the arm (83.4%, 77.6%, 70%, 76%, 81%) and the feet (84%, 75.7%, 76.3%, 86%, 89%).]

Fig. 1. Comparison of the information recognition results for swimmer's foot strength and arm strength
5 Summary The analysis shows that AI analysis technology is becoming more and more mature and has a definite effect on improving performance in professional competitions. However, swimming training applications and the related equipment are still at the research stage. Therefore, beyond the software and hardware architecture, the system should use advanced sensing technology to collect data and establish a complete and efficient data management platform, and it should also mine the movement rules behind the data collected by AI to help swimmers in their daily training.
References
1. Reddy, R.: Implementation of new ways of AI in sports. Artif. Intell. 14(5), 5983–5997 (2020)
2. Galily, Y.: Artificial intelligence and sports journalism: is it a sweeping change? Technol. Soc. 54, 47–51 (2018)
3. Karimzadehfini, A., Mahdavinejad, R., Zolaktaf, V., et al.: Forecasting of rehabilitation treatment in sufferers from lateral displacement of patella using artificial intelligence. Sport Sci. Health 14(1), 37–45 (2018)
4. Peter, E., Daniel, S., Michael, A., et al.: Artificial intelligence: Bayesian versus heuristic method for diagnostic decision support. Appl. Clin. Inf. 09(02), 432–439 (2018)
5. Patel, D., Shah, D., Shah, M.: The intertwine of brain and body: a quantitative analysis on how big data influences the system of sports. Ann. Data Sci. 7(1), 1–16 (2020)
6. Fialho, G., Manhes, A., Teixeira, J.P.: Predicting sports results with artificial intelligence – a proposal framework for soccer games. Procedia Comput. Sci. 164(1), 131–136 (2019)
7. Baboota, R., Kaur, H.: Predictive analysis and modelling football results using machine learning approach for English Premier League. Int. J. Forecast. 35(2), 741–755 (2018)
8. Herold, M., Goes, F., Nopp, S., et al.: Machine learning in men's professional football: current applications and future directions for improving attacking play. Int. J. Sports Sci. Coach. 14(6), 798–817 (2019)
9. Liang, G., Lan, X., Wang, J., et al.: A limb-based graphical model for human pose estimation. IEEE Trans. Syst. Man Cybern. Syst. 48(7), 1080–1092 (2018)
10. Wang, S.: Research on the application of artificial intelligence in sports meeting management system. Revista de la Facultad de Ingenieria 32(16), 344–350 (2017)
Image Denoising by Wavelet Transform Based on New Threshold
Hua Zhu¹ and Xiaomei Wang²
¹ College of Computer and Information Engineering, Zhixing College of Hubei University, Wuhan 430010, China
² Basic Department, Army Logistic University of PLA, Chongqing 400030, China
[email protected]
Abstract. On the basis of the two classical hard and soft threshold processing methods, and combined with the improved methods mentioned in the literature, a comprehensive threshold processing method is proposed. The new threshold function not only overcomes the discontinuity and constant-deviation defects of the traditional thresholds, but also allows the error between the original wavelet coefficients and the thresholded coefficients to be tuned through its parameters. Comparative simulation experiments show that the denoising effect of the new threshold function improves significantly on traditional threshold denoising in terms of visual effect, mean square error, peak signal-to-noise ratio, etc.
Keywords: Wavelet analysis · Threshold · Image denoising
1 Introduction With the great advances of modern science and technology, the network is everywhere. People obtain a great deal of information through mobile phones, computers and so on, and pictures are a major vehicle of information dissemination. In the process of image transmission, conversion and reception, images are inevitably polluted by various kinds of noise. Tasks such as edge detection, feature extraction and pattern recognition usually need an effective denoising algorithm as preprocessing in order to recover a more faithful image. Therefore, how to remove image noise is a topic worthy of study; the key is to suppress the noise without destroying the contrast, definition and texture information of the image. Using the wavelet transform to remove image noise is a common method. Wavelet analysis has been a very active research frontier in recent years. It is a breakthrough after Fourier analysis and has brought brand-new ideas and powerful tools to many related fields. From an engineering point of view, wavelet analysis is a signal analysis and processing method, a useful time-frequency analysis method beyond the Fourier transform. As a multi-resolution analysis method, the wavelet transform can analyze both the time domain and the frequency domain; it has the characteristics of time-frequency localization and multi-resolution and is therefore especially suitable for processing non-stationary signals. Different from the single wavelet, multi- © The Editor(s) (if applicable) and The Author(s), under exclusive license to Springer Nature Singapore Pte Ltd. 2021 M. Atiquzzaman et al. (Eds.): BDCPS 2020, AISC 1303, pp. 208–213, 2021. https://doi.org/10.1007/978-981-33-4572-0_30
wavelet bases are generated by several generating functions, corresponding to several scaling functions. When processing a signal, a single wavelet can directly decompose and reconstruct the sampled data, but multiwavelets cannot: the data must be pre-processed before decomposition, and after decomposition and reconstruction the final result is obtained only through post-processing. Multiwavelets have more degrees of freedom in their construction, so compared with single wavelets they can have shorter support and more vanishing moments, and they can satisfy orthogonality and symmetry simultaneously. The multiwavelet thus keeps many advantages of the single wavelet while overcoming its defects; in practice, the important properties of smoothness, compact support and symmetry can be combined. The wavelet threshold shrinkage method was first put forward by D. L. Donoho in 1995. The method is widely used with hard and soft threshold functions [9, 10], but it also has some shortcomings. Although the two threshold functions proposed in [1, 2] improve the hard and soft threshold functions to a certain extent, the improvement is not particularly obvious, and the reconstructed image controls detail errors poorly. This article offers a new improved threshold function that combines the advantages of the soft and hard threshold functions [3, 4]. The expression of the threshold function is simple, and it is an extension of the existing threshold functions. The systematic error between the thresholded coefficients and the original coefficients is adjusted by tuning parameters; at the same time, the function is differentiable and easy to compute.
2 Wavelet Threshold Denoising Principle The first threshold denoising method was the VisuShrink method [8] proposed by Donoho. The basic idea is to convert the noisy signal into the corresponding wavelet coefficients by the wavelet transform and to set an appropriate threshold for the signal. When a wavelet coefficient is below the threshold, it is regarded as caused by noise and containing no information component, so it is discarded; when a wavelet coefficient exceeds the threshold, it is regarded as the result of the signal itself plus noise, and it is either kept (hard thresholding) or shrunk towards zero by a fixed amount (soft thresholding). Finally, the inverse wavelet transform is applied to the new coefficients to obtain the denoised signal. The main steps are as follows: ① select an appropriate wavelet decomposition level $N$ and apply the discrete wavelet transform to $f(j,l)$ to obtain the wavelet coefficients $w(j,l)$; ② set the threshold for each decomposition layer, apply thresholding to each layer's wavelet coefficients, and compute the estimated wavelet coefficients $\hat{w}(j,l)$; ③ reconstruct: from the $N$th-layer low-frequency band coefficients of the wavelet decomposition and the quantized high-frequency coefficients $\hat{w}(j,l)$ of each layer, reconstruct the image and obtain the denoised image $\hat{f}(j,l)$.
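The three steps can be sketched for a 1-D signal with a single-level Haar transform. This is a minimal sketch: the choice of the Haar wavelet, the test signal and the threshold value are illustrative assumptions, not the paper's configuration:

```python
import math

S2 = math.sqrt(2.0)

def haar_forward(x):
    """Step 1: one-level discrete Haar transform -> (approx, detail)."""
    a = [(x[i] + x[i + 1]) / S2 for i in range(0, len(x), 2)]
    d = [(x[i] - x[i + 1]) / S2 for i in range(0, len(x), 2)]
    return a, d

def soft_threshold(coeffs, lam):
    """Step 2: soft thresholding of the detail coefficients."""
    return [math.copysign(abs(c) - lam, c) if abs(c) >= lam else 0.0
            for c in coeffs]

def haar_inverse(a, d):
    """Step 3: reconstruct the signal from the processed coefficients."""
    x = []
    for ai, di in zip(a, d):
        x.append((ai + di) / S2)
        x.append((ai - di) / S2)
    return x

x = [4.0, 4.1, 4.0, 3.9, 8.0, 8.1, 8.0, 7.9]  # step signal + small noise
a, d = haar_forward(x)
y = haar_inverse(a, soft_threshold(d, lam=0.2))
print([round(v, 2) for v in y])  # small within-pair noise is smoothed away
```

With no thresholding, forward and inverse transforms reconstruct the signal exactly; the denoising comes entirely from shrinking the detail coefficients.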
3 Wavelet Threshold Function

3.1 Commonly Used Wavelet Threshold Function
(1) Hard thresholding. If the modulus of the wavelet coefficient $w$ is smaller than a given threshold $\lambda$, it is set to 0; otherwise it remains unchanged:

$$w_\lambda = \begin{cases} w, & |w| \ge \lambda \\ 0, & |w| < \lambda \end{cases} \tag{1}$$

(2) Soft thresholding. If the modulus of the wavelet coefficient $w$ is smaller than the given threshold $\lambda$, it is set to 0; otherwise the threshold is subtracted from the modulus (Figs. 1 and 2):

$$w_\lambda = \begin{cases} \operatorname{sign}(w)\,(|w| - \lambda), & |w| \ge \lambda \\ 0, & |w| < \lambda \end{cases} \tag{2}$$

Fig. 1. The hard thresholding

Fig. 2. The soft thresholding
Although the hard and soft thresholding functions are widely used in practice, they also have defects. When the hard thresholding function is used to process an image, the continuity of the estimated wavelet coefficients is poor, which may cause the reconstructed image to exhibit visual distortion such as oscillation and the Gibbs effect. The wavelet coefficients estimated by the soft thresholding function are continuous; however, when the wavelet coefficients are large, the constant deviation between the estimated and the original wavelet coefficients affects how closely the reconstructed image approximates the real image, causing an unavoidable reconstruction error. In addition, the derivative of the traditional soft threshold function is discontinuous, yet in practice higher derivatives often need to be handled, so it has certain limitations [8–10].
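The two functions of Eqs. (1) and (2) are straightforward to implement, and evaluating them on a large coefficient makes the constant deviation of soft thresholding visible (a minimal sketch; the sample values are arbitrary):

```python
import math

def hard_threshold(w, lam):
    """Eq. (1): keep w unchanged if |w| >= lam, else zero it."""
    return w if abs(w) >= lam else 0.0

def soft_threshold(w, lam):
    """Eq. (2): shrink |w| toward zero by lam if |w| >= lam, else zero it."""
    return math.copysign(abs(w) - lam, w) if abs(w) >= lam else 0.0

lam = 1.0
w = 10.0  # a large coefficient
print(hard_threshold(w, lam))  # 10.0 -> no deviation, but discontinuous at |w| = lam
print(soft_threshold(w, lam))  # 9.0  -> continuous, but constant deviation lam
```

The gap of exactly `lam` between the soft-thresholded value and the original, no matter how large `w` is, is the constant deviation criticized above.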
3.2
Streamlined Wavelet Shrinkage Function
According to the advantages of the above threshold functions and the methods mentioned in [1, 2], a new wavelet shrinkage function is proposed to estimate the wavelet coefficients, whose expression is as follows:

$$\hat{w}_\lambda =$$
.7). Both Cronbach’s alpha and composite reliability are for all constructs have scores higher than .70, demonstrating that the internal consistency is satisfactory. For the discriminant validity, we examine the cross loadings of indicators. The data of our study show that cross loadings are lower than the outer-loading. Therefore, the discriminant validity is satisfactory. We then compare correlations between latent variables with the square roots of Average Variance Extracted (AVEs). The square root of AVE of all the constructs are higher than their correlations with other constructs, showing that the discriminant validity is satisfactory. The PLS-SEM testing results for the model are shown in Fig. 2. This study signifies that eWOM diagnosticity has significant effects on eWOM adoption (b = .38, p < .001). The effect size f2 is .16, suggesting a medium effect. Thus, H1 is supported. In addition, our study finds that ease of use impacts adoption significantly (b = .27, p < .001). It has an effect size f2 of 0.09, very close to a medium effect. Thus, H2 is supported. The research further shows that ease of use impacts diagnosticity significantly (b = .14, p < .05). It has an effect size f2 of .02, a small effect. Therefore, H3 is supported. This research finds that the helpfulness indicators have significant effects on EWOM diagnosticity (b = .23, p < .001). The effect size f2 is .06, close to a medium effect. Thus, H4 is supported. Our results also reveal that the structured format have significant effects on ease of use (b = .38, p < .001). It has an effect size f2 of .16, a medium effect. Thus, H5 is supported. Our research shows that interaction of need for cognition with helpfulness indicators does not have significant effects on eWOM diagnosticity (b = .08, p > .05). Thus, H6 is not supported. 
Last, the results of our study demonstrate that interaction of need for cognition with ease of use does not have significant effects on eWOM diagnosticity (b = .10, p > .05). Therefore, H7 is not supported.
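The effect sizes reported above follow Cohen's f² convention, f² = (R²_included − R²_excluded) / (1 − R²_included), with conventional cut-offs of .02 (small), .15 (medium) and .35 (large). A small helper makes this explicit; the R² values below are invented for illustration and are not the study's data:

```python
def cohens_f2(r2_included, r2_excluded):
    """Cohen's f2 for one predictor's contribution to R^2."""
    return (r2_included - r2_excluded) / (1.0 - r2_included)

def label(f2):
    """Conventional cut-offs: .02 small, .15 medium, .35 large."""
    if f2 >= 0.35:
        return "large"
    if f2 >= 0.15:
        return "medium"
    if f2 >= 0.02:
        return "small"
    return "negligible"

f2 = cohens_f2(r2_included=0.50, r2_excluded=0.42)
print(round(f2, 2), label(f2))  # 0.16 medium
```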
Fig. 2. PLS testing results for the research model
Effects of Online Product Review
5 Discussion and Conclusion The results of the study illustrate that eWOM characteristics influence consumers' adoption decisions. Specifically, helpfulness indicators have significant effects on eWOM diagnosticity, so eWOM with helpfulness indicators increases consumers' perception of diagnosticity. Our results reveal that the structured format has significant effects on ease of use: eWOM using a structured format enhances consumers' perception of ease of use. Additionally, ease of use significantly influences diagnosticity, so a higher level of ease of use results in a higher perception of diagnosticity. Our research further shows no significant effect of the interaction of need for cognition with either helpfulness indicators or ease of use on diagnosticity. Another finding is that eWOM diagnosticity has significant effects on eWOM adoption: eWOM with high diagnosticity increases consumers' adoption of the eWOM. Last, ease of use has significant positive effects on adoption, so higher ease of use results in a higher rate of adoption. This research makes three contributions to the eWOM literature. First, we develop a theoretical model of how eWOM characteristics influence consumers' eWOM adoption, viewing consumers as both information users and system users. Second, we apply the information adoption model to the eWOM context, using eWOM diagnosticity rather than information usefulness as the predictor of adoption. Third, we apply the information systems success model to the eWOM context and find that ease of use, a major aspect of system quality, has significant effects on both adoption and diagnosticity. As for practical implications, e-commerce companies can learn how to improve eWOM characteristics to influence consumer decisions.
References
1. Xu, X.: How do consumers in the sharing economy value sharing? Evidence from online reviews. Decis. Supp. Syst. 128(1), 113–162 (2020)
2. Zhang, M., Wei, X., Zeng, D.D.: A matter of reevaluation: incentivizing users to contribute reviews in online platforms. Decis. Supp. Syst. 128(1), 113–158 (2020)
3. Hussain, S., et al.: Consumers' online information adoption behavior: motives and antecedents of electronic word of mouth communications. Comput. Hum. Behav. 80(2), 22–32 (2018)
4. Hussain, S., et al.: eWOM source credibility, perceived risk and food product customer's information adoption. Comput. Hum. Behav. 66(1), 96–102 (2017)
5. Chong, A.Y.L., et al.: Analyzing key influences of tourists' acceptance of online reviews in travel decisions. Internet Res. 28(3), 564–586 (2018)
6. Sun, Y., et al.: Bias effects, synergistic effects, and information contingency effects: developing and testing an extended information adoption model in social Q&A. J. Assoc. Inf. Sci. Technol. 70(12), 1368–1382 (2019)
7. Cui, Y., et al.: Understanding information system success model and valence framework in sellers' acceptance of cross-border e-commerce: a sequential multi-method approach. Electron. Commer. Res. 19(4), 885–914 (2019)
L. Qu et al.
8. Chen, L., Aklikokou, A.K.: Determinants of e-government adoption: testing the mediating effects of perceived usefulness and perceived ease of use. Int. J. Publ. Adm. 43(10), 850–865 (2020)
9. Burke, P.F., Dowling, G., Wei, E.: The relative impact of corporate reputation on consumer choice: beyond a halo effect. J. Mark. Manag. 34(13/14), 1227–1257 (2018)
10. Pir Mohammadiani, R., Mohammadi, S., Malik, Z.: Understanding the relationship strengths in users' activities, review helpfulness and influence. Comput. Hum. Behav. 75(2), 117–129 (2017)
11. Cyr, D., et al.: Using the elaboration likelihood model to examine online persuasion through website design. Inf. Manag. 55(7), 807–821 (2018)
The Empirical Analysis on Role of Smart City Development in Promoting Social and Economic Growth
Wangsong Xie
Business School of Wuxi Taihu University, Wuxi 214064, Jiangsu, China
[email protected]
Abstract. The ‘‘smart city’’, based on the Internet of Things and on emerging information technology industries, connects a city's operating systems at all levels with intelligence, achieving convenient production and life. This new pattern of intelligent urban development can bring not only short-term economic growth but also the progress of society as a whole and all-round economic development. With the expansion of smart city construction, it has become very important to study whether smart city development drives economic growth. The purpose of this paper is to study the role of smart city development in promoting social and economic growth. The paper first expounds the smart city; then, based on an empirical analysis using the Cobb-Douglas production function, it establishes an econometric model of economic growth driven by smart city development and obtains the output elasticity of smart cities with respect to economic growth by linear regression. The empirical results show that the construction of smart cities can indeed drive economic growth: the sum of the coefficients of labor force, capital and smart city with respect to gross national product is greater than 1.
Keywords: Smart city · Social economy · Growth promotion · Empirical research
1 Introduction Since the United States initiated the construction of the information superhighway in 1992, the information technology industry has developed rapidly around the world. Urban development has gradually combined with science and technology, opening the path of informatized urban development [1, 2]. The information technologies emerging in recent years have pushed urban informatization and intelligence to a higher level; the ‘‘smart city’’ came into being and has attracted widespread attention and follow-up in countries all over the world. The United States, Britain, Sweden, Japan, South Korea and other countries and regions have all formulated smart city development plans and construction suited to local conditions, and some have reached a high level [3, 4]. The World Bank has published a set of forecast data on the promotion of economic growth by smart cities: if the smart application coverage © The Editor(s) (if applicable) and The Author(s), under exclusive license to Springer Nature Singapore Pte Ltd. 2021 M. Atiquzzaman et al. (Eds.): BDCPS 2020, AISC 1303, pp. 221–227, 2021. https://doi.org/10.1007/978-981-33-4572-0_32
rate of a million-population city reaches over 75%, the economic output value of the city can be expanded by 3.5 times [5]. In recent years, China's economic growth has slowed down, and there is an urgent need for "structural adjustment and stable growth". As one of the key projects in China's 12th Five-Year Plan, the construction of smart cities is being carried out nationwide in full swing [6, 7]. It can not only drive the development of the relevant information technology industries, but also, to a certain extent, promote the transformation of the economic development mode toward technology and innovation, so as to promote economic growth and benefit people's livelihood [8, 9]. Therefore, exploring the promotion effect of smart city development on economic growth through empirical analysis, with the help of economic tools and from a narrow, intuitive perspective, is of great theoretical and practical significance [10]. This paper first expounds the concept of the smart city; then, based on the Cobb-Douglas production function, it establishes an econometric model of how smart city development promotes economic growth and obtains the output elasticity of smart city development with respect to economic growth by linear regression. The empirical results show that the construction of smart cities can indeed drive economic growth and conform to China's national conditions. Through the experiments, this paper finds that the sum of the output elasticities of labor, capital and smart city development with respect to gross national product is greater than 1.
2 Smart Cities and Economic Growth

2.1 Smart City

A smart city is a highly integrated, intelligent and coordinated urban management network system, built on big data information technologies such as the Internet of Things, cloud computing and sensor networks, that joins together the city's infrastructure, environment, population, economy, culture and social systems. It is a higher stage of urban informationization and modernization whose purposes are improved economic benefit, better residents' lifestyles, optimized environmental resources, ecological sustainability, the stable evolution of spiritual civilization, and social harmony. A smart city can achieve comprehensive sensing: by deploying smart sensors and similar equipment, it realizes the collection, monitoring, statistics and analysis of the various components of the city. A smart city can achieve full integration: based on the Internet of Things, the city's networks and maps make decision-making more scientific and efficient. A smart city can also stimulate innovation, providing a smart foundation for the whole society and thus continuous incentives for higher-level science, technology and innovation.
2.2 Empirical Research on the Promotion Effect of Smart City Development on Economic Growth
Since there is as yet no unified research model for the promoting effect of smart city construction on economic growth, this paper uses the Cobb-Douglas production function as the basis of the model:

Y = A·F(L, K) = A0 e^(λt) L^α K^β   (1)
In the function, Y represents GDP (economic output), L represents the labor force, and K represents the capital stock. A0 e^(λt) represents the level of technological progress; α, β and λ are the output elasticity coefficients of the labor factor, the capital factor and the technology factor respectively. A smart city is built on the basis of industrialization, informationization and other technologies: technology is the foundation of smart city development, and smart city development in turn constantly drives technological progress. To facilitate the empirical study of the role of smart cities in promoting economic growth, this paper transforms formula (1):

A(t) = A0 e^(λt) = A0 S^θ   (2)
S represents the construction and development level of the smart city; θ represents the output elasticity of smart city construction; A0 here represents the factors of technological progress other than the level of smart city development. Substituting formula (2) into formula (1) yields a production function that includes the smart city factor, namely the model used in this paper to study the promotion effect of smart city development on economic growth:

Y = A0 S^θ L^α K^β   (3)
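Formula (3) is usually estimated after taking logarithms, which turns the power function into a form suitable for linear regression (the disturbance term ε below is our addition for illustration; the paper does not write out its estimating equation explicitly):

```latex
\ln Y = \ln A_0 + \theta \ln S + \alpha \ln L + \beta \ln K + \varepsilon
```

The coefficients on ln S, ln L and ln K recovered by ordinary least squares are then exactly the output elasticities θ, α and β discussed in the empirical results.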
3 Experimental Design of Smart City to Promote Economic Growth

3.1 Data Acquisition
After obtaining the comprehensive index of smart city development over the years, and according to the needs of the model and the principles and scope of data selection, the original data of the other variables can be obtained from the statistical yearbook published by the National Bureau of Statistics of the People's Republic of China, as shown in Table 1.

Table 1. Raw data

Year   GDP       Social employment   Investment in fixed assets throughout society
2017   473140    76420               311485
2018   519470    76704               374694
2019   5688445   76977               446294

3.2 Principles and Scope of Data Selection

The data of this paper mainly include the dependent variable (gross national product), the independent variables (labor force and capital stock), and several major indicators of smart cities. Since there are no specific statistics and measurements for smart cities in China, it is difficult to select the data, and some choices must be made according to the corresponding principles. In the model, GDP Y, labor force L and capital stock K are based respectively on the GDP, fixed asset investment and employed population published by the National Bureau of Statistics.
4 Analysis and Discussion of Experimental Results of Smart City Promoting Economic Growth

4.1 Analysis and Discussion of Experimental Results
From the results of the regression analysis it can be seen clearly that the labor index, the capital index and the smart city composite index all pass the t-test with good significance, the overall result also passes the F-test with good significance, and the goodness of fit (R²) is very high. We can therefore regard the model and the regression results as valid. From the coefficient terms of the regression results, it can be seen that from 2009 to 2013 the smart city variable is positively correlated with GDP, and the output promotion elasticity of the smart city with respect to GDP is 2.109; that is, one unit of progress in smart city development brings 2.109 units of economic growth. The labor variable is also positively correlated with GDP, with an output elasticity of 2.054, while the third independent variable, capital, is negatively correlated with GDP. In addition, the empirical results show that the sum of the output elasticities of labor, capital and smart city development with respect to gross national product is greater than 1, indicating that China's economic growth is still in the stage of increasing returns to scale. With the continuous construction and development of smart cities, China's economic growth will show a better trend. The regression analysis results are shown in Table 2 and Fig. 1.

4.2 Policy Suggestions
(1) Formulate feasible top-level planning. Smart city planning is an innovative activity based on urban planning, the urban status quo, and urban economic and social development planning, and it is work that affects urban development. Smart city planning requires in-depth research into the historical form, geographical characteristics, current situation and economic and social positioning of the city, so as to shape the city's smart positioning scientifically. A smart city requires comprehensive planning, which covers not only the information and communication infrastructure, urban information application systems and the development of the urban information industry, but also synchronously plans the institutions, mechanisms and regulations for the construction and operation of the smart city.

Table 2. Results of regression analysis

Variable     Coefficient   T value    T-value significance
L            2.054         29.089     0.007
K            −3.07         −29.935    0.021
S            2.109         24.672     0.026
Constant C   0             −0.328     0.798
F value: 67176.73; F-value significance: 0.003; R²: 0.9888; D-W value: 2.652

Fig. 1. Regression analysis results (bar chart of the test values for variables L, K, S and constant C; original figure not reproduced)

(2) Focus on developing the smart economy and smart industries. First, the government should strengthen its guiding role in the industry and provide the necessary support for the Internet of Things, cloud computing, LTE and other new-generation information technologies. Second, the status of enterprises as the main body of scientific and technological innovation should be highlighted: at present most of China's scientific and technological innovation comes from colleges, universities and scientific research institutions, whose research achievements are relatively difficult to commercialize, whereas enterprises face consumers and the market on the front line, so their innovation and technological progress better promote both their own development and that of the industry. Third, the patent protection mechanism should be improved to safeguard the rights and interests of innovators. At present, patent infringement occurs frequently in China, and the new-generation information technology industry is the hardest-hit area; protecting the rights and interests of innovators helps them innovate better and promotes industrial innovation.

(3) Strengthen the information infrastructure. It is necessary to establish a complete and efficient big data processing center. The big data processing center is the brain of the whole city; its completeness and operation determine the efficiency of a city's information processing and operation. The big data processing center aggregates the data of the whole city, draws corresponding conclusions through intelligent computation, and guides the coordinated operation of the various systems to jointly achieve the efficient operation of the whole city. Government websites and online offices are the trend of smart government development, and they can greatly reduce the cost of administration and conveniently meet the various administrative demands of citizens and enterprises. The laying of LTE base stations, optical fiber, sensors and other backbone facilities is the foundation of urban informationization and intelligence: the sensors collect all kinds of information in the process of city operation and send it through the backbone "neural network" to the big data processing center; the center's processed conclusions are communicated through the backbone to the various systems, and the systems also communicate with one another through the backbone network.
5 Conclusion

All in all, supported by information, knowledge and intellectual resources, a smart city acquires information transparently and fully, transmits it broadly and securely, and processes it effectively and scientifically, thereby improving the efficiency of urban operation and management, raising the level of urban public services, and making urban development more innovative, orderly and sustainable, forming a low-carbon urban ecosystem and building a new form of urban development. According to the national plan, China's urbanization level will reach around 50 to 52% by 2020. The planning, construction, management and public service systems of these large, medium and small cities will certainly create extensive and urgent demands for the application of information technology to improve the management quality and adaptability of cities. All of this must depend on the application and support of research results in urban intelligent engineering. It can be expected that information-based investment will provide a stable and huge industrial space. As the whirlwind of the smart city spreads across China, the smart life that people look forward to will inevitably bring new business opportunities in the digital era.
References

1. Yeh, H.: The effects of successful ICT-based smart city services: from citizens' perspectives. Gov. Inf. Q. 34(3), 556–565 (2017)
2. Sholla, S., Naaz, R., Chishti, M.A.: Ethics aware object oriented smart city architecture. China Commun. 14(5), 160–173 (2017)
3. Nonko, E.: Smart city dreams. Metropolis 38(5), 106–112 (2019)
4. Dainow, B.: Smart city transcendent: understanding the smart city by transcending ontology. ORBIT J. 1(1), 1–5 (2017)
5. Zentz, K.: Planning the smart city. Public Utilities Fortnightly 157(3), 48–51 (2019)
6. Dandrea, J.: Smart city. Am. Cranes Transp. 13(8), 55 (2017)
7. Yuen, B.: Singapore: smart city, smart state. J. Southeast Asian Stud. 49(2), 349–351 (2018)
8. Dillon, N.: Intelligent smart city upgrades. Intertraffic World 2018, 172–173 (2018)
9. Macke, J., Casagrande, R.M., Sarate, J.A.R., et al.: Smart city and quality of life: citizens' perception in a Brazilian case study. J. Cleaner Prod. 182, 717–726 (2018)
10. Stone, M., Knapper, J., Evans, G., et al.: Information management in the smart city. Bottom Line 31(3–4), 234–249 (2018)
In-Situ Merge Sort Using Hand-Shaking Algorithm

Jian Zhang1 and Rui Jin2
1 School of Information Engineering, Liaoning Institute of Science and Engineering, Jinzhou, Liaoning, China
[email protected]
2 Computer Teaching and Research Office, Jinzhou School of Modern Service, Jinzhou, Liaoning, China
Abstract. In present computer systems, data processing occupies an enormous share of processing time: roughly 50% or more of CPU time is spent sorting data. Sorting algorithms therefore place high demands on execution speed, so implementing a fast, well-behaved sorting algorithm is particularly important. The traditional merge sort algorithm uses two-way merging, which requires auxiliary space as large as the data to be sorted, so it is worth improving. This paper introduces the traditional merge sort method and an improved in-situ merge algorithm based on the hand-shaking method, aiming to provide a theoretical basis for improving traditional data sorting methods.

Keywords: In-situ · Merge sort · Hand-shaking
1 Introduction

Many fields of the smart city need the support of big data. A Chinese city with a population of 5 million will typically accumulate about 13 PB of medical data over 25 years, and other data, such as school students' records, will be even larger. A city is called smart precisely because its management is intelligent, collaborative and diverse; in the face of such huge volumes of data, more effective management has become the key to smart city data management [1]. In order to reduce the time and space complexity of traditional merge sorting algorithms, this paper proposes a new method based on the hand-shaking technique.
2 Merge Sort

Merge sort is a good example of divide and conquer: it first divides the sequence into several subsequences, sorts the subsequences, and finally merges and joins them into a new ordered sequence. The method requires that each subsequence segment be ordered before merging. For example, merging two ordered sequences into one ordered sequence is called "2-way merge"; its time complexity is O(N lg N) and its space complexity is O(N), and it is a stable sorting method [2].
© The Editor(s) (if applicable) and The Author(s), under exclusive license to Springer Nature Singapore Pte Ltd. 2021 M. Atiquzzaman et al. (Eds.): BDCPS 2020, AISC 1303, pp. 228–233, 2021. https://doi.org/10.1007/978-981-33-4572-0_33
3 Traditional Merge Sort Implementation Method

To merge sort a sequence of n elements: first, merge sort every two adjacent numbers to form several ordered subsequences, so that floor(n/2) subsequences are formed; secondly, merge the new subsequences again to form floor(n/4) subsequences; and so on, until all sequence elements are sorted. The following example is a 2-way merge sort of a[8] = {49, 38, 65, 97, 76, 13, 27, 49}.

Initial data sequence: [49] [38] [65] [97] [76] [13] [27] [49]
After the first pass:  [38 49] [65 97] [13 76] [27 49]
After the second pass: [38 49 65 97] [13 27 49 76]
After the third pass:  [13 27 38 49 49 65 76 97]
Implementation process analysis and implementation algorithm:

Step 1: Allocate a buffer large enough to hold the two ordered sequences;
Step 2: Use pointers to mark the starting positions of the two ordered sequences;
Step 3: Compare the elements the pointers refer to, put the smaller one into the buffer established in Step 1, and move that pointer backward one position;
Step 4: Repeat Step 3 until one sequence reaches its end;
Step 5: Copy the remaining elements to the end of the buffer.

void MergeSort(int array[], int first, int last)
{
    int mid = 0;
    if (first < last) {
        mid = (first + last) / 2;
        MergeSort(array, first, mid);      /* sort the left half  */
        MergeSort(array, mid + 1, last);   /* sort the right half */
        Merge(array, first, mid, last);    /* merge the two ordered halves */
    }
}

/* core loop of the improved in-place merge */
while (fir < sec && sec <= last) {
    while (fir < sec && a[fir] <= a[sec])
        fir++;                             /* front elements already in order */
    count = 0;
    while (sec <= last && a[fir] > a[sec]) {
        count++;   /* if the second half's record is smaller, count the consecutive smaller numbers */
        sec++;
    }
    if (count == 0)
        break;
    Exchange(&a[fir], count, fir, sec);
    fir += count;
}
void Exchange(int *a, int count, int fir, int sec)
{
    int m = sec - fir - count, exsize = sec - fir;   /* hand-shaking method */
    /* count: length of the block of consecutive smaller data; m: length of the
       unmerged data of the first sequence. Choose between hand-shaking rotation
       and ordinary in-place moving. */
    if (count > 4 && m > 3) {
        Reverse(a, m);               /* rotate the unmerged data of sequence 1 */
        Reverse(a + m, exsize - m);  /* rotate the block of smaller data */
        Reverse(a, exsize);          /* rotate the whole middle segment */
    } else {
        Move(a, count, fir, sec);
    }
}

void Reverse(int *a, int revsize)
{
    int i = 0, rev = revsize - 1, temp;
    while (i < rev) {
        temp = a[i]; a[i] = a[rev]; a[rev] = temp;
        i++; rev--;
    }
}

void Move(int *a, int count, int fir, int sec)
{
    int i, j, temp, n = sec - fir;
    for (i = 0; i < count; i++) {    /* insert each smaller element in turn */
        temp = a[n - count + i];
        for (j = n - count + i; j > i; j--)
            a[j] = a[j - 1];
        a[j] = temp;
    }
}
5 Improved Algorithm Analysis

The space complexity of the hand-shaking in-place merge is only O(1). For roughly ordered sequences, its time efficiency is much higher than that of the original merge sort algorithm and comparable to that of ordinary in-place merging and merge sort based on data-block exchange. When the second subsequence contains longer runs of data smaller than the first subsequence, its time efficiency is also much higher than that of ordinary in-situ merging and merge sort based on data-block exchange. The best case of the hand-shaking in-place merge is a sequence that is already sorted: only m comparisons are needed and nothing is moved. The worst case is a pair of interleaved sequences such as (1, 3, 5, 7, 9) and (2, 4, 6, 8, 10), where the method has no advantage. Because of its rotations, the improved merge is no longer stable.
As for the natural merge sort algorithm, the idea is simple, but there are many details to consider in the implementation, such as the fact that the number of ordered groups is halved (rounded down) with each merge pass, and the control of r, which each time must stay within v.size() − 1. The implementation may require repeated testing to get right [9, 10].
6 Conclusion

Although the traditional merge sort algorithm is stable, it costs more in time and space complexity than the hand-shaking in-situ merge algorithm proposed in this paper. Today, with the rapid development of smart cities, the emergence of big data, and the interconnection of the Internet Plus and Internet of Things markets, data execution efficiency is particularly important. We believe that through continuous testing and improvement our algorithm will achieve new breakthroughs, bringing new vitality to the big data industry and to the development of the smart city.
References

1. Yu, Y.: Improving the two-way parallel sorting method by the hand-shake method. J. Southwest Natl. Univ. Nat. Sci. Ed. 35(5), 1087–1090 (2019). (in Chinese)
2. Ma, J., Qin, Y.: An improved co-ranking algorithm. Bohai Univ. J. Nat. Sci. Ed. 30(2), 190–192 (2018). (in Chinese)
3. Wang, B., Hu, W.: A new co-sorting algorithm. Comput. Knowl. Technol. 18(6), 49–50 (2018). (in Chinese)
4. Yang, H., Wang, X.: A new link-and-sort algorithm. Aero. Comput. Technol. 18(3), 100–102 (2019). (in Chinese)
5. Wang, W., Qiu, C.: A new parallel sorting algorithm. Comput. Eng. Appl. 17(2), 87–90 (2018). (in Chinese)
6. Qiu, C.: Parallel sorting algorithm and its implementation in PC clusters. Zhengzhou Univ. 10(6), 201–202 (2017). (in Chinese)
7. Lin, Y.: Parallel sorting algorithm on the hypercube structure. Hunan Univ. J. 16(5), 5–6 (2019). (in Chinese)
8. Chen, H., Chen, W., Qin, L.: Parallel sorting algorithm on reconfigurable computing models with wide bus networks. Comput. Eng. Sci. 1(2), 100–101 (2020). (in Chinese)
9. Chen, H., Chen, H.: Fast parallel sorting algorithm on the RAPWBN computing model. Small Microcomput. Syst. 15(2), 108–109 (2019). (in Chinese)
10. Chen, H., Chen, W., Shen, J.: Implementation of Valiant-based parallel sorting on an optical bus array. Comput. Eng. 32(1), 97–99 (2018). (in Chinese)
An Environmental Data Monitoring Technology Based on Internet of Things

Yan Wang and Ke Song
Department of Electronic Engineering, Sichuan Aerospace Vocational College, Chengdu, China
[email protected]
Abstract. As the third wave of the information industry after computer technology and the Internet, Internet of Things technology has been applied in many fields. At the same time, the accelerating industrialization process has brought environmental pollution and other problems that restrict economic development and have attracted widespread attention; monitoring and protecting the environment is urgent. Applying the technology to environmental monitoring can not only provide effective real-time monitoring of the environment, but also provide an important data basis for environmental supervision and management through information sharing and decision support. Therefore, this paper first summarizes the relevant content of environmental monitoring and the Internet of Things and analyzes the current application status of Internet of Things technology in environmental monitoring; it then adopts a fuzzy comprehensive evaluation algorithm to calculate on the data obtained from the Internet of Things monitoring system, and finally obtains the parameters of the environment.

Keywords: Internet of things · Environmental data · Fuzzy comprehensive evaluation · Judgment matrix
1 Introduction

Internet of Things (IoT) technology appeared in the 1990s and was first proposed by MIT in the United States [1]. It mainly refers to technology in which all objects are connected to the network by sensors to realize the intelligent identification of objects [2, 3]. As the era changes, the applications of the Internet of Things gradually expand, so its definition grows ever broader. Generally speaking, the Internet of Things can be understood as transmission by the network, with perception devices performing information processing and data computation and mining out relevant information; ultimately it is a connection between things and people or between things and things, and an extension of information and network technology. Environmental monitoring has been carried out since the 1970s [4]. At present it is an important means of evaluating current environmental quality trends, although there is no unified definition of the concept of environmental monitoring [5]. With the development and change of the industry, research covers all aspects of monitoring, from environmental pollution to environmental quality. Generally speaking, the process of environmental monitoring is divided into three steps. First, it is necessary to conduct purposeful sample-plot investigation, collect and treat the samples, and follow the emission rules of pollution sources. Secondly, the authenticity of the samples must be ensured; the collected samples are tested and analyzed, and the obtained results are analyzed and processed. Finally, the obtained data are sorted and evaluated according to relevant standards, and their comprehensive indexes are determined.
© The Editor(s) (if applicable) and The Author(s), under exclusive license to Springer Nature Singapore Pte Ltd. 2021 M. Atiquzzaman et al. (Eds.): BDCPS 2020, AISC 1303, pp. 234–239, 2021. https://doi.org/10.1007/978-981-33-4572-0_34
2 Application of Internet of Things Technology in Environmental Monitoring

The continuous development of Internet of Things technology has brought applications to all walks of life [6], such as the intelligent logistics of e-commerce, the smart home for families, and the medical monitoring used by medical departments, but environmental monitoring was the earliest field in which Internet of Things technology was involved. The structure of the Internet of Things is complex and diverse; its architecture mainly includes three levels. The perception layer, as the core of Internet of Things technology, connects the physical world and the information world through hardware devices such as RFID tag readers, two-dimensional code readers and cameras. As the basis of the Internet of Things, it is mainly responsible for perceiving the environment or the attributes of materials. The network layer builds on the perception layer and transfers the information to the application layer. The application layer summarizes and transforms the data, processes the data in its support-platform sublayer, and serves the relevant industries in its application-service sublayer. Through the Internet it is possible to monitor atmospheric changes at a given point: the sensors upload the monitored data through the nodes of the sensing layer [7], the sensing layer transfers them to the application layer, and when toxic or harmful gases or units with excessive emissions are detected, the relevant environmental protection law enforcement agencies are informed in time to handle and rectify the situation and ensure air quality. Monitoring water quality through Internet of Things technology mainly includes the monitoring of drinking water and of water pollution. Drinking water is the guarantee of people's life [8]. It is monitored mainly by installing sensors and cameras at the water source and reporting the data of the various indicators of water quality, including pH value, sulfur dioxide, iron and other elements. When drinking water is found to be polluted, an alarm is raised; the pollution information is returned to the relevant sewage discharge unit and the monitoring center, and the pollution accident can be handled in time to avoid a major pollution incident. In addition, environmental monitoring also includes soil monitoring, electromagnetic radiation monitoring, forest vegetation protection and so on, which provides
reliable information support and intelligent security guarantees for environmental governance. The system architecture is designed according to the characteristics of the environmental monitoring system. The system mainly includes two subsystems: the field computer system and the monitoring platform. The field computer system runs on the field computer and is responsible for the data acquisition of the field instruments, data processing and packaging, and sending these data to the monitoring platform. It mainly includes four modules: data acquisition, data transmission, parameter setting and control. The monitoring platform runs on the monitoring server [9] and is mainly responsible for receiving the data collected from the field computer, processing them, and displaying the processed results to the monitoring personnel.
3 Fuzzy Comprehensive Evaluation Method

Decision problems arise all the time in actual production and daily life, and in industrial practice many evaluation criteria are coupled [10]. When the judgment function is obscured by the coupling of multiple factors, an exact solution cannot be obtained; this is fuzziness. It is therefore necessary to conduct comprehensive evaluation when rating things, but fuzzy factors are often present, which requires fuzzy mathematics to evaluate the fuzzy relationship. The computing steps are as follows.

Step 1. Determination of the evaluation set. X = {x1, x2, …, xn} represents all possible evaluations, where n is their number. All the evaluations make up the evaluation set, and each xi (i = 1, 2, …, n) is a possible component of the evaluation vector.

Step 2. Determination of the factor set. U = {u1, u2, …, um} is the factor set, and each ui is an evaluation factor associated with the results.

Step 3. Fuzzy evaluation of single factors. Each of the m factors ui (i = 1, 2, …, m) is evaluated against the evaluation set: the membership degree of ui to the j-th grade xj is rij (j = 1, 2, …, n), so the single-factor evaluation vector of ui is ri = (ri1, ri2, …, rin). When judging at the criteria layer, the evaluation vectors of the m scheme-layer factors under each criterion make up the evaluation matrix.

Step 4. Fuzzy comprehensive evaluation. The judgment matrix is

        | r11  r12  …  r1n |
   R =  | r21  r22  …  r2n |
        | …                |
        | rm1  rm2  …  rmn |

Row i of R reflects the impact of the i-th factor on the membership of the evaluation object; column j of R represents the impact of all factors on the j-th element of the evaluation set.

Step 5. Establishment of the weight set.
The impact of the different factors ui determines the weight values wi, which constitute the weight set W = {w1, w2, …, wm}. The analytic hierarchy process is applied to determine the weights of the different factors with respect to the different indexes. The weight of the criteria layer is W = {wi1, wi2, …, wim}, and the weight of the scheme layer is wim. The weights of each layer satisfy the constraint

∑ wi = 1,  wi ≥ 0   (1)
Step 6. Determination of the weight of the criteria layer. The target layer of the factor set of the fuzzy comprehensive evaluation and the ranking of the criteria and property evaluations are determined as X = {x1, x2, x3, x4, x5} = {good, relatively good, general, relatively poor, poor}.

Step 7. Constructing the judgment matrix. A proper basic decision matrix must be established in each layer, the weight relationships of the indexes determined, and the judgment matrix of the relationships of the elements established. By listing the degrees of importance among the various elements of each layer, the comparison matrix A = (aij)n×n is obtained, with aii = 1; that is, the comparison result of an element with itself is 1. Let ri = ∑(j=1..n) aij, i = 1, 2, …, n. The comparison matrix is changed into an indirect judgment matrix through the following mathematical variation:

dij = ((ri − rj)/(rmax − rmin))·(bm − 1) + 1,           when ri − rj ≥ 0
dij = 1 / [ ((rj − ri)/(rmax − rmin))·(bm − 1) + 1 ],   when ri − rj < 0      (2)
The indirect judgment matrix obtained has the following property:

$$\begin{cases} 1/b_m \le d_{ij} < 1, & d_{ij} < 1 \\ 1 \le d_{ij} \le b_m, & d_{ij} \ge 1 \end{cases} \qquad (3)$$
That means the value range of $d_{ij}$ is the scale $1 \sim b_m$, so

$$d_{ij} = 1/d_{ji} \qquad (4)$$
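A small sketch of the variation of Eq. (2) and the reciprocal property of Eq. (4) in Python (the row sums below are hypothetical, and the degenerate case $r_{\max} = r_{\min}$ is not handled):

```python
def indirect_judgment(r, bm=9):
    """Build the indirect judgment matrix d from row sums r of the
    comparison matrix, following the variation formula with scale bm."""
    rmax, rmin = max(r), min(r)
    n = len(r)
    d = [[1.0] * n for _ in range(n)]
    for i in range(n):
        for j in range(n):
            t = (abs(r[i] - r[j]) / (rmax - rmin)) * (bm - 1) + 1
            # d_ij >= 1 when r_i >= r_j, else the reciprocal, so d_ij = 1/d_ji
            d[i][j] = t if r[i] >= r[j] else 1.0 / t
    return d

d = indirect_judgment([3.0, 2.0, 1.0])   # d[0][2] = 9 at the 9-point scale
```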
The indirect matrix after the change still has the reciprocal property of symmetrical matrix elements. When $b_m = 9$, this is the 9-point scale.

Step 8. Calculating relative weights. The characteristic root of judgment matrix A is solved in this paper from $AW = \lambda_{\max} W$; after normalization, W gives the weights of $A_1, A_2, \ldots, A_n$, which are re-ordered cyclically under $C_k$. The existence and uniqueness of $\lambda_{\max}$ can be known from the equation, so W can be represented by a positive component vector, and W also exists and is
238
Y. Wang and K. Song
unique. To test the consistency of the judgment matrix, the index CI is defined from the principal eigenvalue and the matrix order:

$$CI = \frac{\lambda_{\max} - n}{n - 1} \qquad (5)$$
For matrices of order 1 to 10, the consistency indices of the judgment matrix are shown in Table 1.

Table 1. Consistency indices of judgment matrices of order 1 to 10

Order number  1  2     3     4     5     6     7     8     9     10
RI            0  0.22  0.47  0.83  1.06  1.18  1.29  1.37  1.42  1.55
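The consistency test of Eq. (5) combined with the RI values of Table 1 can be sketched as follows (the 3 × 3 judgment matrix is hypothetical and perfectly consistent, so CI and CR come out as zero; note the RI values here are this paper's, which differ slightly from Saaty's classical table):

```python
import numpy as np

# RI values taken from Table 1 (order 1 to 10)
RI = [0, 0.22, 0.47, 0.83, 1.06, 1.18, 1.29, 1.37, 1.42, 1.55]

def consistency(A):
    """Return (lambda_max, CI, CR) for an n x n judgment matrix A,
    with CI = (lambda_max - n) / (n - 1) as in Eq. (5)."""
    n = A.shape[0]
    lam = max(np.linalg.eigvals(A).real)   # principal eigenvalue
    CI = (lam - n) / (n - 1)
    return lam, CI, CI / RI[n - 1]

# Hypothetical, perfectly consistent 3 x 3 reciprocal matrix (a_ik = a_ij * a_jk)
A = np.array([[1.0, 2.0, 4.0],
              [0.5, 1.0, 2.0],
              [0.25, 0.5, 1.0]])
lam, CI, CR = consistency(A)   # lam = 3, CI = 0: the matrix is consistent
```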
Four ways to solve the credit evaluation weighting coefficient vector are shown in Table 2.
Table 2. Four ways to solve the credit evaluation weighting coefficient vector

Level 1 indicator  Parameter1  Parameter2  Parameter3  Parameter4
Weight             0.132       0.512       0.188       0.130
Each factor is evaluated with a full mark of 1, and the scoring results of the factors are shown in Table 3.

Table 3. Survey results of factor membership

Index content of schematic layer  Good  Relatively good  General  Relatively poor  Poor
Parameter1                        0.28  0.31             0.22     0.16             0.02
Parameter2                        0.36  0.35             0.15     0.12             0.02
Parameter3                        0.22  0.32             0.23     0.14             0.08
Parameter4                        0.33  0.38             0.06     0.09             0.03
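Combining the weight vector of Table 2 with the membership matrix of Table 3, the comprehensive evaluation vector B = W · R can be sketched as follows (the weighted-average operator is assumed, since the paper does not name the fuzzy composition operator):

```python
import numpy as np

W = np.array([0.132, 0.512, 0.188, 0.130])        # weights from Table 2
R = np.array([[0.28, 0.31, 0.22, 0.16, 0.02],     # Parameter1 row of Table 3
              [0.36, 0.35, 0.15, 0.12, 0.02],     # Parameter2
              [0.22, 0.32, 0.23, 0.14, 0.08],     # Parameter3
              [0.33, 0.38, 0.06, 0.09, 0.03]])    # Parameter4

B = W @ R                                         # comprehensive evaluation vector
grades = ["Good", "Relatively good", "General", "Relatively poor", "Poor"]
overall = grades[int(np.argmax(B))]               # maximum-membership principle
```

With these numbers the largest component of B falls on "Relatively good".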
4 Conclusions

With the implementation of major strategies, environmental monitoring based on the Internet of Things is accelerating its modernization. As the Internet of Things matures, how to monitor and control the environment through the big data it generates has become a pressing problem. This paper therefore proposed an environmental data monitoring technology based
on the Internet of Things, which allows the value of the data to be fully exploited and provides a new method and new ideas for researchers and government decision-makers.
References 1. Fang, S., Xu, L., Zhu, Y.: An integrated system for regional environmental monitoring and management based on internet of things. IEEE Trans. Ind. Inf. 10(2), 1596–1605 (2014) 2. Jiao, J., Ma, H., Qiao, Y.: Design of farm environmental monitoring system based on the internet of things. Adv. J. Food Sci. Technol. 6(3), 368–373 (2014) 3. Chen, L.W., Yang, J.H., Cao, X.H.: Indoor environmental monitoring system in the framework of internet of things. J. Univ. Electron. Sci. Technol. China 41(2), 265–268 (2012) 4. Shuli, Z.: Research on applications and technology of Environmental protection based on internet of things. Chin. J. Environ. Manag. 524(527), 371–375 (2012) 5. Shixian, Z., Rui, X.: Research and development of water quality online monitoring system based on Internet of things technology. Desalin. Water Treat. 122(8), 25–29 (2018) 6. Cao, J.H., Wang, L.L., Luo, H.X.: Research on key techniques for monitoring system of agricultural products transportation environment based on internet of things. Adv. Mater. Res. 588–589, 1086–1090 (2012) 7. Wei, Z., Tao, F., Rui, Z.: Research on applications and systems integration of smart Environmental monitoring based on internet of things. North. Environ. 12(3), 19–23 (2012) 8. Tang, C., Yang, N.: A monitoring and control system of agricultural environmental data based on the internet of things. J. Comput. Theor. Nanosci. 13(7), 4694–4698 (2016) 9. Zhang, K.S., Zhang, X.W., Zhou, Y.: Design of agricultural greenhouse environment monitoring system based on internet of things technology. Adv. Mater. Res. 791–793, 1651– 1655 (2013) 10. Xi-Jie, W.: Application research of ecological environment monitoring based on internet of things technology. Transd. Microsyst. Technol. 30(7), 149–152 (2011)
A Path Planning Method for Environmental Robot Based on Intelligent Algorithm Ke Song(&) Department of Electronic Engineering, Sichuan Aerospace Vocational College, Chengdu, China [email protected]
Abstract. Path planning technology for environmental robots is an important research topic in the field of robotics. In this paper, global path planning based on an environment model is studied: according to known environmental information, an intelligent algorithm is applied to plan an optimal path. The paper first introduces path planning methods based on the environment model and summarizes common environment modeling methods. Then, according to the characteristics of each model, the grid method is adopted for environment modeling, and a grid-map path search and generation algorithm considering the robot model is given. The grid space is used to search the route, and the route range is divided into a series of standard grids. The whole grid area is regarded as a complete binary image, and each cell grid represents a pixel. Practice shows that the accuracy and efficiency of this research in robot path planning are better than those of other algorithms. Through in-depth study, it can provide a reference for the reasonable optimization of robot path planning and design.

Keywords: Path planning · Environmental robot · Intelligent algorithm · Grid map
1 Introduction

Path planning is one of the important foundations for a robot to carry out other tasks [1, 2]. The effect and efficiency of path planning are key performance indicators of robot intelligence [3]. Good path planning greatly improves the working efficiency of a mobile robot and has very high social value. In recent years, the development of mobile robots has moved towards high autonomy [4, 5]. On the one hand, mobile robots with high autonomy can complete path planning in a static environment [6]; on the other hand, they can make real-time autonomous decisions about their next motion when dynamic, unknown elements appear in that static environment, and finally realize autonomous movement towards the target at a small cost. Therefore, as a research field of great significance both in theory and in practice, robot path planning technology has naturally become one of the research hotspots in mobile robot technology.
© The Editor(s) (if applicable) and The Author(s), under exclusive license to Springer Nature Singapore Pte Ltd. 2021 M. Atiquzzaman et al. (Eds.): BDCPS 2020, AISC 1303, pp. 240–246, 2021. https://doi.org/10.1007/978-981-33-4572-0_35
2 Path Planning Method for Environmental Robot

With the deepening of robot applications, path planning has become one of the hot research technologies in the robot field [7]. Path planning methods can be divided into three categories. The first is case-based learning, in which a priori knowledge is collected to plan the robot's behavior in advance. The second is based on an environment model: the robot models the known part of its motion environment according to its own position and posture information, and for the unknown dynamic part it obtains environmental information through on-board sensors, so as to establish a complete dynamic environment model. Path planning based on an environment model mainly involves three aspects [8]: environment representation, the planning method and path execution. Environment representation concerns how to establish a reasonable, safe and efficient environment model. The planning method concerns how to optimize the robot path in the established model environment. Path execution makes the robot walk along the given path, applying small-range path corrections with full consideration of the robot's mechanical dynamics. The third is the behavior-based path planning method, which decomposes a large planning task into several small, relatively simple behaviors distributed among many agents, and finally integrates them to complete complex large-scale tasks. Good path planning reflects the efficiency and safety of mobile robots, and path planning is of great significance for environmental robots. At present, there is no universal planning method that can adapt to all environments and systems.
3 Environment Modeling Method

3.1 Topological Method
As a way of map representation, the topological graph is known for its compactness and conciseness. The topological graph method transforms the path planning problem in the robot's environment space from high dimension to low dimension [9]. Once the topology network is established, each node in the topological map represents a place, and robot path planning can be completed according to the node data. This method has two advantages: little modeling time and storage space are needed, and planning is fast. The topological map weakens the contour boundaries of obstacles and requires low positioning accuracy.

3.2 Visual Graph Method
The visual graph method uses geometric modeling to treat the obstacles in the environment as particles. The algorithm connects all the obstacle vertices and the robot's starting and destination points pairwise with straight lines, then removes in turn the lines that pass through obstacles, and finally forms the straight-line links of the barrier-free path from the starting point to the
obstacle vertices and then to the destination vertex. Finally, the optimal path can be formed among these lines using a search algorithm. This method has limitations for obstacles with smooth edges. To address this problem, the tangent graph method uses tangent arcs to represent obstacles in the environment, thus solving the problem that vertices are difficult to find on smooth edges; its drawback is that the generated path is the shortest path hugging the obstacles. The Voronoi diagram is a visual-graph method from the perspective of practical application: the path stays as far away from obstacles as possible, which means the planning result is not optimal in length.

3.3 Neural Network Method
The energy function defined by the neural network method makes the robot always move in the direction of lower energy and finally reach the destination [10]. The disadvantage of this method is that the planned result is often merely a feasible path, and the method easily falls into local convergence. Therefore, the neural network method introduces simulated annealing and similar techniques, which can avoid falling into local extremum points to a certain extent, so as to find the global optimal solution.

3.4 Grid Method
The grid method discretizes the environment into two-dimensional or three-dimensional basic unit grids. The grid size determines the resolution of the discretized environment, and the robot environment is modeled by marking these grids. To save storage space, quadtrees and similar methods can be used for modeling, or the 2D environment can be scanned point by point for convenient access. A search algorithm is then used to obtain the planned path. This method has been widely used because the discrete modeling idea is very consistent with the characteristics of computer storage and operation.
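The discretization described above can be sketched as follows (the obstacle representation, axis-aligned rectangles, and the cell size are assumptions for illustration):

```python
import numpy as np

def rasterize(obstacles, width, height, cell=1.0):
    """Discretize a 2D environment into a binary occupancy grid:
    1 marks an obstacle pixel, 0 a free pixel, matching the view of the
    grid map as a binary image."""
    grid = np.zeros((int(height / cell), int(width / cell)), dtype=np.uint8)
    for (x0, y0, x1, y1) in obstacles:   # axis-aligned rectangles (assumed)
        r0, r1 = int(y0 / cell), int(np.ceil(y1 / cell))
        c0, c1 = int(x0 / cell), int(np.ceil(x1 / cell))
        grid[r0:r1, c0:c1] = 1
    return grid

g = rasterize([(2, 2, 4, 5)], width=10, height=8)   # one 2 x 3 obstacle
```

A finer cell size raises the resolution at the cost of storage, which is where quadtree compression pays off.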
4 Path Intelligent Search Algorithm

4.1 Path Optimization Constraints
Different from general path-type optimization design problems, the constraint conditions of path optimization design mainly concern road line geometry, and the combination of different path types is closely related to path safety. For path geometry, there are mainly five types of constraints.

a. Minimum plane curve radius constraint:

$$R_{Hi} - R_{\min} \ge 0, \quad (1 \le i \le m) \qquad (1)$$
b. Maximum corner constraint. In order to avoid sudden changes in the plane line type, it is necessary to set the maximum corner constraint of the plane curve segment.
$$\alpha_{\max} - \arccos\frac{(x_i - x_{i-1})(x_{i+1} - x_i) + (y_i - y_{i-1})(y_{i+1} - y_i)}{\sqrt{(x_i - x_{i-1})^2 + (y_i - y_{i-1})^2}\sqrt{(x_{i+1} - x_i)^2 + (y_{i+1} - y_i)^2}} \ge 0, \quad (1 \le i \le m) \qquad (2)$$

c. Maximum slope constraint:

$$G_{\max} - \frac{H_j - H_{j-1}}{M_j - M_{j-1}} \ge 0, \quad (1 \le j \le n) \qquad (3)$$
d. Shortest intermediate straight-line length constraint. The curve formed by a short straight line between two circular curves of the same direction easily produces an optical illusion, so a shortest intermediate straight-line length constraint must be set:

$$\sqrt{(x_i - x_{i-1})^2 + (y_i - y_{i-1})^2} - R_{i-1}\tan\frac{\alpha_{i-1}}{2} - R_i\tan\frac{\alpha_i}{2} - L_{t\min} \ge 0 \qquad (4)$$
e. Flat and vertical curve coupling constraints. In path selection design, the following coupling constraints between flat and vertical curves should be considered. On an urgent turn, it is forbidden to insert a flat curve with a large corner, such as a sharp turn, at the top of a convex vertical curve or at the bottom of a concave vertical curve. In addition, it is forbidden to combine a flat curve of small turning radius with a steep slope. The path geometry constraints can be expressed as

$$G_i(X, Y, R_H, L_0, M, H, R_V, BS, BE, TS, TE) \ge 0 \qquad (5)$$

Therefore, the complete inspection path optimization model is

$$\min f(X, Y, R_V, BS, BE, TS, TE) = \min \sum_{n=1}^{5} B_n \qquad (6)$$

$$\text{s.t.} \quad G_i(X, Y, R_V, BS, BE, TS, TE) \ge 0$$
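A sketch of checking the first two geometry constraints for a candidate path (the function name, point coordinates and limit values are hypothetical; the corner angle of constraint (2) is taken as the angle between consecutive segments):

```python
import math

def check_geometry(radii, xs, ys, R_min, alpha_max):
    """Check constraint (1), minimum curve radius, and constraint (2),
    maximum corner angle between consecutive segments (radians)."""
    if any(R < R_min for R in radii):                     # constraint (1)
        return False
    for i in range(1, len(xs) - 1):
        v1 = (xs[i] - xs[i - 1], ys[i] - ys[i - 1])
        v2 = (xs[i + 1] - xs[i], ys[i + 1] - ys[i])
        dot = v1[0] * v2[0] + v1[1] * v2[1]
        norm = math.hypot(*v1) * math.hypot(*v2)
        angle = math.acos(max(-1.0, min(1.0, dot / norm)))
        if angle > alpha_max:                             # constraint (2)
            return False
    return True

ok = check_geometry([50, 60], [0, 10, 20], [0, 0, 5],
                    R_min=30, alpha_max=math.pi / 3)      # True: both hold
```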
4.2 Search for Path Scheme
The grid space is used for the patrol path search, and the line selection range is divided into a series of standard grids. The entire grid area is regarded as a complete binary image, and each cell grid represents one pixel. Taking the path control points, such as known buildings and project equipment, as the target points, the connection cost between two grids, that is, the objective function value obtained above, is defined as the space distance between two pixels. The space distance value of each pixel
generated by the distance transformation is the minimum cost through its necessary points, and the corresponding path is the optimal path scheme. The steps of the algorithm are as follows.

Step 1. Initialize the distance values of all grid points. For any grid point p, if it belongs to the target point set T, then $D_p^0 = 0$; otherwise it belongs to the non-target point set N and $D_p^0 = \infty$.

Step 2. Define a standard neighborhood template. The neighborhood template is the set $U_N$ formed by all the grids that the current grid attempts to connect to during the distance-map update. The update process traverses $U_N$ to select an optimal grid connection. The standard neighborhood template $U_{SN}$ is defined, which uses forward scanning and reverse scanning to traverse all grids at distance R from the current grid. Let the grid width be w; the row and column offsets $\Delta R$ and $\Delta C$ of the forward standard neighborhood template grids relative to the center grid are determined by

$$\alpha = w/R, \quad n = \lceil \pi/\alpha \rceil, \quad \Delta R = [R \sin(\alpha i)], \quad \Delta C = [R \cos(\alpha i)], \quad i = 1, 2, \ldots, n \qquad (7)$$
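The template generation of Eq. (7) can be sketched as follows (the reading α = w/R for the angular step and n = ⌈π/α⌉ for the point count is an assumption, and rounding plays the role of the brackets in ΔR and ΔC):

```python
import math

def standard_template(R, w=1.0):
    """Forward standard-neighbourhood offsets (dR, dC) at radius R:
    rounded points on the half-circle of radius R with angular step a = w/R."""
    a = w / R                       # assumed: angular step for grid width w
    n = math.ceil(math.pi / a)
    offs = []
    for i in range(1, n + 1):
        dR = round(R * math.sin(a * i))
        dC = round(R * math.cos(a * i))
        if (dR, dC) not in offs:    # drop duplicates created by rounding
            offs.append((dR, dC))
    return offs

fwd = standard_template(3)
rev = [(-dR, -dC) for dR, dC in fwd]   # reverse template: negated offsets
```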
The inverse standard neighborhood template is symmetric with the forward one; only $\Delta R$ and $\Delta C$ change sign.

Step 3. Scan grids. All the grids in the standard neighborhood template are traversed and their spatial distances are calculated. If the update condition is met, the distance to the target point set is updated. The standard neighborhood search finds, within the standard neighborhood template of the current grid $G_{i,j}$, the optimal connection point that meets the various constraints to the target point. In forward scanning, only the grids in the upper left corner are traversed; in reverse scanning, only the grids in the lower right corner. Assume the row and column of any traversed grid are r, c.

a. If the terrain property of $G_{r,c}$ is infeasible (type = 1), skip the grid; otherwise go to b.
b. If the distance value $d^E_{r,c}$ of $G_{r,c}$ is infinite, indicating that $G_{r,c}$ cannot connect to the target point, skip the grid; otherwise go to c.
c. Calculate the natural ground slope between $G_{i,j}$ and $G_{r,c}$. If the slope is greater than the limit slope, the two grids are not connectable; skip the grid, otherwise go to d.
d. Calculate the connection cost d between $G_{i,j}$ and $G_{r,c}$. If $d^E_{r,c} + d < d^E_{i,j}$, the route through $G_{r,c}$ with cost $d^E_{r,c} + d$ is better than the current route with cost $d^E_{i,j}$, so go to e; otherwise skip the grid.
e. DT data update/assignment:

$$d^E_{i,j} = d^E_{r,c} + d, \quad \Delta R^E_{i,j} = r - i, \quad \Delta C^E_{i,j} = c - j, \quad \Delta R^{E'}_{r,c} = i - r, \quad \Delta C^{E'}_{r,c} = j - c \qquad (8)$$
After traversing all the grids, if $d^E_{i,j} = \infty$, the connection of $G_{i,j}$ to the target point was not found in the standard neighborhood, and the next grid is scanned.

Step 4. Repeat Step 3 until the spatial distance values of all grids in the study area no longer change. The final optimal path is obtained according to the connection manner of the shortest path to each target point recorded by each non-target grid point.
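The distance-map iteration of Steps 1–4 behaves like a shortest-path relaxation over the grid; a minimal sketch with the standard neighbourhood template simplified to an 8-neighbour step set (the step costs and grid data are illustrative):

```python
import heapq

def distance_map(grid, targets):
    """Minimum connection cost from every free cell to the nearest target.
    grid[r][c] == 1 marks an infeasible cell (type = 1 in Step 3a).
    Simplification: 8 neighbours with unit / diagonal step costs instead of
    the radius-R standard neighbourhood template."""
    rows, cols = len(grid), len(grid[0])
    INF = float("inf")
    dist = [[INF] * cols for _ in range(rows)]
    heap = []
    for r, c in targets:                    # Step 1: D = 0 on target points
        dist[r][c] = 0.0
        heapq.heappush(heap, (0.0, r, c))
    d2 = 2 ** 0.5
    steps = [(-1, -1, d2), (-1, 0, 1), (-1, 1, d2), (0, -1, 1),
             (0, 1, 1), (1, -1, d2), (1, 0, 1), (1, 1, d2)]
    while heap:                             # Steps 3-4: relax until stable
        d, r, c = heapq.heappop(heap)
        if d > dist[r][c]:
            continue
        for dr, dc, w in steps:
            nr, nc = r + dr, c + dc
            if 0 <= nr < rows and 0 <= nc < cols and grid[nr][nc] == 0:
                if d + w < dist[nr][nc]:    # update rule of Steps 3d-3e
                    dist[nr][nc] = d + w
                    heapq.heappush(heap, (d + w, nr, nc))
    return dist

grid = [[0, 0, 0],
        [0, 1, 0],
        [0, 0, 0]]
dist = distance_map(grid, targets=[(0, 0)])
```

The obstacle cell keeps an infinite distance, and the far corner is reached around it at cost 2 + √2.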
5 Conclusions

The space distance value of each pixel generated by the distance transformation is the minimum cost through its necessary points, and the corresponding path planning of the environmental robot is the optimal scheme. Through case analysis, the calculation method of the intelligent algorithm is examined. Practice shows that the calculation accuracy and work efficiency of this research in engineering projects are better than those of other algorithms, which provides a reference for the reasonable optimization of environmental robot design.
References 1. Zeng, N., Zhang, H., Chen, Y.: Path planning for intelligent robot based on switching local evolutionary PSO algorithm. Assembly Autom. 36(2), 120–126 (2016) 2. Lee, J., Kim, D.W.: An effective initialization method for genetic algorithm-based robot path planning using a directed acyclic graph. Inf. Sci. 3, 368–373 (2016) 3. Zhao, X.Z., Chang, H.X., Zeng, J.F., Gao, Y.B.: Path planning method for mobile robot based on particle swarm algorithm. Appl. Res. Comput. 24(3), 181–183 (2017) 4. Tan, X.D., Wang, X., Song, P.W.: A algorithm of path planning based on multiple mobile robots. Appl. Mech. Mater. 470, 621–624 (2014) 5. Huang, H.C.: FPGA-based parallel metaheuristic PSO algorithm and its application to global path planning for autonomous robot navigation. J. Intell. Rob. Syst. 76(3), 475–488 (2014) 6. Scienceengineering, C.O.I.: A method based genetic algorithm for path planning of a mobile robot. Microcomput. Inf. 24(17), 267–269 (2008) 7. Zhong, X., Tian, J., Hu, H.: Hybrid path planning based on safe A* algorithm and adaptive window approach for mobile robot in large-scale dynamic environment. J. Intell. Rob. Syst. 99(1), 65–77 (2020)
8. Wang, Y., Cao, W.: A global path planning method for mobile robot based on a threedimensional-like map. Robotica 32(04), 611–624 (2014) 9. Al-Araji, A., Ahmed, A.K., Dagher, K.E.: A cognition path planning with a nonlinear controller design for wheeled mobile robot based on an intelligent algorithm. Univ. Baghdad Eng. J. 25(1), 64–83 (2019) 10. Louste, C., Liegeois, A.: Near optimal robust path planning for mobile robots: the viscous fluid method with friction. J. Intell. Rob. Syst. 27(1–2), 99–112 (2000)
Design of Fractal Art Design Image Based on One-Dimensional MFDMA Algorithm Chunhu Shi(&) Guangdong University of Science and Technology, Dongguan 523083, Guangdong, China [email protected]
Abstract. Traditional construction algorithms such as iterative hard thresholding introduce obvious artificial effects into constructed art design images, especially at low sampling rates. These artificial effects not only seriously degrade the visual quality of the constructed images, but also affect the performance of subsequent processing. The one-dimensional MFDMA algorithm for art design images in this paper takes the construction results of the traditional algorithm as input and extracts features at different scales, achieving multi-scale information mining and fusion for the constructed image. Most of the artificial effects are removed from the recovered image by the one-dimensional MFDMA algorithm; the subjective visual quality of the art design image improves obviously, and the result is closer to the original image.

Keywords: Fractal · Art design image · One-dimensional MFDMA algorithm · Construction results
1 Introduction

It is very challenging to construct the art design image directly from the measured values. Although the iterative hard threshold method can handle art design patterns to a certain extent, there are still obvious differences in both subjective visual evaluation and objective parameter evaluation [1, 2]. The classification technology can be summarized by fractal characteristics that are independent of rotation, translation, stretching and scale. These characteristics have made fractal and multifractal analysis active in various disciplines in recent years [3, 4], with good application results. Compared with constructing the art design image directly from the observed values, it is easier to take a constructed art design image as the starting point. Comparing the construction results of iterative hard thresholding with the original art design image reveals a large number of artificial effects in the construction results. At the same time, the measurement process loses information, so the loss of detail in the constructed art design image is serious. Fractal theory is increasingly penetrating the study of art design image regularity [5–7]. The concepts of fractal and multifractal are now increasingly widely applied; they essentially describe the complexity and self-similarity of objects. Fractal and multifractal are natural results that do not depend on self-

© The Editor(s) (if applicable) and The Author(s), under exclusive license to Springer Nature Singapore Pte Ltd. 2021 M. Atiquzzaman et al. (Eds.): BDCPS 2020, AISC 1303, pp. 247–251, 2021. https://doi.org/10.1007/978-981-33-4572-0_36
similarity of scale [8]. A single fractal dimension cannot fully describe the characteristics of signals: many images with different visual meanings have very similar fractal dimensions, and the fractal dimension cannot distinguish a monofractal set from a multifractal set [9, 10]. To obtain a more detailed description of fractals, multifractal theory must be introduced to add parameters describing the different fractal subsets. To solve these problems, this paper uses the one-dimensional MFDMA algorithm to efficiently improve the quality of the constructed art design image. The main reason for choosing this method is that the one-dimensional MFDMA algorithm has developed rapidly in recent years, achieving good results in classification, recognition, detection, tracking, and image and video restoration.
2 One-Dimensional MFDMA Algorithm

For a given time series $x(t),\ t = 1, 2, \ldots, N$, the MFDMA analysis proceeds as follows.

Step 1. Build the cumulative sum sequence:

$$y(t) = \sum_{i=1}^{t} x(i), \quad t = 1, 2, \ldots, N \qquad (1)$$
Step 2. Calculate the moving average function of the cumulative sum sequence:

$$\tilde{y}(t) = \frac{1}{n} \sum_{k=-\lfloor (n-1)\theta \rfloor}^{\lceil (n-1)(1-\theta) \rceil} y(t - k) \qquad (2)$$
Here $\theta \in [0, 1]$ is the position parameter ($\theta$ = 0, 0.5 and 1 are the three cases usually discussed), and n is the window size.

Step 3. The residual sequence is computed by subtracting the moving average function: $e(t) = y(t) - \tilde{y}(t)$.

Step 4. The residual sequence is divided into $N_n$ non-overlapping parts of length n,

$$N_n = \lfloor N/n - 1 \rfloor \qquad (3)$$

so each part can be expressed as

$$e_v(i) = e(k + i), \quad i = 1, 2, \ldots, n \qquad (4)$$

where $k = (v - 1)n$. Then calculate the fluctuation deviation for each part:

$$F_v^2(n) = \frac{1}{n} \sum_{i=1}^{n} e_v(i)^2 \qquad (5)$$
Step 5. Calculate the q-th order fluctuation function:

$$F_q(n) = \left\{ \frac{1}{N_n} \sum_{v=1}^{N_n} F_v^q(n) \right\}^{1/q} \qquad (6)$$
It can be seen from formula (6) that different values of q weight the fluctuation deviations $F_v^2(n)$ in $F_q(n)$ to different degrees. If, as n varies, the following power-law relationship holds,

$$F_q(n) \sim n^{h(q)} \qquad (7)$$

then h(q) is called the generalized Hurst exponent, which describes the persistence of the fluctuations over the scale interval. Its relation to the fractal dimension is approximately $h = 2 - d_f$, where h(2) is the Hurst exponent of a one-dimensional time series. Taking the logarithm of formula (7) gives

$$\log(F_q(n)) \sim h(q)\log(n) \qquad (8)$$
In this way, the power-law relation of formula (7) can be verified on the double-logarithmic plot of $F_q(n)$ against n, and h(q) is obtained by least-squares estimation. When h(q) is independent of q, the time series x(t) is monofractal; when h(q) depends on q, the time series x(t) is multifractal. The relation between h(q) and the scaling function (mass exponent) $\tau(q)$ is

$$\tau(q) = qh(q) - 1 \qquad (9)$$

The scaling function $\tau(q)$ can also be used to characterize multifractality: when $\tau(q)$ is a nonlinear function of q, the time series x(t) is multifractal; otherwise it is monofractal. The multifractal spectrum $f(\alpha)$ is the Legendre transform of $\tau(q)$:

$$\alpha = d\tau(q)/dq, \quad f(\alpha) = \alpha q - \tau(q) \qquad (10)$$
In formula (10), $\alpha$ is called the singularity exponent. h(q) and $\tau(q)$ describe multifractality from a global point of view, while the singularity exponent $\alpha$ describes it locally, so $\alpha$ is also called the local Hölder exponent.
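Steps 1–5 can be sketched in Python (an independent sketch, not the paper's Matlab code: backward moving average with θ = 0, and h(q) estimated by the least-squares fit of Eq. (8); the white-noise input is illustrative and should give h(2) ≈ 0.5):

```python
import numpy as np

def mfdma_hurst(x, scales, q_list, theta=0.0):
    """One-dimensional MFDMA, Steps 1-5: estimate h(q) from the slope of
    log Fq(n) versus log n."""
    x = np.asarray(x, dtype=float)
    N = len(x)
    y = np.cumsum(x)                                      # Step 1
    logF = {q: [] for q in q_list}
    for n in scales:
        shift = int((n - 1) * theta)                      # position parameter
        ma = np.convolve(y, np.ones(n) / n, mode="valid") # Step 2, length N-n+1
        resid = y[n - 1 - shift : N - shift] - ma         # Step 3
        Nn = len(resid) // n
        segs = resid[: Nn * n].reshape(Nn, n)             # Step 4
        F2 = np.mean(segs ** 2, axis=1)
        for q in q_list:                                  # Step 5, Eq. (6)
            if q == 0:
                Fq = np.exp(0.5 * np.mean(np.log(F2)))
            else:
                Fq = np.mean(F2 ** (q / 2)) ** (1.0 / q)
            logF[q].append(np.log(Fq))
    logn = np.log(np.asarray(scales, dtype=float))
    return {q: np.polyfit(logn, logF[q], 1)[0] for q in q_list}

rng = np.random.default_rng(0)
h = mfdma_hurst(rng.standard_normal(10000),
                scales=[16, 32, 64, 128, 256], q_list=[2])
```

For a monofractal input such as white noise, h(q) stays flat in q; a multifractal input would make h(q) vary with q.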
3 Simulation Analysis

The experimental environment is as follows: the tests in this section were completed on a desktop with an Intel i7 CPU and 32 GB of memory, and the programming software is Matlab 2014b.
In order to comprehensively compare the visual effects of the reconstruction results of the traditional method and the algorithm proposed in this paper, the constructed art design images under different sampling rates are shown in Fig. 1.
Fig. 1. The construction results of the art design images
From Fig. 1, it is obvious that the construction results of the traditional algorithm contain a large number of artificial effects, which greatly affect the visual quality of the image, especially at low sampling rates. With the proposed algorithm, the subjective visual quality of the image improves obviously, and the result is closer to the original image. The results also show that the proposed method is more effective for relatively smooth images, for which the visual quality of the constructed image is better. Although the proposed algorithm suppresses the artificial effects, many details still cannot be restored. In general, the proposed algorithm can significantly improve the subjective visual quality of the traditional construction algorithm; while suppressing the compression effect, it keeps the edges of the image and restores some of its details.
4 Conclusion

It is still difficult to completely extract all the information in an art design image, although there has been much research on gray level, edge, texture, region shape, high-dimensional singularity detection and direction information in art design images. Recognizing an object in an art design image requires highly integrated use of the information contained in the image, based on the needs of a given area. How to use texture, gray, edge, direction and other information reasonably and effectively in image recognition remains to be studied further. For images with complex backgrounds, it is difficult for traditional methods to show their advantages, so the processing of these art design images requires a better description tool. If the multifractal feature can be combined with the shape and direction of the region, the image recognition rate can be further improved, which has certain reference significance.
References 1. Tang, J., Ziniu, Yu., Liu, L.: A delay coupling method to reduce the dynamical degradation of digital chaotic maps and its application for image encryption. Multimedia Tools Appl. 17(8), 381–394 (2019) 2. Yuan, X., Cai, Z.: An adaptive triangular partition algorithm for digital images. IEEE Trans. Multimedia 99(2), 1 (2018) 3. Morris, M., Spiller, N.: The shadowy thickening of space and time with chance: an interview with the quay brothers. Arch. Des. 2, 72–77 (2018) 4. Duffy, K.M.: Kentucky by design: the decorative arts and american culture (review). J. Am. Folklore 131(9), 99–108 (2018) 5. Arana, L.M.L.: Architecture between the panels: comics, cartoons and graphic narrative in the (new) neo avantarde. Arch. Des. 61(4), 108–113 (2019) 6. Poggenpohl, S.: Fire signs, a semiotic theory for graphic design. Visible Lang. 35(7), 51–52 (2017) 7. Lingard, H., Blismas, N., Harley, J.: Making the invisible visible Stimulating work health and safety-relevant thinking through the use of infographics in construction design. Eng. Constr. Arch. Manag. 22(1), 59–70 (2018) 8. Merkulova, V.A., Voronina, M.V., Tretyakova, Z.O.: Designing mountain drawings with the help of computer- aided design (CAD). IOP Conf. Ser. Mater. Sci. Eng. 451(11), 121–122 (2018) 9. Choi, S., Aizawa, K., Sebe, N.: FontMatcher: font image paring for harmonious digital graphic design. In: 23rd International Conference on Intelligent User Interfaces, vol. 26, no. 5, pp. 37–41 (2018) 10. Lei, M.: Research on professional design software for the creativity promotion of graphic design. Adv. Mater. Res. 926(12), 2849–2852 (2013)
Application of Data Mining Technology in Geological Exploration Engineering Anping Zhang(&) Power China Hubei Electric Engineering Co., LTD., Wuhan, China [email protected]
Abstract. With the rapid development of modern Internet and information technology, new surveying and mapping technology in geological exploration engineering also incorporates elements of modern science and technology, which strengthens the scientific rigor of surveying and mapping and realizes the application of data mining technology in geological exploration engineering across a larger scope and more domains. The application of new surveying and mapping technology using data mining improves the accuracy of geological exploration projects, enhances their quality, efficiency and safety, and plays an important role in the smooth development of geological exploration engineering. This paper expounds the application of data mining technology in geological exploration engineering and puts forward a GPS positioning method based on the new surveying and mapping technology. The simulation results show that this method effectively improves the surveying and mapping process and improves the accuracy of surveying and mapping data through data mining in geological exploration engineering.

Keywords: Data mining · GPS positioning ·
Geological exploration Surveying and mapping
1 Introduction

The emergence of new geological exploration technology is a product of the continuous development of the times and a main manifestation of the evolution of scientific capability [1]. In the new period, the pace of scientific and technological development has accelerated significantly, especially in the Internet plus era, and new geological exploration technology has gained a more favorable development environment [2]. With the continuous development of the global positioning system, new geological exploration technology can not only be applied in some advanced fields, but can also locate actual ground work and help relevant departments and staff to analyze ground space and location [3]. The global positioning system has improved exploration programs in both hardware and software, enabling geological exploration personnel to provide higher-quality and more convenient services to residents, enterprises and society as a whole, so that it has become an important part of modern geological exploration engineering [4]. © The Editor(s) (if applicable) and The Author(s), under exclusive license to Springer Nature Singapore Pte Ltd. 2021 M. Atiquzzaman et al. (Eds.): BDCPS 2020, AISC 1303, pp. 252–258, 2021. https://doi.org/10.1007/978-981-33-4572-0_37
With the deepening of scientific research, new surveying and mapping technology has provided convenient tools for geological exploration engineering [5]. Using new surveying and mapping technology to map and refine details according to the complexity of geological work is of great significance for geological exploration. This paper studies the application of new surveying and mapping technology in geological exploration engineering.
2 Application of New Geological Exploration Technology Based on Data Mining

2.1 Application of Geological Exploration in the Field of Image Data Mining
With the rapid development of data mining technology, image technology and digital technology are applied to geological exploration work in engineering projects [6]. Accordingly, the application of geological exploration technology in this field has also increased significantly. In terms of map and imaging technology, traditional mapping usually relies on GIS systems to process maps, digitize data or generate images, and the effect is not ideal, leaving the data without adequate analysis results. In geological exploration work, surveying and mapping units often lack the corresponding hardware, software and other technical facilities [7]. They usually rely on manpower and expanded capital investment to complete mapping work, which yields only approximately correct results. Modern surveying and mapping technology can make use of science and technology to save money and time and deepen the mapping work, giving it obvious advantages over GIS systems. Compared with the heavy analysis workload of GIS systems in surveying and mapping, the new surveying and mapping technology, supported by imaging and digital technology, can organize the corresponding maps and divide map scales accurately; in addition, deeper processing and repair can be carried out according to the map imagery. In recent years, digital technology has shifted toward tracking and scanning tools, integrating and comparing various types of resources, so that data processing completes geological exploration work more efficiently and with clear advantages. In daily terrain exploration and field work, collecting data through this technology is also an important step in surveying work.

2.2 Application of New Geological Exploration in the Field of Photography Data Mining
The introduction of new surveying and mapping technology into photography mainly optimizes the way photographs are taken and, through modern data mining technology, enhances the effectiveness of obtaining basic information about the surveyed objects [8]. At present, photography in surveying and mapping has entered the photogrammetric mapping stage, in which images are analyzed by means of computer or video
processing; geological exploration work has moved from outdoor to indoor, which greatly improves work efficiency. Urbanization is accelerating and urban population density keeps growing, which makes outdoor photographic measurement very difficult. Measuring with the new surveying and mapping technology not only allows large-scale project mapping but also improves work efficiency and reduces the difficulty of the work [9]. Mapping and updating urban maps can provide more information for urban builders, and the correctness of the information is also fundamentally guaranteed. In addition, the limitations of traditional measurement technology hinder the effective development of three-dimensional industrial measurement [10]. Three-dimensional measurement technology has developed rapidly with industrial computer technology and intelligent algorithms. Workers use electronic theodolites or photographic instruments to carry out surveying and mapping operations, and perform big data analysis and measurement at the computer terminal.

2.3 Application of New Geological Exploration in the Other Fields of Data Mining
New mapping technology is used in some large water conservancy projects together with global positioning and mature 3D measurement technology. Target detection can be achieved through coordinate frames and image scanning techniques to understand the specific situation of a water conservancy project. At the same time, quality can be monitored with mobile terminal equipment and technology. The application of surveying and mapping technology in water conservancy projects plays an important role in promoting the quality and efficiency of the project. Water quality monitoring through Internet of Things technology mainly covers the monitoring of drinking water and of water pollution discharge ports. Drinking water is the guarantee of people's life. It is mainly monitored by installing sensors and cameras at the water source, which report the data of various indicators in the water, including pH value, sulfur dioxide, iron and other elements. When drinking water is found to be polluted, an alarm is given; the pollution information is returned to the relevant sewage discharge unit and the monitoring center so that the pollution accident can be handled in time and major pollution incidents avoided. Along with the acceleration of urbanization, underground lines, waterways and other pipelines face aging and the heavy consumption of large urban populations. In this case, the laying of pipelines under the city, especially the construction of drainage pipelines, is closely related to the new surveying and mapping technology. Most cities are able to carry out urban investigation through digital mapping technology and photographic testing technology. Surveying and mapping technology also plays an important role in the improvement of drainage pipes and urban rivers.
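The alarm logic described above amounts to a threshold check per indicator. In the sketch below, the indicator names and permissible ranges are illustrative assumptions, not values from the paper or any monitoring standard.

```python
# Illustrative sketch of the water-quality alarm logic described above.
# The indicator names and permissible ranges are assumed examples.
LIMITS = {
    "ph": (6.5, 8.5),        # assumed permissible pH range
    "so2_mg_l": (0.0, 0.25), # assumed sulfur dioxide limit, mg/L
    "iron_mg_l": (0.0, 0.3), # assumed iron limit, mg/L
}

def check_sample(sample):
    """Return the list of indicators that fall outside their allowed range."""
    violations = []
    for name, value in sample.items():
        low, high = LIMITS[name]
        if not (low <= value <= high):
            violations.append(name)
    return violations

def alarm(sample):
    """Produce an alarm message when any indicator is out of range."""
    violations = check_sample(sample)
    if violations:
        return "ALARM: " + ", ".join(sorted(violations))
    return "OK"

print(alarm({"ph": 7.1, "so2_mg_l": 0.1, "iron_mg_l": 0.2}))  # OK
print(alarm({"ph": 9.2, "so2_mg_l": 0.1, "iron_mg_l": 0.5}))
```

In a deployed system the readings would arrive from the sensors mentioned above and the alarm would be forwarded to the monitoring center rather than printed.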
Surveying and mapping technology can effectively avoid damaging existing lines and original pipelines during excavation, which improves the efficiency of underground pipeline use. At the same time, surveying and mapping technology can report the measurement information through the tracking
system. For some leakage, it can find the leaking point in time and solve the problem promptly. Cadastral survey is a modern land measurement technology. By surveying different plots of land, the boundary points of the land can be determined. In view of the high precision and difficulty of cadastral measurement, the cadastral map can be obtained quickly and effectively by the global positioning system. The investigation of land construction projects relies on cadastral survey technology. At the same time, the accuracy and efficiency of cadastral detection can be improved by combining computer intelligent algorithms with land detection, which saves time and money, reduces the difficulty of the surveying and mapping personnel's work, and enables real-time monitoring of the cadastre, so that problems can finally be solved. Communication engineering is an important project in the development of the twenty-first century. On the one hand, communication engineering undertakes the daily communication and liaison work of residents; on the other hand, it plays a significant role in urban construction. Therefore, the construction target, time and line can be determined according to the originally planned line of the communication project, in accordance with the standardized management of project construction. The measurement and comparison of construction survey deviations focus on the size of the existing errors and reduce the probability of communication errors. In order to ensure the development of communication engineering, workers can track and measure the specific base points through GPS, and record and analyze the observations.
3 GPS Positioning Method

Step 1. Set the perturbation correction Δn and calculate the mean angular velocity n of the satellite:

n = n0 + Δn   (1)

n0 = √(GM / a³)   (2)
When the satellite clock error is corrected at observation time t0:

t = t0 − Δt   (3)

Δt = a0 + a1(t − toe) + a2(t − toe)²   (4)

When the clock correction Δt has been calculated, t can be approximated by t0.

Step 2. Calculate the mean anomaly (near point angle) Ms at the observation time:

Ms = M0 + n(t − toe)   (5)
The eccentric anomaly Es is obtained from Kepler's equation:

Es = Ms + es sin Es   (6)

The true anomaly fs is then calculated:

cos fs = (cos Es − es) / (1 − es cos Es)   (7)

sin fs = √(1 − es²) sin Es / (1 − es cos Es)   (8)

fs = arctan(√(1 − es²) sin Es / (cos Es − es))   (9)
Step 3. Calculate the argument of latitude u0 and the orbit perturbation correction terms:

u0 = ω0 + fs   (10)

δu = cus sin 2u0 + cuc cos 2u0
δr = crs sin 2u0 + crc cos 2u0   (11)
δi = cis sin 2u0 + cic cos 2u0

Applying the perturbation corrections, the corrected argument of latitude u, the satellite radius r and the orbital inclination i are:

u = u0 + δu
r = as(1 − es cos Es) + δr   (12)
i = i0 + δi + (di/dt)(t − toe)
Step 4. Compute the coordinates in the orbital plane coordinate system of the satellite:

x = r cos u,  y = r sin u   (13)

The evaluation results obtained after processing the survey results are shown in Figs. 1, 2 and 3.
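The four steps above can be collected into one routine. The sketch below is a minimal implementation of Eqs. (1)–(13) for the in-plane coordinates (omitting the clock and inclination corrections); the ephemeris field names and the demonstration values are placeholders of this sketch, not real broadcast data.

```python
import math

GM = 3.986005e14  # WGS-84 Earth gravitational constant, m^3/s^2

def satellite_plane_coords(t, eph):
    """Steps 1-4: compute satellite coordinates (x, y) in the orbital plane.
    `eph` is a dict of broadcast-ephemeris-like parameters; the field names
    are this sketch's own naming, not an interface-document format."""
    a = eph["sqrt_a"] ** 2
    n0 = math.sqrt(GM / a ** 3)            # (2) mean motion
    n = n0 + eph["delta_n"]                # (1) corrected mean motion
    tk = t - eph["toe"]                    # time from ephemeris epoch
    Ms = eph["M0"] + n * tk                # (5) mean anomaly
    e = eph["e"]
    Es = Ms
    for _ in range(10):                    # (6) Kepler's equation, fixed point
        Es = Ms + e * math.sin(Es)
    sin_f = math.sqrt(1 - e * e) * math.sin(Es) / (1 - e * math.cos(Es))  # (8)
    cos_f = (math.cos(Es) - e) / (1 - e * math.cos(Es))                   # (7)
    fs = math.atan2(sin_f, cos_f)          # (9) true anomaly
    u0 = eph["omega"] + fs                 # (10) argument of latitude
    du = eph["cus"] * math.sin(2 * u0) + eph["cuc"] * math.cos(2 * u0)    # (11)
    dr = eph["crs"] * math.sin(2 * u0) + eph["crc"] * math.cos(2 * u0)
    u = u0 + du                            # (12) corrected argument of latitude
    r = a * (1 - e * math.cos(Es)) + dr    # (12) corrected radius
    return r * math.cos(u), r * math.sin(u)  # (13)

# Demonstration with placeholder ephemeris values (no perturbations).
eph = {"sqrt_a": math.sqrt(26560e3), "delta_n": 0.0, "toe": 0.0, "M0": 0.0,
       "e": 0.01, "omega": 0.0, "cus": 0.0, "cuc": 0.0, "crs": 0.0, "crc": 0.0}
x, y = satellite_plane_coords(0.0, eph)
print(round(x / 1e3, 1), round(y / 1e3, 1))  # position in km
```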
Fig. 1. The surveying and mapping data fitting (x and ideal x versus time/s)

Fig. 2. The error curve of surveying and mapping data in X axis (ex versus time/s)

Fig. 3. The error curve of surveying and mapping data in Y axis (ey versus time/s)
4 Conclusions

To sum up, the proportion of new surveying and mapping technology applied in surveying and mapping engineering is constantly increasing, and it is an important condition for the development of modern engineering projects. Mapping technology has played an important role in geological exploration, field work, water conservancy construction and other project activities. Taking a scientific orientation is also an important principle that should be adhered to in the future development of new surveying and mapping technology.
References 1. Chiasserini, D., Biscetti, L.: Performance evaluation of an automated elisa system for Alzheimer’s disease detection in clinical routine. J. Alzheimers Dis. Jad 54(1), 55–66 (2016) 2. Sarbu, I., Sebarchievici, C.: Performance evaluation of radiator and radiant floor heating systems for an office room connected to a ground-coupled heat pump. Energies 9(4), 228– 236 (2016) 3. D’Amore, L., Mele, V.: Mathematical approach to the performance evaluation of matrix multiply algorithm. Parallel Process. Appl. Math. 76(18), 25–34 (2016) 4. Shuli, Z.: Research on applications and technology of Environmental protection based on internet of things. Chin. J. Environ. Manag. 524(527), 371–375 (2012) 5. Farsi, A., Achir, N.: Wlan planning: separate and joint optimization of both access point placement and channel assignment. Ann. Telecommun. 70(5–6), 1–12 (2018) 6. Cao, J.H., Wang, L.L., Luo, H.X.: Research on key techniques for monitoring system of agricultural products transportation environment based on internet of things. Adv. Mater. Res. 588–589, 1086–1090 (2012) 7. Saabith, A.L.S., Sundararajan, E.: Parallel implementation of Apriori algorithms on the Hadoop-Mapreduce platform - an evaluation of literature. J. Theor. Appl. Inf. Technol. 32(6), 154–163 (2017) 8. Tang, C., Yang, N.: A monitoring and control system of agricultural environmental data based on the internet of things. J. Comput. Theor. Nanosci. 13(7), 4694–4698 (2016) 9. Zhang, K.S., Zhang, X.W., Zhou, Y.: Design of agricultural greenhouse environment monitoring system based on internet of things technology. Adv. Mater. Res. 791–793, 1651– 1655 (2013) 10. Cho, Y., Burm, S., Choi, N.: Analysis of human papillomavirus using datamining - Apriori, decision tree, and support vector machine and its application field. J. Theor. Appl. Inf. Technol. 32(8), 55–67 (2016)
A Fast Filtering Method of Invalid Information in XML File

Xijun Lin1(&), Shang Gao2, Zheheng Liang1, Liangliang Tang1, Yanwei Shang1, Zhipeng Feng1, and Gongfeng Zhu3

1 Electric Power Information Technology Co., Ltd., Guangzhou, China
[email protected]
2 Information Center of Guangdong Power Grid Co. Ltd., Guangzhou, China
3 Yunnan Yundian Tongfang Technology Co., Ltd., Kunming, China
Abstract. In practical application scenarios, XML files are analyzed and understood structurally according to nested element tags, element attributes, element contents, etc. However, users do not know the content published in XML in advance, so it is difficult to obtain the required content quickly and completely. For large-scale XML files in particular, the analysis and iteration time is long, and the time lost due to analysis errors is also long. In this paper, a fast filtering method for invalid information in XML files is studied. Firstly, the method establishes an index for the XML file to be processed, then queries the knowledge base for the case with the highest similarity to that index and matches it against the index. Next, the preprocessing procedure for the XML file is selected from the XML preprocessing process library according to the matching result. Finally, the preliminary processing results are transmitted to the receiver for further processing. Practical results show that this method can effectively reduce the length of the XML file to be processed, achieve rapid processing on the terminal, and greatly improve efficiency.

Keywords: XML file structure · Fast filtering method · Invalid information · Network
1 Introduction

In Web technology, people need to use an information carrier to transmit information in order to share public resources [1]. At present, the main information carrier is hypertext [2]. However, hypertext can only rely on the browser and cannot be used by other applications, and HTML has many limitations. As a new standard for sharing data on the Internet, XML is an extensible markup language. It is compatible with HTML and has an unlimited application scope. It is a special expression of semi-structured data. XML has quickly occupied the commercial market with its high flexibility, extensibility and standardized format, and most enterprises have adopted XML for data processing. Compared with traditional HTML, the flexibility and extensibility of XML are very good [3]. It has been widely used and has become a new standard for Internet data sharing.
© The Editor(s) (if applicable) and The Author(s), under exclusive license to Springer Nature Singapore Pte Ltd. 2021 M. Atiquzzaman et al. (Eds.): BDCPS 2020, AISC 1303, pp. 259–264, 2021. https://doi.org/10.1007/978-981-33-4572-0_38
2 Problems in XML Language

XML is a structural markup language, which can flexibly store one-to-many data relations [4]. It has the advantages of self-description, scalability, flexibility and platform neutrality, and it is widely used for data storage and exchange in software systems. XML has a unified standard syntax, and any XML document supported by any system or product has a unified format and syntax; in this way, XML is cross-platform and cross-system. In power-industry software applications, XML is the first choice for data exchange because XML uses elements and attributes to describe data [5–7]. In the process of data transmission, XML always retains data structure such as parent-child relationships. Several applications can share and parse the same XML file without the traditional string parsing or unwrapping process [8]. On the contrary, ordinary files do not describe each data segment, except in the header file, and do not retain the relational structure of the data. Because XML data can be accessed from a database in the same way as ordinary files, or by element names, using XML for data exchange makes applications more flexible. In practical application scenarios, XML files are analyzed and understood structurally according to the nested element tags, element attributes, element contents, etc. [9, 10]. However, users are not clear in advance about the content published in XML, so it is difficult to obtain the required content quickly and completely. Especially for large-scale XML files, the analysis time and iteration time are longer, and the time lost due to analysis errors is also longer. Therefore, how to solve the above problems has become the focus of research in this field.
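The way elements and attributes carry both the data and its parent/child structure can be seen with Python's standard xml.etree.ElementTree parser. The contract fragment below is made up for illustration and is not the power-industry schema discussed later.

```python
import xml.etree.ElementTree as ET

# A made-up XML fragment: element and attribute names are illustrative only.
doc = """
<contract id="C-001">
  <name>10kV Drop Out Fuse Purchase Contract</name>
  <items>
    <item code="A1" qty="2"/>
    <item code="B7" qty="5"/>
  </items>
</contract>
"""

root = ET.fromstring(doc)
print(root.tag, root.attrib["id"])  # contract C-001
print(root.findtext("name"))

# The parent/child structure is preserved, so any consumer can walk the
# tree without custom string parsing or unwrapping.
total = sum(int(item.attrib["qty"]) for item in root.iter("item"))
print(total)  # 7
```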
3 Fast Filtering Method of Invalid Information in XML File

The purpose of this paper is to provide a fast filtering method for invalid information in XML files, which can effectively solve the problem of long analysis time for large-scale XML files. The purpose of this study is achieved through the following steps.

Step 1. Index the XML files to be processed.
Step 2. Query the case with the highest similarity to the index in the knowledge base.
Step 3. Match the case with the highest similarity in the knowledge base against the index.
Step 4. According to the matching results, select the preprocessing process for the XML file from the XML preprocessing process library.
Step 5. Preliminarily process the XML file according to the selected preprocessing process.
Step 6. Transmit the preliminary processing results to the receiver for further processing.

If a process with high similarity cannot be matched in the knowledge base, the file is processed as an XML file of unknown structure, and the corresponding index and processing method are imported into the preprocessing process library.
As a preferred method, the steps for indexing XML files are as follows.

Step 1. The tags in the whole XML file are counted and normalized.
Step 2. The tag sequence and the corresponding normalized frequencies are used as the index information of the XML file and stored in the preprocessing process library.
Step 3. The preprocessing process library also contains the preprocessing methods for this kind of XML file.

The matching criterion between the XML file to be processed and a case in the knowledge base is that the total number of occurrences of the same keywords is greater than for any other case in the knowledge base. In the whole process of data transmission, all data are compressed before being transmitted. Compared with the existing technology, the beneficial effect of this study is to propose a multi-round interactive semantic analysis method suitable for power industry software. The traditional XML file processing mode is changed from sender-to-receiver to sender-to-preprocessor-to-receiver, which effectively reduces the length of the XML file to be processed and achieves rapid processing on the terminal; and because the file data in the transmission process is compressed, the requirements on throughput and bandwidth are effectively reduced. The network structure designed in the whole method is terminal-server-central server. Terminal refers to the end user, usually including PCs. Server refers to other servers that exchange data with the central server. Data exchange between terminals, between servers, between terminals and the central server, and between servers and the central server all follow the sender-to-preprocessor-to-receiver scenario.
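The indexing steps above (count every tag in the file, then normalize the counts) might be sketched as follows; the regular-expression tag extraction and the choice of normalization base are assumptions of this sketch, not details given in the paper.

```python
import re
from collections import Counter

def build_index(xml_text, norm=None):
    """Count every element tag in the file and normalize the counts.
    Normalization divides by the count of a reference tag (or by the
    maximum count when none is given); the paper's exact normalization
    constant is an assumption of this sketch."""
    # Match opening tags only: "<" followed by a tag name (skips "</...>").
    tags = re.findall(r"<\s*([A-Za-z_][\w.-]*)", xml_text)
    counts = Counter(tags)
    base = norm if norm is not None else max(counts.values())
    return {tag: n / base for tag, n in counts.items()}

xml_text = "<a><b/><b/><c/><b/></a>"
print(build_index(xml_text))
```

The resulting dictionary of normalized tag frequencies is the "file fingerprint" stored, together with the matching preprocessing method, in the preprocessing process library.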
The general structure of the XML file to be processed can be judged in advance through the preprocessing process, and then the data can be processed by referring to the past experience in the knowledge base and the pattern accumulated in history.
The XML file to be processed is as follows.
am
GDDW GDDW_WZXT 01908625.sw ExtMaterialsContract add 24c3d5be-d791-483f-a8ad-0dea5ef9248d
41 c8ff65d999a24d828a1c90230f3cb637 10kV Drop Out Fuse Purchase Contract 20083002 GDDW2320170401HY38677 0022017000889915
31 c8ff65d999a24d828a1c90230f3cb637 2017-12-13T10:16:40.458+08:00 2017-12-13T10:16:40.458+08:00 20136002
c8ff65d999a24d828a1c90230f3cb637 0:0:9:1 MOUMOU LIU 20033001 361189 20008010 0 20009001 20034001 20036004 361189 927D3E287014B058E0430A96F003B058
2018-02-19T00:00:00.000+08:00 927D3E287014B058E0430A96F003B058
87F58FAA516B801EE0430A961103801E 1D408B259C2C4F8380909CEB629D8B8D 90444E63098A10DCE0430A97B23310DC
2
5F1BF10436F5024CE0530A961A011D90 5.98290598 3 0022017000889915WZQD5F1BF10436F5024CE0530A961A011D90 QY0309 5F1BF10436F5024CE0530A961A011D90 2017 20130001 20132003
4 Results

The index (file fingerprint) is computed for the XML file. Statistics show that the number of occurrences of the “” keyword is 240. The numbers of occurrences of PURCHASE_ITEM_ID, PURCHASEDATE, UNIQUE_PROJECT_CODE, PROJECT_NAME, TAXRATE, TAX and UNIT_PRICE_TAX are 228, 229, 206, 255, 200, 342 and 341, respectively. The above sequence is normalized by the reference count 240 to form fingerprint A: 0.95, 0.954166667, 0.858333333, 1.0625, 0.833333333, 1.425 and 1.420833333. The knowledge base is queried for the case with the highest similarity to the index, and the most similar case B is found. The standard of fingerprint matching is that the sum of the value deviations between A and B is smaller than for any other case in the knowledge base. According to the matching result, the preliminary processing mode is selected, and the preliminary processing result is transmitted to the receiver for further processing.
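The fingerprint construction and matching rule described here can be reproduced numerically. Fingerprint A below uses the keyword counts reported in the text, normalized by the reference count 240; the knowledge-base cases are invented for illustration.

```python
counts = [228, 229, 206, 255, 200, 342, 341]  # keyword counts from the text
A = [c / 240 for c in counts]                 # fingerprint A
print([round(v, 4) for v in A])               # 0.95, 0.9542, 0.8583, ...

# Invented knowledge-base cases for illustration.
knowledge_base = {
    "case_B": [0.94, 0.96, 0.86, 1.06, 0.84, 1.42, 1.42],
    "case_C": [0.50, 0.50, 0.50, 0.50, 0.50, 0.50, 0.50],
}

def deviation(a, b):
    """Sum of absolute value deviations between two fingerprints."""
    return sum(abs(x - y) for x, y in zip(a, b))

# Matching rule: the case with the smallest total deviation from A wins.
best = min(knowledge_base, key=lambda k: deviation(A, knowledge_base[k]))
print(best)  # case_B
```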
5 Conclusions This study can effectively reduce the length of XML files to be processed, and achieve the beneficial effect of rapid processing on the terminal; and because the file data in the transmission process is compressed, it can effectively reduce the requirements of throughput and bandwidth.
References

1. Goldberg, I.G., Allan, C., Burel, J.M.: The open microscopy environment (OME) data model and XML file: open tools for informatics and quantitative analysis in biological imaging. Genome Biol. 6(5), 47–69 (2015)
2. Bjork, A.: Wertheim: chimpanzee-plus example XML file. Oryx 47(47), 97–106 (2010)
3. Haibing, Y., Fuquan, B.: Implementation of the correspondence between relational databases and XML file. J. China Soc. Sci. Tech. Inf. 22(3), 325–328 (2003)
4. Surhone, L.M., Tennoe, M.T.: Office Open XML File Formats. Betascript Publishing (2010)
5. Farsi, A., Achir, N.: WLAN planning: separate and joint optimization of both access point placement and channel assignment. Ann. Telecommun. 70(5–6), 1–12 (2018)
6. Hua, C., Liang, Z.: Memorizing study of XML file in relational database. Heilongjiang Electr. Power 16(7), 54–58 (2008)
7. Saabith, A.L.S., Sundararajan, E.: Parallel implementation of Apriori algorithms on the Hadoop-MapReduce platform - an evaluation of literature. J. Theor. Appl. Inf. Technol. 32(6), 154–163 (2017)
8. Tang, C., Yang, N.: A monitoring and control system of agricultural environmental data based on the internet of things. J. Comput. Theor. Nanosci. 13(7), 4694–4698 (2016)
9. Kai, T.: Method of classification based on content and hierarchical structure for XML file. Comput. Eng. Appl. 43(3), 168–170 (2007)
10. Cho, Y., Burm, S., Choi, N.: Analysis of human papillomavirus using datamining - Apriori, decision tree, and support vector machine and its application field. J. Theor. Appl. Inf. Technol. 32(8), 55–67 (2016)
Evaluation System of Market Power Alert Level Based on SCP Algorithm Jinfeng Wang(&) and Shuangmei Guo Department of Economics and Management, Yunnan Technology and Business University, Kunming, Yunnan, China [email protected]
Abstract. In this paper, the alert level of the current market power alert system is evaluated and studied on the basis of the SCP algorithm. The article proposes to establish an evaluation index system for the whole market and for network market power in accordance with the structure-behavior-performance (SCP) analysis framework of industrial economics, and then uses a fuzzy hierarchical comprehensive evaluation method to evaluate and monitor in real time the alert level of the market, of each congested area and of each major network market power; the corresponding supervisory decision-making mechanism is formed according to the SCP alert level evaluation results, which can effectively improve the efficiency of supervision. Finally, taking actual network market data as an example, it is verified that the market power alert and regulatory decision-making mechanism based on SCP and comprehensive evaluation proposed in the article can effectively identify the market power level of each SCP link in the market and the strategic quotation behavior of markets at all levels. The experimental results show that, in the future, research on more sensitive indicators and on the identification of complex tacit collusion between network markets needs to be strengthened, so as to improve this system structure as soon as possible and ensure the effective competition and healthy development of China's network market.

Keywords: SCP algorithm · Market power · Alert level · Alert system
1 Introduction An important part of the market power evaluation and regulatory decision-making mechanism is to evaluate the market power alert level. Whether the evaluation method is scientific or not directly affects whether the evaluation result is accurate and whether the correct regulatory decision can be made [1]. Therefore, the establishment of a hierarchical index system and the scientific organization and evaluation of multi-index and multi-level targets appear to be particularly important [2]. This article combines comprehensive evaluation with analytic hierarchy process, uses methods to establish the index hierarchy and determines the value of each level factor, and then uses the comprehensive evaluation to obtain the evaluation results of each level and the overall goal [3]. Therefore, even if the market power evaluation results of the entire network © The Editor(s) (if applicable) and The Author(s), under exclusive license to Springer Nature Singapore Pte Ltd. 2021 M. Atiquzzaman et al. (Eds.): BDCPS 2020, AISC 1303, pp. 265–271, 2021. https://doi.org/10.1007/978-981-33-4572-0_39
are moderate, there may be serious market power conditions in some congested areas. Therefore, it is necessary to assess the entire network and each congested area hierarchically, establish a market alert map, and obtain the results based on the evaluation. The alert level differs between regions, and the healthy operation of each region and of the entire market can be clearly seen from the market map [4]. The application of the cyber-physical social system in the energy system can be called the cyber-physical social system in the energy field. Traditional social behavior modeling assumes that the social agent follows the principle of complete rationality and constructs a utility function that reflects the agent's subjective preferences, optimizing its decision-making behavior with utility maximization as the goal [5, 6]. The literature studies the optimal bidding strategy in the network market through utility function modeling, and obtains the Nash equilibrium solution that clears the network market. However, such processing often involves strong assumptions, which limits the engineering application of this type of modeling method [7]. The management and maintenance of the information network market power system operation and maintenance system mainly includes five aspects: system operation and maintenance, virus prevention, account checking, troubleshooting, and analysis and optimization [8]. Its role is concentrated in repairing vulnerabilities, information encryption, and management and maintenance. The purpose of system operation and maintenance is to realize the smooth operation of the entire industry system and ensure the reliability of industry information network security.
In the specific implementation process, system operation and maintenance can also be divided into software operation and maintenance and hardware operation and maintenance, and software security alert evaluation system operation and maintenance and technical operation and maintenance constitute the main content of software operation and maintenance [9, 10].
2 Algorithm Establishment and Optimization

2.1 SCP Evaluation Algorithm

Construct the market power evaluation factor domain and its subsets. According to the structure, the factor domain of the market power alert level evaluation is divided into subsets, and then, compared against the index system, it is refined into sub-indices or subcategories. Compute the maximum eigenvalue λmax of the judgment matrix; the corresponding eigenvector ω = [ω1, ω2, …, ωk]T is the weight vector of Ci. The weights are calculated by the geometric-mean method:

ω_i = (∏_{j=1..k} p_ij)^(1/k) / Σ_{i=1..k} (∏_{j=1..k} p_ij)^(1/k)   (1)
In order to ensure the reasonableness of the weights, the consistency of the judgment matrix is checked:

(λmax − k) / (cRI (k − 1)) < 0.1   (2)
In the above formula, c_RI is the mean consistency index, whose value varies with the order of the comparison matrix; for orders starting from 3, c_RI takes the values 0.59, 0.90, 1.22, and so on. After the consistency check is passed, the weight vector A_i and the comprehensive factors u_i (S, C, P) can be determined, giving:

$$A_i = [a_{i1}, a_{i2}, \ldots, a_{ik}], \quad \sum_{r=1}^{k} a_{ir} = 1 \qquad (3)$$

$$A = [a_1, a_2, a_3], \quad \sum_{r=1}^{3} a_r = 1 \qquad (4)$$
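The weight derivation of Eqs. (1)-(2) can be sketched as follows. This is an illustrative example only, not the paper's implementation: the 3x3 judgment matrix and the c_RI value for order 3 are assumed.

```python
import numpy as np

# Hypothetical pairwise-comparison (judgment) matrix P for k = 3 factors.
P = np.array([[1.0, 3.0, 5.0],
              [1/3, 1.0, 2.0],
              [1/5, 1/2, 1.0]])
k = P.shape[0]

# Eq. (1): weight of factor i is the k-th root of the row product,
# normalised over all rows.
row_geo_mean = np.prod(P, axis=1) ** (1.0 / k)
w = row_geo_mean / row_geo_mean.sum()

# Eq. (2): consistency ratio (lambda_max - k) / (c_RI * (k - 1)) < 0.1.
lambda_max = max(np.linalg.eigvals(P).real)
c_RI = 0.59                       # mean consistency index for order 3
CR = (lambda_max - k) / (c_RI * (k - 1))
print(w, CR < 0.1)
```

A near-consistent judgment matrix such as this one passes the check with a wide margin.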
If a given network market sits at the same alert level over multiple periods, it must be assessed whether its market power has eased or intensified. The same evaluation target then needs to be compared longitudinally along the time axis, so the fuzzy vector must be reduced to a single value. Assigning points c_j to each alert level, the fuzzy evaluation vector can be single-valued as:

$$c = \frac{\sum_{j=1}^{m} b_j^{k}\, c_j}{\sum_{j=1}^{m} b_j^{k}} \qquad (5)$$
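Eq. (5) can be illustrated with a small numeric sketch; the alert-level points c_j, the fuzzy membership vector b, and the choice k = 2 are all assumed values.

```python
# Hypothetical sketch of Eq. (5): collapsing a fuzzy evaluation vector b
# into a single score c.
levels = [1.0, 2.0, 3.0, 4.0]   # assumed points c_j per alert level
b = [0.1, 0.4, 0.3, 0.2]        # fuzzy membership vector (assumed)
k = 2                           # k generally takes 1 or 2

c = sum(bj**k * cj for bj, cj in zip(b, levels)) / sum(bj**k for bj in b)
print(round(c, 3))              # a single score inside [1, 4]
```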
According to the final score, evaluation targets can be compared and ranked horizontally or vertically; k generally takes the value 1 or 2.

2.2 The Relationship Between Alert Level Request and Evaluation
The load information in the load control module lags: the load information available when a call arrives differs from the actual current load of the SCP. It is therefore necessary to reserve enough processing capacity so that other SCPs can use this SCP during that interval when they are overloaded. In the case of overload:

$$\sum_{i=1}^{N}\sum_{k=1}^{s} \lambda_{ik} - \sum_{j=1}^{M}\sum_{k=1}^{s} c_{ik} = \sum_{j=1}^{N}\sum_{k=1}^{s} \mu_{jk} \qquad (6)$$

Under light load conditions:

$$\sum_{i=1}^{N}\sum_{k=1}^{s} \lambda_{ik} = \sum_{j=1}^{M}\sum_{k=1}^{s} c_{ik} - \sum_{j=1}^{N}\sum_{k=1}^{s} \mu_{jk} \qquad (7)$$
J. Wang and S. Guo
The ideal allocation method is to route service call requests to the SCP with the lowest alert level. In a multi-service environment, however, the number of services alone cannot measure an SCP's alert level, so a cost function is used:

$$\mathrm{Cost}_j(R_{ik}) = f\big(L_{ij},\; C_{sk} - c_{ik}(1 - p_{ik})\big) \qquad (8)$$

$$U_j(R_{ik}) = \min_j \mathrm{Cost}_j(R_{ik}) \qquad (9)$$

$$\sum_{k} m_{jk}\, b_k \le a_j c_j \qquad (10)$$
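Eqs. (8)-(9) can be sketched as a minimal selection routine. The cost function f is not specified in the text, so an additive form is assumed here, along with all numeric inputs.

```python
# Hypothetical sketch of Eqs. (8)-(9): route a service request R_ik to the
# SCP j with the lowest cost. The additive f, L_ij, Cs_k, c_ik and p_ik
# are all assumed for illustration.
def cost(L_ij, Cs_k, c_ik, p_ik):
    # Eq. (8), assuming f simply adds its two arguments.
    return L_ij + Cs_k - c_ik * (1.0 - p_ik)

scps = {
    "SCP1": dict(L_ij=2.0, Cs_k=5.0, c_ik=3.0, p_ik=0.1),
    "SCP2": dict(L_ij=1.0, Cs_k=5.0, c_ik=4.0, p_ik=0.4),
}

# Eq. (9): U_j(R_ik) = min_j Cost_j(R_ik)
best = min(scps, key=lambda j: cost(**scps[j]))
print(best)
```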
3 Modeling Method

3.1 Definition of the Alert Level Evaluation Index
The risk index of the alert level assessment is defined as the probability that the node voltage (expressed in per-unit values) exceeds its allowable range during the assessed operation, multiplied by the weighted combination of the maximum consequence and the average consequence brought by the risk:

$$R_{v,t,i} = p_{v,t,i}\left(\alpha\, S_{v,t,i,\max} + \beta\, S_{v,t,i,av}\right) \qquad (11)$$

$$p_{v,t,i} = \frac{N\!\left(V_{t,i} < V_{\min} \;\|\; V_{t,i} > V_{\max}\right)}{N_{all}} \qquad (12)$$

$$S_{v,t,i,\max} = (V_{up,t,i,\max} - V_{\max}) + (V_{\min} - V_{down,t,i,\max}) \qquad (13)$$

$$S_{v,t,i,av} = (V_{up,t,i,av} - V_{\max}) + (V_{\min} - V_{down,t,i,av}) \qquad (14)$$
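A minimal numeric sketch of the risk index of Eqs. (11)-(14); the voltage band, the sampled voltages, and the weights α and β below are all assumed, not values from the paper.

```python
import numpy as np

# Hypothetical data: risk = overrun probability times a weighted mix of
# worst-case and average overrun consequence.
V_min, V_max = 0.95, 1.05                      # allowable band (p.u.)
V = np.array([0.93, 0.97, 1.06, 1.00, 1.08])   # sampled node voltages

# Eq. (12): probability of exceeding the allowable band.
p = np.mean((V < V_min) | (V > V_max))

# Eqs. (13)-(14): maximum and average overrun consequences.
over = np.clip(V - V_max, 0, None) + np.clip(V_min - V, 0, None)
S_max, S_av = over.max(), over[over > 0].mean()

alpha, beta = 0.7, 0.3                         # consequence weights (assumed)
R = p * (alpha * S_max + beta * S_av)          # Eq. (11)
print(p, R)
```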
3.2
Comprehensive Evaluation Indicators
The comprehensive risk indicator R_t is the weighted average of the voltage overrun risk indicator and the power flow overrun risk indicator:

$$R_t = \frac{1}{Node}\sum_{i=1}^{Node}\frac{\eta\, R_{v,t,i} + h\, R_{p,t,i}}{\eta + h} \qquad (15)$$

$$\eta + h = 1 \qquad (16)$$

In the above formulas, Node is the total number of nodes; R_t is the comprehensive market power alert level risk value of the system at time t; R_{v,t,i} and R_{p,t,i} are the voltage out-of-limit and power flow out-of-limit risk values respectively; and η and h are the corresponding weights, determined by the dispatcher from experience, with η + h = 1.
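Eqs. (15)-(16) reduce to a short computation; the per-node risk values and the weights η and h below are assumed for illustration.

```python
# Hypothetical sketch of Eqs. (15)-(16): comprehensive risk R_t as the
# weighted average of voltage- and flow-overrun risks over all nodes.
Rv = [0.02, 0.00, 0.05, 0.01]   # voltage overrun risk per node (assumed)
Rp = [0.01, 0.03, 0.00, 0.02]   # flow overrun risk per node (assumed)
eta, h = 0.6, 0.4               # dispatcher-chosen weights, eta + h = 1

nodes = len(Rv)
R_t = sum(eta * rv + h * rp for rv, rp in zip(Rv, Rp)) / nodes
print(round(R_t, 4))
```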
4 Data Algorithm Evaluation Results and Research

4.1 Evaluation Test of the SCP Multi-core Market Power Alert Evaluation System
Experimental testing gave the peak power consumption for different core counts at different frequencies when running spec200_rate and steam, as shown in Tables 1 and 2; all enabled cores run concurrently. Peak power consumption is an important reference for the heat dissipation design of the processor.

Table 1. Peak power consumption data of spec200_rate

Frequency  Core_1  Core_2  Core_4  Core_8  Core_16  Core_32
1.0 GHz    2.335   2.524   3.356   4.252   5.644     8.8119
1.2 GHz    2.611   2.432   3.621   5.425   6.655    13.8997
1.5 GHz    3.535   3.753   4.264   6.733   7.834    15.878
1.8 GHz    4.763   4.211   5.242   7.554   8.972    21.314
2.0 GHz    5.217   5.986   6.725   8.635   9.425    32.4524

Table 2. Peak power consumption data of steam

Frequency  Core_1  Core_2  Core_4  Core_8  Core_16  Core_32
1.0 GHz    2.315   2.534   3.356   4.262   5.621     9.7145
1.2 GHz    2.651   2.452   3.671   5.461   6.475    12.5312
1.5 GHz    3.335   3.723   4.464   6.757   7.534    14.8654
1.8 GHz    4.753   4.261   5.742   7.254   8.812    16.3747
2.0 GHz    5.617   5.976   6.705   8.432   9.4725   34.2564
As Tables 1 and 2 show, when evaluating the power consumption of the multi-core market power alert evaluation system, the power consumption is read through the VR chip, the chip that supplies voltage and current directly to the CPU. Reading current and voltage directly from the VR chip and then computing the processor's power consumption is therefore an effective and accurate method. Compared with software-based methods, it needs no complicated power consumption model and also minimizes the error of the power consumption value.
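The total-energy comparison described in this section can be sketched as follows; the power and runtime figures are assumed, not measured values from Tables 1 and 2.

```python
# Hypothetical sketch of the Sect. 4.1 evaluation: total energy is
# real-time power times total running time, normalised to a 3.5 GHz
# baseline run. All figures are assumed.
runs = {                            # name: (power, runtime in s)
    "3.5GHz-baseline": (32.0, 100.0),
    "2.0GHz":          (9.4, 260.0),
    "1.0GHz":          (5.6, 480.0),
}

base_energy = runs["3.5GHz-baseline"][0] * runs["3.5GHz-baseline"][1]
ratios = {name: (p * t) / base_energy for name, (p, t) in runs.items()}
for name, r in sorted(ratios.items()):
    print(f"{name}: {r:.3f}")       # ratio to the baseline energy
```

The plotted abscissa in Fig. 1 corresponds to such ratios against the 3.5 GHz reference.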
Fig. 1. Analysis of the results of the SCP algorithm multi-core market power alert level evaluation system
As shown in Fig. 1, the final test results of the multi-core market power alert evaluation give the running time of each stage; a change in processor frequency changes performance and therefore the running time of the evaluation program. When summarizing and comparing the parameters, total energy consumption is therefore used uniformly, i.e. the product of real-time power consumption and total running time, which also directly reflects the influence of DPA on the final result under different parameters. In the test, the program is first run at the 3.5 GHz frequency to obtain the total energy consumption, which is then used as the benchmark. The abscissa in Fig. 1 is the energy consumption obtained by each adjustment method divided by this 3.5 GHz reference value.

4.2 Develop Differentiated Strategies Based on the Alert Level
Insurance entities should recognize that the development of insurance products differs across city-level markets and formulate differentiated market development strategies accordingly: increase the resource input for key insurance categories and for insurance categories in key target markets, and develop differentiated channel strategies based on the characteristics of each city-level market, thereby improving the efficiency of resource use, seizing the most favorable market position, and quickly building a competitive advantage. The classification of city levels provides a basis for market segmentation, helping insurance entities formulate branch business development plans that are more refined, objective, reasonable, and targeted, set development goals by city level, and carry out refined, differentiated management appraisal. This avoids both holding back where rapid expansion is warranted and being overwhelmed by business targets where the conditions for rapid growth are absent. Segmenting the market power alert level by city level thus provides a fine-grained frame of reference for management.
5 Conclusion

The comprehensive evaluation method of market power alert level proposed in this article can not only evaluate the overall operating condition of the network market but also analyze market power conditions across the network, identify unreasonable quotation behavior, and discover market violations in time. It monitors market power in real time, tracks the changes of indicators at all levels, and uses different warning colors on the market monitoring map according to the results of the sub-regional alert evaluation, which reflects the impact of system congestion on the exercise of market power over the top products in the network. Market operators and regulatory agencies thereby gain a clear and intuitive view of market operation and can promptly take corresponding regulatory decisions based on the results of the SCP-based alert level assessment.
References

1. Bailliu, J., Han, X., Kruger, M., et al.: Can media and text analytics provide insights into labour market conditions in China? Int. J. Forecast. 35(3), 1118–1130 (2019)
2. Zhao, L., Huang, W., Yang, C., et al.: Hedge fund leverage with stochastic market conditions. Int. Rev. Econ. Finan. 57, 258–273 (2018)
3. Makarius, E.E., Stevens, C.E.: Drivers of collective human capital flow: the impact of reputation and labor market conditions. J. Manag. 45(3), 1145–1172 (2019)
4. Zhang, H., Zhu, X., Shi, J., et al.: Study on PWM rectifier without grid voltage sensor based on virtual flux delay compensation algorithm. IEEE Trans. Power Electron. 34(1), 849–862 (2019)
5. Ye, Z., Zhao, H., Zhang, K., et al.: Multi-view network representation learning algorithm research. Algorithms 12(3), 62 (2019)
6. Wang, Z.: Robot obstacle avoidance and navigation control algorithm research based on multi-sensor information fusion, pp. 351–354 (2018)
7. Shi, J., Wang, Y., Fan, S., et al.: An integrated environment and cost assessment method based on LCA and LCC for mechanical product manufacturing. Int. J. Life Cycle Assess. 24(1), 64–77 (2018)
8. Salim, M., Agami, R.T.: Sustainability of integrated energy systems: a performance-based resilience assessment methodology. Appl. Energy 228, 487–498 (2018)
9. Hotie, F., Gordijn, J.: Value-based process model design. Bus. Inf. Syst. Eng. 61(2), 163–180 (2019)
10. Pati, R.K., Nandakumar, M.K., Ghobadian, A., et al.: Business model design-performance relationship under external and internal contingencies: evidence from SMEs in an emerging economy. Long Range Plan. 51(5), 750–769 (2018)
Application Research of Biochemistry in Life Science Based on Artificial Intelligence

Shuna Ge1 and Yunrong Zhang2(✉)

1 School of Nursing, Yunnan Technology and Business University, Kunming, Yunnan, China
2 College of Food and Medicine and Big-Health, Yunnan Vocational and Technical College of Agricultural, Kunming, Yunnan, China
[email protected]
Abstract. Biochemistry is an important basic subject in the natural sciences: the study of life phenomena and the laws of life activities, whose conclusions are all based on experiments. Biochemistry helps cultivate and improve students' scientific literacy and plays an important role in constructing a professional biochemistry knowledge system and improving students' scientific thinking ability. Modern technology provides new research methods for the application of biochemistry in the life sciences. High-tech applications in an artificial intelligence environment can not only save manpower and resources but also become more intelligent and environmentally friendly, so as to achieve data monitoring. The purpose of this article is to study the application of biochemistry in the life sciences based on artificial intelligence. Starting from artificial intelligence, this article reforms traditional manual operation and conducts in-depth research on the application of biochemistry in the life sciences on that basis. Experimental results show that artificial intelligence can greatly advance the work process and reduce the experimental failure rate.

Keywords: Artificial intelligence · Life sciences · Data mining algorithms
1 Introduction

At present, the teaching of biochemistry experiment courses faces the following problems. Because of its complex and profound theoretical knowledge, the subject is difficult for students to understand, and they easily lose interest in learning [1]. Therefore, before teaching important knowledge points, teachers can combine specific cases to let students master the knowledge points in advance, consult the literature with network tools, and preview in the form of questions. This is also a shortcoming of current biochemistry teaching [2]. Biochemistry is research at the molecular level: its research object is not "physical", the knowledge content is abstract, the metabolic reactions are intricate, and the professional knowledge system is updated rapidly [3]. In teaching, teachers can connect esoteric and abstract content with daily life, using interesting or familiar life scenes to make the content concrete, visualized,
© The Editor(s) (if applicable) and The Author(s), under exclusive license to Springer Nature Singapore Pte Ltd. 2021 M. Atiquzzaman et al. (Eds.): BDCPS 2020, AISC 1303, pp. 272–278, 2021. https://doi.org/10.1007/978-981-33-4572-0_40
and dynamic, stimulating students' motivation to explore and cultivating their interest in learning [4]. In this way, students gain a fuller foundation for the follow-up applied research of biochemistry in the life sciences [5]. Against the background of artificial intelligence, China has introduced a number of policies to support the development of intelligent technology [6, 7]. At the same time, related manufacturing companies enjoy more preferential tax policies, allowing more of them to develop [8]. The number of intelligent manufacturing technology talents in China has increased year by year, and research results in intelligent manufacturing continue to emerge [9]. This is also where modern, AI-assisted biochemistry conflicts with traditional methods in the life sciences; our goal is to adopt a new method to improve the research approach [10].
2 Algorithm Optimization

2.1 Data Mining Algorithm
Big data objects have a complex spatial distribution.

2.1.1 Dynamic Neighborhood Radius
The density-adaptive reachable distance of the dynamic neighborhood radius is defined as:

$$R_A = R \cdot \frac{A_i}{A_{i+1}} \qquad (1)$$
In the above formula, R is the initial density reachable distance, and A_i and A_{i+1} are the density values of two successively determined cluster density attraction points.

2.1.2 Data Point Density
This refers to the overall density of the data space, i.e. the sum of the influence functions of all data points after modeling:

$$\mathrm{density}(x_i) = \sum_{j=1}^{n} \exp\!\left(-\frac{d(x_i, x_j)^2}{2\sigma^2}\right) \qquad (2)$$

In the above formula, the Gaussian function on the right represents the influence of each data point x_j on the point x_i; σ is the density parameter, which determines the gradient of the density function.

2.1.3 Density Reachable Distance
For any data object x in the cluster space and data distance R, the circular area centered on the object with the data distance as its radius corresponds to the density-reachable distance field of the data object.
$$R = \mathrm{coef}_R \cdot \mathrm{mean}(D) \qquad (3)$$
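Eqs. (2)-(3) can be sketched as follows; the sample points, σ, and coef_R below are assumed, not taken from the paper.

```python
import numpy as np

# Hypothetical sketch of Eqs. (2)-(3): Gaussian point density and the
# density-reachable distance.
X = np.array([[0.0, 0.0], [1.0, 0.0], [0.5, 0.5], [5.0, 5.0]])
sigma = 1.0

def density(i):
    # Eq. (2): sum of Gaussian influences of all points on x_i.
    d2 = np.sum((X - X[i]) ** 2, axis=1)
    return np.sum(np.exp(-d2 / (2 * sigma ** 2)))

coef_R = 0.5                          # 0 < coef_R < 1 (assumed)
pair_d = np.sqrt(((X[:, None] - X[None, :]) ** 2).sum(-1))
mean_D = pair_d[np.triu_indices(len(X), k=1)].mean()
R = coef_R * mean_D                   # Eq. (3)
print(density(0) > density(3), R > 0)
```

A point inside the dense cluster (index 0) receives a higher density value than the outlier (index 3).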
In the above formula, coef_R is the distance adjustment coefficient, with a value greater than 0 and less than 1; mean(D) is the average distance over all data objects; and D is the set of data objects.

2.2 Design of Improved Algorithm
This method uses data from a single subspace to perform low-rank matrix restoration and completion. LRR, by contrast, assumes the data are drawn from multiple low-dimensional subspaces, recovered through the lowest-rank representation Z of X:

$$\min_{Z,E}\; \mathrm{rank}(Z) + \lambda \lVert E\rVert_0 \quad \text{s.t.}\;\; X = XZ + E \qquad (4)$$

$$\max_{f}\; \frac{1}{n}\sum_{j=1}^{n} \delta\big(y_i, f(p(x_j))\big) \qquad (5)$$
Here λ > 0 balances the rank of Z against the sparsity of the error matrix E. This problem is NP-hard; usually the nuclear norm is used to estimate rank(Z), and the l1 or l2,1 norm to estimate ‖E‖₀. Notably, the low-rank component XZ of X can also be obtained by minimizing this formula. LRR can be regarded as a generalization of robust PCA, with Z a similarity matrix.

2.2.1 Combined Clustering
Let X = {x₁, x₂, …, x_n} be a set of n data points sampled from K clusters (C = {C₁, C₂, …, C_K}). The goal of combined clustering is to find the common partition most consistent with the input base partitions (BPs) and assign X back to the original K clusters. Generally, the number of times two data points appear in the same cluster is counted to obtain the final consensus:

$$S(x_p, x_q) = \sum_{i=1}^{r} \delta\big(\pi_i(x_p), \pi_i(x_q)\big) \qquad (6)$$

$$\forall i:\;\; E_i^{(t+1)} = \begin{cases} \dfrac{\lVert Q_i\rVert_2 - \lambda_2/\mu^{(t)}}{\lVert Q_i\rVert_2}\, Q_i, & \text{if } \lVert Q_i\rVert_2 > \lambda_2/\mu^{(t)} \\[4pt] 0, & \text{otherwise} \end{cases} \qquad (7)$$
In the formula, δ(a, b) = 1 if a = b and 0 otherwise, for x_p, x_q ∈ X. Obviously, S should be normalized as S = S/r. Spectral clustering is then performed on the co-association matrix S, and the trace minimization form is obtained from formula (6):
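The co-association matrix of Eq. (6) can be sketched as follows; the base partitions are assumed example data.

```python
import numpy as np

# Hypothetical sketch of Eq. (6): build the co-association matrix S from
# r base partitions (BPs) of n = 4 points, then normalise by r.
partitions = [
    [0, 0, 1, 1],
    [0, 0, 1, 1],
    [0, 1, 1, 1],
]
n, r = 4, len(partitions)

S = np.zeros((n, n))
for pi in partitions:
    lab = np.array(pi)
    # delta(pi(x_p), pi(x_q)) for every pair
    S += (lab[:, None] == lab[None, :]).astype(float)
S /= r                              # S = S / r
print(S[0, 1], S[0, 2])
```

Points 0 and 1 co-cluster in two of the three partitions, so S[0, 1] = 2/3, while points 0 and 2 never co-cluster.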
$$\min_{H}\; \mathrm{tr}(H^{T} L_S H) \quad \text{s.t.}\;\; H^{T}H = I \qquad (8)$$

This can also be written as:

$$\min_{H,Z,E}\; \mathrm{tr}(H^{T} L_Z H) + \lambda_1 \lVert J\rVert_0 + \lambda_2 \lVert E\rVert_{2,1} \quad \text{s.t.}\;\; H^{T}H = I,\; S = SZ + E,\; Z = J \qquad (9)$$

Its augmented Lagrangian form is:

$$L = \mathrm{tr}(H^{T} L_Z H) + \lambda_1 \lVert J\rVert_0 + \lambda_2 \lVert E\rVert_{2,1} + \langle Y_1, S - SZ - E\rangle + \langle Y_2, Z - J\rangle + \frac{\mu}{2}\left(\lVert S - SZ - E\rVert_F^2 + \lVert Z - J\rVert_F^2\right) \qquad (10)$$
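Minimizing an augmented Lagrangian of this kind typically alternates closed-form updates of the variables; the E-update is the column-wise shrinkage of Eq. (7), sketched below with assumed inputs.

```python
import numpy as np

# Hypothetical sketch of the column-wise l2,1 shrinkage in Eq. (7): each
# column Q_i is scaled by (||Q_i||_2 - tau)/||Q_i||_2, or zeroed.
def shrink_l21(Q, tau):
    E = np.zeros_like(Q)
    for i in range(Q.shape[1]):
        norm = np.linalg.norm(Q[:, i])
        if norm > tau:
            E[:, i] = (norm - tau) / norm * Q[:, i]
    return E

Q = np.array([[3.0, 0.1],
              [4.0, 0.1]])
E = shrink_l21(Q, tau=1.0)          # tau plays the role of lambda2/mu
print(E)
```

The first column (norm 5) is shrunk toward zero; the second (norm below tau) is zeroed entirely, which is what makes E column-sparse.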
3 Modeling Method

3.1 Spark Mechanism
Sort the data set J = <1, 2, …, n> column-wise, apply the Fourier transform to the arranged data, and couple the two data patterns obtained in the previous section to obtain the sequence U₁, U₂, …, U_n. The Spark mapping is:

$$FHT(s,t) = \int \beta(s)\tan\!\left[\frac{\pi}{M}\, s\!\left(x + \frac{1}{M}\right)\right] ds \cdot \int \beta(t)\,\omega_t\!\left[\frac{\pi}{M}\, t\!\left(y + \frac{1}{M}\right)\right] dt \qquad (11)$$
Using random sequence transformation, the sequence U₁, U₂, …, U_n obtained from formula (11) is randomly transformed into the sequence F₁, F₂, …, F_n. Using the RSO mechanism, a structured obfuscation operation is then applied to this sequence, combined with the sequence formed in the first step. After all nodes in the network have been processed, the RSO obfuscation sequence H₁, H₂, …, H_n is obtained. Finally, the structured obfuscation (RSO) operation is applied again to obtain the final sequence U.

3.2 Privacy Protection Technology Based on Artificial Intelligence

If the projection of the data set T_d() on Q_i is ∏_{Q_i}(T), then T_d() satisfies k-anonymity under ∏ if and only if the frequency of any data item in ∏_{Q_i}(T) is k. Under this operator, data with the same Q_i value form a Q_i group, or k-anonymous group. The public data set T' satisfies entropy l-diversity if and only if every anonymous group label Q in T' satisfies:
$$-\sum_{s \in S} p(Q, s)\log\big(p(Q, s)\big) > \log(l) \qquad (12)$$
The public data set T' satisfies recursive diversity if and only if every anonymous group label Q in T' satisfies:

$$r_1 < c\,(r_l + r_{l+1} + \cdots + r_m) \qquad (13)$$
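Eqs. (12)-(13) can be checked with a short sketch for a single anonymous group; the sensitive-attribute values and the parameters l and c below are assumed.

```python
import math
from collections import Counter

# Hypothetical sketch of Eqs. (12)-(13) for one anonymous group.
sa_values = ["flu", "flu", "cold", "hiv", "cold", "flu"]  # assumed SAs
counts = Counter(sa_values)
n = len(sa_values)

# Eq. (12): entropy l-diversity, -sum p*log(p) > log(l).
entropy = -sum((c / n) * math.log(c / n) for c in counts.values())
l = 2
entropy_ok = entropy > math.log(l)

# Eq. (13): recursive diversity with SA counts r_i sorted by
# descending frequency: r_1 < c * (r_l + ... + r_m).
r = sorted(counts.values(), reverse=True)
c_const = 2.0
recursive_ok = r[0] < c_const * sum(r[l - 1:])
print(entropy_ok, recursive_ok)
```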
Here c is a constant given by the publisher of the data set, and r_i is the count of the i-th sensitive attribute (SA) value, with values sorted by frequency of occurrence. Recursive diversity reduces the skew in the frequencies of different SA values within each anonymous group by adjusting c.

3.3 Algorithm Design for Life Sciences
If the input of system S satisfies a stochastic arrival curve and the service obtained from the system satisfies a stochastic service curve, then regardless of whether the arrival process is independent of the service process, the delay D(t) satisfies:

$$P\{D(t) > h(\alpha + x, \beta)\} \le (f \otimes g)(x) \qquad (14)$$

$$h(\alpha + x, \beta) = \sup_{s \ge 0}\Big\{\inf\{\tau \ge 0 : \alpha(s) + x \le \beta(s + \tau)\}\Big\} \qquad (15)$$

From the above formulas and the definitions, the end-to-end delay bound of an M2M service flow in the artificial intelligence environment is derived as:

$$P\{D(t) > x\} \le f\Big(\inf_{s \ge 0}\{\beta(s) - \alpha(s - x)\}\Big) \qquad (16)$$
4 Data Algorithm Evaluation Results and Research

4.1 Old-Fashioned Methods and the Application of Biochemistry Under Artificial Intelligence in Life Sciences
As Fig. 1 shows, artificial intelligence has extremely strong computing power and memory, so its error level is much lower than that of the old methods. This is the most gratifying aspect: in high-tech research, and especially in biochemistry and the life sciences, errors waste energy and money, and artificial intelligence solves this problem almost perfectly, which is good news for researchers. Table 1 shows that the forward and backward calculation algorithms perform almost identically, with strong automatic control but weak error analysis. The data mining algorithm has the best overall performance, with a faster analysis rate, automatic control, and top error analysis.
Fig. 1. Research institute's application effects on research before and after the era of artificial intelligence (traditional method vs. artificial intelligence, compared on research speed, correlation depth, degree of error, adaptation, future outlook, and overall satisfaction)

Table 1. Comparison of various data about various algorithms

Algorithm                              Analysis rate  Error analysis  Security protection  Automatic control  Self-optimization
Data mining algorithm                  96%            99%             92%                  98%                93%
Euclidean distance analysis algorithm  94%            99%             91%                  95%                92%
Forward calculation algorithm          97%            92%             90%                  97%                95%
Backward calculation algorithm         97%            93%             94%                  97%                95%
Although weak self-optimization may lead to some errors, these are a minority; the Euclidean distance analysis algorithm is mediocre in everything except error analysis and is not well suited to applying biochemistry in the life sciences. After comparison, we therefore consider data mining algorithms the most suitable for applied research of biochemistry in the life sciences.
5 Result

This article discusses and analyzes the pros and cons of old-fashioned methods versus artificial intelligence for the application of biochemistry in the life sciences, to demonstrate the prospects of artificial intelligence in this field. We should trust artificial intelligence: it can greatly reduce our mistakes, strengthen the connection between the two disciplines, and ultimately make experiments succeed while keeping researchers away from dangerous experimental processes. This article also analyzes the influence of several algorithms on the application of biochemistry in the life sciences, and finally selects the one most suitable for such research as the recommended algorithm. The coming era is the era of artificial intelligence, so we should believe that artificial intelligence will add new luster to our future, and biochemistry and the life sciences will flourish with it.
References

1. Hutson, M.: Artificial intelligence faces reproducibility crisis. Science 359(6377), 725–726 (2018)
2. Liu, R., Yang, B., Zio, E., et al.: Artificial intelligence for fault diagnosis of rotating machinery: a review. Mech. Syst. Signal Process. 108, 33–47 (2018)
3. Liu, J., Kong, X., Xia, F., et al.: Artificial intelligence in the 21st century. IEEE Access 6, 34403–34421 (2018)
4. Alessandro, G., Vicente, J.B.: Hydrogen sulfide biochemistry and interplay with other gaseous mediators in mammalian physiology. Oxidative Med. Cell. Longevity 2018, 1–31 (2018)
5. Singleton, C.L., Sauther, M.L., Cuozzo, F.P., et al.: Age-related changes in hematology and blood biochemistry values in endangered, wild ring-tailed lemurs (Lemur catta) at the Bezà Mahafaly special reserve, Madagascar. J. Zoo Wildl. Med. 49(1), 30 (2018)
6. Berner, N., Reutter, K.R., Wolf, D.H.: Protein quality control of the endoplasmic reticulum and ubiquitin–proteasome-triggered degradation of aberrant proteins: yeast pioneers the path. Ann. Rev. Biochem. 87(1), 751–782 (2018)
7. Hilgartner, S.: Life sciences. Méd./Sci. 31, 24–26 (2018)
8. Edward, C.K.H.: The 2018 SLAS technology ten: translating life sciences innovation. SLAS Technol. 23(1), 1–4 (2018)
9. Suganya, E., Vijayarani, S.: Analysis of road accidents in India using data mining classification algorithms. In: International Conference on Inventive Computing & Informatics, pp. 1122–1126 (2018)
10. Shousha, H.I., Awad, A.H., Omran, D.A., et al.: Data mining and machine learning algorithms using IL28B genotype and biochemical markers best predicted advanced liver fibrosis in chronic hepatitis C. Jpn J. Infect. Dis. 71(1), 51–57 (2018)
Non Time Domain Fault Detection Method for Distribution Network

Jianwei Cao1, Ming Tang1, Zhihua Huang2, Ying Liu1, Ying Wang2(✉), Tao Huang1, and Yanfang Zhou2

1 State Grid Huzhou Power Supply Company, Huzhou, China
2 China Jiliang University, Hangzhou, China
[email protected]
Abstract. The traditional over-current relay protection method cannot be applied directly to an active distribution network with renewable energy generation. This paper proposes a non time domain fault detection method based on the amplitude spectrum and marginal spectrum of the voltage signal. Faults are set in the distribution network, and the voltage data obtained from monitoring points are processed and analyzed to obtain test samples that verify the method's reliability.

Keywords: Fault detection · PSCAD modeling · Hilbert-Huang transform
1 Introduction

After renewable energy is connected to the distribution network in the form of distributed generation, the traditional single-source radial distribution network becomes a multi-terminal power network. Traditional protection detects short-circuit current, relying on over-current protection devices to detect main grid faults. However, when a distribution network with renewable energy generation fails, its internal protection differs fundamentally from the traditional mode: power flow becomes bidirectional, and the fault current of inverter-interfaced distributed generation is limited in magnitude. New fault detection methods therefore need to be developed [1–3]. Most fault transient signals in power systems are nonlinear and non-stationary. References [4–8] describe a new signal processing and analysis method, the Hilbert-Huang transform (HHT), specifically for nonlinear and non-stationary signals. Reference [4] discusses HHT-based fault line selection for small-current grounding systems, and reference [9] discusses the application of HHT in power systems. In this paper, a non time domain fault detection method based on the amplitude spectrum and marginal spectrum of the voltage signal is proposed. Faults are set at arbitrary points in the distribution network; the voltage data from the monitoring points are processed and analyzed to obtain test samples, and the reliability of the design method is verified.
© The Editor(s) (if applicable) and The Author(s), under exclusive license to Springer Nature Singapore Pte Ltd. 2021 M. Atiquzzaman et al. (Eds.): BDCPS 2020, AISC 1303, pp. 279–284, 2021. https://doi.org/10.1007/978-981-33-4572-0_41
2 Data Analysis Based on HHT

HHT is a new time-series signal analysis method put forward by Norden E. Huang. It decomposes the signal into a set of intrinsic mode functions by empirical mode decomposition and then analyzes the analytic signal by the Hilbert transform.

2.1 Intrinsic Mode Function (IMF)
In time-frequency analysis, not every instantaneous frequency of a signal has practical physical meaning. Only when the signal is approximately a single-component signal, so that each instant corresponds to a single frequency, does the instantaneous frequency have practical physical meaning. To obtain physically meaningful instantaneous frequencies, Norden E. Huang et al. introduced the intrinsic mode function, defined as a signal that satisfies the following conditions [10]:

(1) The number of local extrema and the number of zero crossings must be equal or differ by at most 1; that is, there must be a zero crossing after every extreme point.
(2) At any point of the signal, the mean value of the envelopes defined by the local maxima and the local minima must be zero; that is, the signal must be locally symmetric about the time axis.

The characteristic of the intrinsic mode function is that it admits a correct definition of instantaneous frequency. The Hilbert transform is then used to obtain the instantaneous frequency and amplitude of each IMF, from which the time-frequency-energy distribution of the signal, the Hilbert spectrum, is constructed. It has fine resolution in both the frequency and time domains, and its three-dimensional distribution reflects the important features of the signal comprehensively.

2.2 Empirical Mode Decomposition (EMD)
EMD breaks the signal into a series of intrinsic mode functions. Like a filter bank, a series of IMFs is obtained by repeated sifting, and the dominant frequency of the IMFs obtained in this way becomes lower and lower: the IMF obtained first has the highest dominant frequency, and the one obtained last the lowest. The specific process is as follows:
(1) Find all local minima and maxima of the signal s(t). Using cubic splines, fit the local maxima into the upper envelope e_max(t) and the local minima into the lower envelope e_min(t).
(2) Calculate the mean m₁(t) of the two envelopes:

$$m_1(t) = \frac{e_{\max}(t) + e_{\min}(t)}{2} \qquad (1)$$
(3) Subtract the envelope mean m₁(t) from the signal s(t) to obtain the first component h₁(t):

$$h_1(t) = s(t) - m_1(t) \qquad (2)$$
(4) Check whether the IMF conditions above are met. For this, Norden E. Huang proposed a Cauchy-type convergence criterion: the standard deviation SD between the signals h(t) of two successive sifting steps is determined as

$$SD = \sum_{k=1}^{n} \frac{\big|h_{1(k-1)}(t) - h_{1k}(t)\big|^2}{h_{1(k-1)}^2(t)} \qquad (3)$$
Formula (3) shows that the smaller SD is, the more pronounced the linearity and stability of the IMF are, and the more IMFs are ultimately obtained. It is found that a standard deviation between 0.2 and 0.3 ensures both the linearity and the stability of the IMF. If h₁(t) does not meet the above requirements, it is treated as a new signal and the steps are repeated from step (1).
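One sifting pass of steps (1)-(4) can be sketched as follows; the test signal is assumed, and a small epsilon is added to the denominator of Eq. (3) to avoid division by zero where h is close to zero.

```python
import numpy as np
from scipy.interpolate import CubicSpline
from scipy.signal import argrelextrema

# Hypothetical one-pass EMD sifting sketch for an assumed test signal.
t = np.linspace(0, 1, 1000)
s = np.sin(2 * np.pi * 50 * t) + 0.3 * np.sin(2 * np.pi * 5 * t)

def sift(x):
    imax = argrelextrema(x, np.greater)[0]
    imin = argrelextrema(x, np.less)[0]
    e_max = CubicSpline(t[imax], x[imax])(t)   # upper envelope
    e_min = CubicSpline(t[imin], x[imin])(t)   # lower envelope
    m1 = (e_max + e_min) / 2.0                 # Eq. (1)
    return x - m1                              # Eq. (2)

h1 = sift(s)
h2 = sift(h1)
# Eq. (3), with a small epsilon guarding against near-zero denominators.
sd = np.sum((h1 - h2) ** 2 / (h1 ** 2 + 1e-12))
print(sd)
```

Repeating the sift until SD falls into the 0.2-0.3 band yields the first IMF; the process is then restarted on the residue.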
3 Case Analysis

The parameters involved in the distribution network model are shown in Table 1.

Table 1. Parameters of the distribution network model

G          10 kV / 50 MVA
DG1        0.4 kV / 1.15 MVA
DG2        0.4 kV / 1.35 MVA
f          50 Hz
Z1/Z2/Z3   1 Ω
l1         500 m
l2         100 m
l3         200 m
l4         500 m
A distribution network model with distributed renewable energy generation is constructed as shown in Fig. 1.
Fig. 1. Distribution network modeling model
This paper studies four kinds of faults: single-phase-to-ground short circuit, two-phase-to-ground short circuit, phase-to-phase short circuit, and three-phase short circuit. In Fig. 2, the x-axis is time. Because of the large amount of data over the set 2 s running time, the time domain is rescaled in the MATLAB program: T = [0, 2] maps to T1 = [0.0025, 0.025], the fault application time t = 0.5 s becomes T1 = 0.0056 s, and the fault removal time t = 1.5 s becomes T1 = 0.019 s. The y-axis is the frequency (Hz), and the z-axis is the amplitude (V).
Fig. 2. Hilbert spectrum: a) non-fault; b) phase-A-to-ground fault
Non Time Domain Fault Detection Method for Distribution Network
In Fig. 2, the time spectrum shows how the amplitude changes with time and instantaneous frequency. In panel a), under the normal no-fault condition, the amplitude stabilizes shortly after operation starts, does not change with frequency, and maintains its peak at the standard frequency of 50 Hz; that is, the energy is concentrated at 50 Hz. In panel b), under a phase-A-to-ground short-circuit fault, the amplitude changes with frequency within the specified operation time: in the time domain T1 = [0.01, 0.02], i.e. while the fault is present, energy accumulates in the low-frequency band. Other faults behave similarly. This shows that under the normal no-fault condition, the peak value and energy accumulation appear only at the standard frequency of 50 Hz in the time spectrum; when any kind of short-circuit fault occurs, the amplitude changes with time and frequency, the energy at 50 Hz is missing in the fault time domain, and a small energy accumulation appears in the low-frequency band.
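The instantaneous amplitude and frequency plotted in the Hilbert spectrum can be obtained from a single IMF via its analytic signal. The sampling rate and the 50 Hz test tone below are illustrative assumptions, not the paper's measured fault data.

```python
import numpy as np
from scipy.signal import hilbert

fs = 2000.0                           # sampling frequency in Hz (assumed)
t = np.arange(0, 2.0, 1.0 / fs)       # the 2 s observation window
imf = np.cos(2 * np.pi * 50.0 * t)    # stand-in for one decomposed IMF

analytic = hilbert(imf)                        # z(t) = imf(t) + j*H[imf](t)
amplitude = np.abs(analytic)                   # instantaneous amplitude
phase = np.unwrap(np.angle(analytic))          # instantaneous phase
inst_freq = np.diff(phase) * fs / (2 * np.pi)  # instantaneous frequency, Hz
```

For a healthy 50 Hz signal the instantaneous frequency stays at 50 Hz and the amplitude stays constant, matching the energy concentration described above.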
4 Conclusions
From the above analysis, it may be concluded that when a short-circuit fault occurs in the grid and the measured voltage is processed by HHT, the results in the non-time domain differ markedly from those under the normal no-fault condition.
(1) The amplitude of the voltage waveform decreases obviously in the fault section, and the waveform jitters noticeably at the moments of fault application and removal in the intrinsic mode function image.
(2) The instantaneous amplitude drops sharply after the fault starts and returns to its normal value only after the fault ends.
(3) The instantaneous frequency fluctuates violently at the beginning and the end of the fault.
(4) The amplitude of the Hilbert spectrum changes with time and frequency. Energy is lost at 50 Hz, while an energy peak and accumulation appear in the low-frequency band.
(5) In the marginal spectrum, high spectral lines still appear at the power frequency of 50 Hz, but their weighted amplitudes are much smaller than in the normal case; in addition, non-zero spectral lines of considerable height appear in the low-frequency band, i.e. weighted amplitude is present there.
The experimental results show that the proposed non-time-domain fault detection method is effective for a distribution network with renewable energy generation connected to it.
Acknowledgements. This work was supported by 2019-HUZJTKJ-17.
Fault Transient Signal Analysis of UHV Transmission Line Based on Wavelet Transform and Prony Algorithm
Mingjiu Pan1, Chenlin Gu1, Zhifang Yu1, Jun Shan1, Bo Liu1, Hanqing Wu2, and Di Zheng2(&)
1 Economic and Technical Research Institute of Zhejiang Electric Power Corporation, Hangzhou, China
2 China Jiliang University, Hangzhou, China
[email protected]
Abstract. In this paper, the Mallat signal analysis of the wavelet transform is used to segment the signal, and the Prony algorithm is then used to fit each segment. The method has the advantages of clear segment distinction, strong anti-interference, fast calculation response and strong robustness. In the verification stage, the anti-interference robustness of the wavelet is tested by adding noise signals with different signal-to-noise ratios, and ideal segmentation results are obtained with wavelet analysis. The Prony algorithm is then applied to analyze and calculate each sub-segment according to the segmentation.
Keywords: Wavelet analysis · Prony algorithm · Fault analysis
1 Introduction
This paper presents a method of power system parameter fitting based on wavelet-analysis waveform segmentation and the Prony algorithm. The Prony method can directly obtain the amplitude, phase, damping ratio, frequency and other important information of the signal from the time domain, without solving the frequency-domain response, which greatly reduces the amount of calculation. Moreover, it is a nonlinear high-dimensional filtering method with high precision. Wavelet analysis can separate the low-frequency and high-frequency components of a signal and detect sudden changes, at which the signal is segmented [1–3]. The method adopted in this paper uses wavelet analysis to deconstruct the measured waveform after coarse data are eliminated and to process the original signal in sections, which not only retains the information contained in the original signal but also greatly reduces the loss of fitting accuracy caused by signal mutation. The Prony algorithm is then used to compute the numerical results of the signal, so as to realize accurate identification of the fault nature [4–7]. Compared with the original Prony algorithm alone, segmenting the signal by wavelet analysis greatly reduces the fitting error.
© The Editor(s) (if applicable) and The Author(s), under exclusive license to Springer Nature Singapore Pte Ltd. 2021
M. Atiquzzaman et al. (Eds.): BDCPS 2020, AISC 1303, pp. 285–291, 2021. https://doi.org/10.1007/978-981-33-4572-0_42
Reference [1] points out that the generalized Prony algorithm, the new single-input Prony algorithm and the new multi-input Prony algorithm are used to process the
fitting signal and the real signal. At the same time, the algorithm can significantly improve the accuracy of fitting and greatly shorten the time of iterative calculation. The achievable signal order and fitting results are better than those of the traditional algorithm, which makes it suitable for real-time fault identification. To sum up, many institutions at home and abroad have studied the characteristics of the Prony algorithm and proposed fault-signal analysis algorithms such as generalized Prony, neural network algorithms and clustering recognition, but problems such as a low degree of fitting remain because of the single feature quantity and decomposition method selected for identification [8–10].
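The segmentation idea described above can be sketched with one hand-rolled Haar decomposition level standing in for the Mallat algorithm: detail coefficients become large at signal mutations, and the strongest ones mark the segment boundaries. The wavelet choice, the single decomposition level and the synthetic step signal are illustrative assumptions.

```python
import numpy as np

def haar_detail(x):
    # One level of the Mallat decomposition with the Haar wavelet,
    # hand-rolled here to keep the sketch dependency-free
    x = x[: len(x) // 2 * 2]
    return (x[0::2] - x[1::2]) / np.sqrt(2.0)

def wavelet_breakpoints(x, n_breaks=1):
    d = haar_detail(x)
    idx = np.argsort(np.abs(d))[-n_breaks:]  # strongest detail coefficients
    return sorted(int(i) * 2 for i in idx)   # map back to sample positions

# A synthetic signal with an abrupt level change between samples 500 and 501
x = np.concatenate([np.sin(0.05 * np.arange(501)),
                    3.0 + np.sin(0.05 * np.arange(499))])
breaks = wavelet_breakpoints(x)
```

Each segment between consecutive breakpoints would then be passed to the Prony fit separately.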
2 The Principle of Wavelet Transform and Piecewise Prony Algorithm and Its Application in This Design
2.1 Prony Algorithm
Prony proposed that a linear combination of exponentials can be used to fit equally spaced discrete signal data. The expression is:

f(x) ≈ K_1·e^{a_1 x} + K_2·e^{a_2 x} + … + K_n·e^{a_n x}   (1)
Traditional Fourier analysis assumes that such data are composed of a series of sinusoidal signals, each with a certain attenuation, amplitude, phase and frequency. Accordingly, the signal can be written as

x(t) = Σ_{i=1}^{q} A_i·e^{a_i t}·cos(2π f_i t + θ_i)   (2)
where A_i is the signal amplitude; a_i is the attenuation coefficient, which is negative; θ_i is the phase angle of the signal, in rad; and f_i is the frequency of the signal, in Hz. From the above formula, the value of the nth sampling point of the signal can be expressed as

x̂(n) = Σ_{i=1}^{q} A_i·e^{a_i nΔt}·cos(2π f_i nΔt + θ_i)   (3)
where Δt is the sampling interval of the signal. Assuming the above formula consists of decaying free components and decaying forced cosine components, the forced components are expanded by Euler's formula:

cos(2π f_i t + θ_i) = (e^{j(2π f_i t + θ_i)} + e^{−j(2π f_i t + θ_i)}) / 2   (4)
Furthermore, letting b_m and z_m be defined as below, the signal function in discrete form is

x̂(n) = Σ_{m=1}^{p} b_m·z_m^n,  n = 0, 1, 2, …, N − 1   (5)

To make the expression more general, b_m and z_m are assumed to be complex:

b_m = A_m·e^{jθ_m},  z_m = e^{(a_m + j2π f_m)Δt}   (6)
where A_m is the signal amplitude; a_m is the attenuation coefficient, which is negative; θ_m is the phase angle of the signal, in rad; f_m is the frequency of the signal, in Hz; and Δt is the sampling interval. To make the approximate signal generated by the Prony algorithm closest to the original signal, the general scheme is the least-square-error principle, i.e.

min[ε] = Σ_{n=0}^{N−1} |x(n) − x̂(n)|²   (7)
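A minimal numerical sketch of this least-squares Prony chain (linear prediction for the modes z_m, then least squares for the weights b_m of formula (5), then parameter read-off) might look as follows. The model order p and the demo signal are illustrative assumptions, not the paper's fault data.

```python
import numpy as np

def prony(x, p, dt):
    """Fit x[n] = sum_m b_m * z_m**n and return (A, theta, a, f) per mode."""
    N = len(x)
    # Linear prediction step: x[n] = -sum_k c[k] * x[n-1-k]
    M = np.column_stack([x[p - 1 - k: N - 1 - k] for k in range(p)])
    c = np.linalg.lstsq(M, -x[p:], rcond=None)[0]
    z = np.roots(np.concatenate(([1.0], c)))       # modes z_m
    # Least squares for the complex weights b_m, formula (7)
    V = np.vander(z, N, increasing=True).T         # V[n, m] = z_m**n
    b = np.linalg.lstsq(V, x.astype(complex), rcond=None)[0]
    amp = np.abs(b)                     # A_m = |b_m|
    theta = np.angle(b)                 # theta_m
    a = np.log(np.abs(z)) / dt          # damping a_m
    f = np.angle(z) / (2 * np.pi * dt)  # frequency f_m in Hz
    return amp, theta, a, f

# Demo on one damped 50 Hz cosine sampled at 1 kHz (assumed values)
dt = 0.001
n = np.arange(400)
x = 2.0 * np.exp(-5.0 * n * dt) * np.cos(2 * np.pi * 50.0 * n * dt)
amp, theta, a, f = prony(x, p=2, dt=dt)
```

A real cosine of amplitude 2 appears as two conjugate modes of weight 1 each, with frequencies ±50 Hz and damping −5 s⁻¹.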
According to the above criterion, a nonlinear least-squares system of equations can be solved; after repeated iterations, each coefficient is obtained (the detailed derivation is omitted). Finally, the main parameters are expressed, by repeated convolution iterations, as

A_i = |b_i|,  θ_i = arctan[ℑ(b_i)/ℜ(b_i)],  a_i = ln|z_i| / Δt,  f_i = arctan[ℑ(z_i)/ℜ(z_i)] / (2πΔt)

0.76%), and the marine shale is dominated by type III kerogen, which is in the low-mature to high-mature evolution stage [8]. The oils in the Qaidam Basin came from reductive salt fluvial mudstones
Application of Big Data Analysis in Choosing Optimal Cross-plots
of Paleogene and Neogene, which are in the mature evolution stage [9]. For the abundance of MDBF isomers in the above samples, the relative content of 4-MDBF is 33.6%–46.5%, with an average of 39.9%; the relative content of (2 + 3)-MDBF is 25.3%–41.1%, with an average of 34.7%; and the relative content of 1-MDBF is 20.4%–29.4%, with an average of 25.4%. These three peaks show an asymmetric "V" shape. The Devonian coal samples in the Junggar Basin are called horny coal because they contain abundant cuticle, which can account for more than 75% of the total maceral content. Under fluorescence microscopy the cuticle appears light brown; the coal is in the mature evolutionary stage and was deposited in a weakly reducing to weakly oxidizing sedimentary environment with marine-continental interaction [10]. The crude oils of the Ordos Basin were derived from freshwater lacustrine rocks of the Triassic Yanchang Formation; the source rocks were formed in a weakly oxidizing to reducing sedimentary environment, and the crude oil is in the mature to highly mature evolutionary stage [11]. The Paleogene freshwater lacustrine rocks in the Bohai Bay Basin are dominated by II/III kerogen, were developed in a partially reductive environment, and are early mature to mature [12]. The light oil samples of the Beibuwan Basin come from fluvial-deltaic source rocks of the Paleogene Liushagang Formation, and the crude oil is in the mature to highly mature evolutionary stage [13]. In the MDBF composition of the above geological samples, the relative content of 4-MDBF is 20.2%–38.1%, with an average of 31.4%; the relative content of (2 + 3)-MDBF is 39.9%–59.5%, with an average of 47.6%; and the relative content of 1-MDBF is 18.7%–27.4%, with an average of 21.1%. These three peaks show an inverted "V" shape.
4 The Application of Big Data Analysis in Optimizing Cross-plots
Hughes and colleagues first proposed using the Pr/Ph vs. DBT/P cross-plot to identify the sedimentary environment and lithologic characteristics of organic matter [4]. Here, the 55 samples analyzed by GC-MS were plotted in the Pr/Ph vs. DBT/P cross-plot. In this cross-plot (Fig. 1), lacustrine shale and marine shale cannot be well distinguished, so the Pr/Ph vs. DBT/P diagram has some limitations. On the basis of the Pr/Ph vs. DBT/P diagram, Radke and co-workers proposed the Pr/Ph vs. ADBT/ADBF template to indicate the sedimentary environment and lithologic characteristics of geological samples [1]. The results show that crude oil derived from carbonate rocks can be clearly distinguished; however, shales from lacustrine and marine depositional environments still cannot be separated (Fig. 2).
B. Meng et al.
Fig. 1. Diagram of DBT/P - Pr/Ph in geological samples
Fig. 2. Diagram of ADBT/ADBF - Pr/Ph in geological samples
The 55 samples were then plotted in the Pr/Ph vs. (1 + 4)-/(2 + 3)-MDBF cross-plot proposed by Yang and others [3]. The values of (1 + 4)-/(2 + 3)-MDBF for the Ordovician, Paleogene, Neogene and Cretaceous shales are 1.43–2.59, with an average of 1.97; Pr/Ph values are 0.47–2.01, with an average of 0.96. The Cretaceous marine shale and Ordovician marine crude oil fall in areas 1A and 1C respectively, consistent with the cross-plot, indicating that it can well distinguish the Ordovician crude oil and Cretaceous shale samples. The (1 + 4)-/(2 + 3)-MDBF values of the Devonian and Paleogene source rocks and the Triassic and Paleogene crude oil samples are 0.68–1.50, with an average of 1.13; Pr/Ph values are 0.64–4.26, with an average of 1.77. The Paleogene freshwater lacustrine crude oil falls in area 3; the Triassic freshwater lacustrine crude oil and the Paleogene lacustrine shale fall in area 2, consistent with the cross-plot. This shows that the cross-plot can
well distinguish Triassic crude oil, Paleogene source rock and crude oil samples. In addition, the Paleogene–Neogene salt fluvial crude oil is located in area 1C, and the Devonian coal samples of the Junggar Basin are distributed in area 2; salt-lake and coal samples were not studied by Yang et al. [3]. The Pr/Ph vs. (1 + 4)-/(2 + 3)-MDBF cross-plot can well distinguish the sedimentary environments of the organic matter in this study, except for the salt-lake and coal samples (Fig. 3).
Fig. 3. Diagram of (1 + 4)-/(2 + 3)-MDBF - Pr/Ph in geological samples
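The classification step behind this cross-plot can be sketched as a simple threshold rule on the MDBF ratio. The 1.4 cut-off below is an assumed midpoint between the reported marine-group average (1.97) and the lacustrine/terrestrial-group average (1.13); it is not the published field boundary of Yang et al. [3].

```python
def classify(mdbf_ratio, threshold=1.4):
    """Bin one sample by its (1+4)-/(2+3)-MDBF ratio.

    The 1.4 threshold is an assumed midpoint between the marine average
    (1.97) and the lacustrine/terrestrial average (1.13) reported above;
    it is not the published zone boundary.
    """
    return "marine" if mdbf_ratio > threshold else "lacustrine/terrestrial"

# Group averages reported in the text
labels = {name: classify(r) for name, r in
          {"marine shale group": 1.97,
           "lacustrine/terrestrial group": 1.13}.items()}
```

A full reproduction of the cross-plot would additionally use Pr/Ph on the second axis to separate the sub-areas (1A, 1C, 2, 3).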
5 Conclusion
Based on big data statistics, the geochemical features of 55 samples from different ages and depositional environments were analyzed by software. The results show that shales from lacustrine and marine depositional environments cannot be distinguished effectively in the Pr/Ph - DBT/P and Pr/Ph - ADBT/ADBF diagrams. By comparison, the Pr/Ph - (1 + 4)-/(2 + 3)-MDBF diagram can well distinguish samples from different depositional environments, so the Pr/Ph vs. (1 + 4)-/(2 + 3)-MDBF cross-plot is more suitable for reservoir geochemistry. Big data processing can effectively optimize geochemical information.
Acknowledgments. This work was supported by the National Natural Science Foundation of China (Project No. 41802179), the Sichuan Science and Technology Program (Project No. 2019YFH0037), and the Central Public-interest Scientific Institution Basal Research Fund of China (Project Nos. 1610012018008, 2060302-022-20-010).
References
1. Radke, M., Vriend, S.P., Ramanampisoa, L.R.: Alkyldibenzofurans in terrestrial rocks: influence of organic facies and maturation. Geochim. Cosmochim. Acta 64(2), 275–286 (2000)
2. Li, M.J., Ellis, G.: Qualitative and quantitative analysis of dibenzofuran, alkyldibenzofurans, and benzo[b]naphthofurans in crude oils and source rock extracts. Energy Fuels 29, 1421–1430 (2015)
3. Yang, L., et al.: Phenyldibenzofurans and methyldibenzofurans in source rocks and crude oils, and their implications for maturity and depositional environment. Energy Fuels 31(3), 2513–2523 (2017)
4. Hughes, W.B., Holba, A.G., Dzou, L.I.P.: The ratios of dibenzothiophene to phenanthrene and pristane to phytane as indicators of depositional environment and lithology of petroleum source rocks. Geochim. Cosmochim. Acta 59, 3581–3598 (1995)
5. Fenton, S., Grice, K., Twitchett, R.J., et al.: Changes in biomarker abundances and sulfur isotopes of pyrite across the Permian-Triassic (P/Tr) Schuchert Dal section (East Greenland). Earth Planet. Sci. Lett. 262, 230–239 (2007)
6. Chen, Z., Simoneit, B.R., Wang, T.G., et al.: Biomarker signatures of Sinian bitumens in the Moxi-Gaoshiti Bulge of Sichuan Basin, China: geological significance for paleo-oil reservoirs. Precambr. Res. 296, 1–19 (2017)
7. Song, D.F., Wang, T.G., Li, M.J.: Geochemistry and possible origin of the hydrocarbons from Wells Zhongshen1 and Zhongshen1C, Tazhong Uplift. Sci. China Earth Sci. 59(4), 840–850 (2016)
8. Xiao, H., et al.: Geochemical characteristics of Cretaceous Yogou Formation source rocks and oil-source correlation within a sequence stratigraphic framework in the Termit Basin, Niger. J. Petrol. Sci. Eng. 172, 360–372 (2019)
9. Pan, C.C., et al.: Confined pyrolysis of Tertiary lacustrine source rocks in the Western Qaidam Basin, Northwest China: implications for generative potential and oil maturity evaluation. Appl. Geochem. 25(2), 276–287 (2010)
10. Song, D., Simoneit, B.R., He, D.: Abundant tetracyclic terpenoids in a Middle Devonian foliated cuticular liptobiolite coal from northwestern China. Org. Geochem. 107, 9–20 (2017)
11. Li, D.L., et al.: Study on oil-source correlation by analyzing organic geochemistry characteristics: a case study of the Upper Triassic Yanchang Formation in the south of Ordos Basin, China. Chin. J. Geochem. 35(4), 1–13 (2016)
12. Li, M.J., et al.: Ternary diagram of fluorenes, dibenzothiophenes and dibenzofurans: indicating depositional environment of crude oil source rocks. Energy Explor. Exploit. 31(4), 569–588 (2013)
13. Li, M.J., Wang, T.G., Liu, J., et al.: Biomarker 17a(H)-diahopane: a geochemical tool to study the petroleum system of a Tertiary lacustrine basin, Northern South China Sea. Appl. Geochem. 24, 172–183 (2009)
Application of Modern Computer Information Technology in “Uyghur Language” Teaching Parezhati Maisuti(&) Northwest Minzu University, Lanzhou, Gansu Province, China [email protected]
Abstract. The rapid development of computer teaching technology can not only improve the efficiency of classroom teaching and enhance interaction between teachers and students, but also effectively activate the classroom atmosphere and increase students' interest in learning. As an important future direction of computer technology, its application prospects in classroom teaching are very broad: knowledge can be presented to students more three-dimensionally, a more realistic classroom teaching atmosphere can be created, and visual and auditory stimulation can be combined, so that students' learning initiative and learning effects are comprehensively enhanced. This article analyzes the application of information technology in computer-assisted teaching, and then discusses computer-assisted teaching and the cultivation of students' innovative thinking ability.
Keywords: Computer · Information technology · Uyghur language · Language teaching
1 Introduction
With the development of the times and the reform of modern information technology, the Internet has become highly popular, and the application of modern information technology has penetrated all aspects of people's lives. Applying it scientifically and reasonably in modern teaching, reforming traditional teaching methods, and bringing information technology into actual classroom teaching can benefit both teachers and students in the teaching process. This paper therefore focuses on the practical application of information technology in the classroom: in language classroom teaching it can present the teaching content to students more realistically and vividly, let students feel immersed, and strengthen visual and auditory stimulation so that students become more involved in teaching activities [1].
© The Editor(s) (if applicable) and The Author(s), under exclusive license to Springer Nature Singapore Pte Ltd. 2021
M. Atiquzzaman et al. (Eds.): BDCPS 2020, AISC 1303, pp. 395–401, 2021. https://doi.org/10.1007/978-981-33-4572-0_58

2 The Auxiliary Role of Computers in Teaching
Computer-assisted teaching is a teaching method that assists students and teachers to complete discussion, lecture, review and other teaching links in the education process through computers. Its essence is to disseminate teaching information to students with
more abundant means and more efficiently [2], as shown in Fig. 1. In terms of its extension, computer-assisted teaching includes five modes. One is its application in student practice: students complete the various tasks assigned by the teacher through a computer system, which, through intelligent analysis, provides each student with the best problem-solving method and reasonable supplementary exercises for his or her weaknesses, and on this basis provides the teacher with a learning plan tailored to each student's characteristics for future teaching [3].
Fig. 1. Computer-aided teaching model
2.1 Computer-Aided Guidance to Students
In the traditional way of education, due to time and energy constraints, the guidance each teacher can give students is always limited. Because students differ in their understanding of the knowledge taught, each student has different problems with the content, and the teacher cannot address them all [4]. Through computer-assisted teaching, however, the computer itself can give different guidance according to each student's weaknesses, which largely compensates for the limits that time and energy place on the teacher's guidance.
2.2 The Application of Computer-Assisted Communication Between Students
Teaching is not only a rigid process of answering and solving problems, but also a process of forming a healthy personality in students through the education of emotion, will and reason. Answering exercises seems to be a rigid training process, but it also includes students asking questions, overcoming difficulties and solving problems, during which students will inevitably experience various emotional fears [3]. Computers can communicate well with students through artificial intelligence and, in this process, help students form a good attitude towards life and correct study habits.
2.3 Computer-Assisted Application in Teacher Teaching
In computer-assisted teaching, teachers can use computer-designed games to let students actively absorb, through enjoyable learning, the knowledge points the textbook intends to convey. At the same time, various complex formulas in physics, chemistry and mathematics can be simulated by computer in the form of images and sounds, so that students can understand the inner meaning of these formulas more vividly and directly [5].
2.4 Computer-Assisted Teaching in the Problem-Solving Process
Traditional teaching methods often only pay attention to getting the answer: as long as one solution method yields the correct answer, it is considered good, even if the method is tedious. Through computer-assisted teaching, however, the pros and cons of multiple solution methods can be shown, so that students can choose the method most suitable for them during the learning process [5].
3 Advantages of Computer-Assisted Teaching
The essence of computer-assisted teaching is to disseminate teaching information to students efficiently with richer means. From this point of view, computer-assisted teaching has the following three advantages (shown in Fig. 2):
Fig. 2. The advantages of computer-assisted teaching
3.1 The Richness of Educational Information Dissemination
Traditional education's information dissemination is often based on blackboard writing and exercises [4]. This is mainly a way of disseminating abstract information from concept to concept: teachers focus on explaining concepts, and knowledge is conveyed through enumeration and the interpretation of terms. With this teaching method, it is often difficult for students to understand what some difficult
concepts are saying, because this method of knowledge dissemination expects students to grasp the most essential connotation of knowledge at the very beginning of teaching, whereas human cognition is a process that slowly deepens from the surface [6]. Contrary to traditional teaching methods, computer-assisted teaching uses pictures, animations, videos and sounds to let students understand knowledge from its appearance, so that they can obtain graphical representations of the knowledge from the beginning [6].
3.2 The Efficiency of Educational Information Dissemination
Due to the limitations of teachers' time, energy and educational resources, traditional education methods often fail to enable every student to effectively understand the meaning of the knowledge itself [6]. Through computer-assisted teaching, however, the system can automatically identify the difficulties and perplexities of each student in the education process and give the most effective solutions to them. In this way, the efficiency of education and teaching is greatly improved [7].
3.3 Computer-Assisted Teaching Can Stimulate Students' Enthusiasm for Learning
The traditional teaching method is boring because it goes from concept to concept. Computer-assisted teaching includes pictures, sounds, animations and other means of information dissemination, and provides different learning plans for each student. Such teaching methods are undoubtedly more targeted and more interesting, and help students engage better with the classroom [7].
4 The Application of Computer Information Technology in "Uyghur Language" Teaching
4.1 Use Information Technology to Improve Students' Interest in Learning
In the past, a spoon-feeding teaching mode was usually used: the teacher explained the content to the students, and the students learned passively, mastering the content by rote memorization. If a student's memory is good, the learning content can be remembered temporarily; if it is poor, the content cannot be fully grasped, which can even cause a vicious cycle and seriously affect the student's interest in learning [8]. Therefore, in order to increase students' interest in learning and enable them to participate more actively in Uyghur language teaching, teachers must give full play to the role of information technology, increase the interest of Uyghur language teaching, stimulate students' enthusiasm for learning, and enliven the classroom [8]. For example, using PowerPoint to play videos and pictures can lead students into a specific historical environment to perceive and think about the causes and effects of historical events. This also highlights the cultivation of basic abilities in the historical
core literacy of Uyghur language. By watching the pictures, students let what they see touch their hearts and reflect on the history of the Uyghur language. All this brings the classroom alive and greatly mobilizes students' reading, speaking and perception abilities.
4.2 To Cultivate Students' Uyghur Language Ability by Using Information Technology
At this stage, students' memory is limited and it is difficult for them to grasp language knowledge firmly. Therefore, to improve students' learning results, teachers must give full play to the role of information technology, strengthen the training of students' language expression, and consolidate what students have learned. Teachers can rationally use information technology and scientifically employ an intelligent language teaching-aid system [9]. After explaining the knowledge points, the teacher should guide students to use the intelligent language teaching-aid system for language-expression training; in the training process this helps students better understand the connotations of the language, consolidate what they have learned, and deepen their understanding and memory of the knowledge [9].
4.3 Use Electronic Whiteboards to Interact with Students
If PowerPoint presents the teacher's knowledge to students, then the whiteboard is a bridge for interaction between teachers and students. With the development of the social economy and the national implementation of the "double-high, double-popularization" policy, local governments at all levels have invested in education on an unprecedented scale, and interactive whiteboards have entered our classrooms [10]. The interactive whiteboard has, and surpasses, all the functions of the traditional blackboard, including free writing, drawing and emphasizing key points, and can use or edit colorful electronic courseware. In teaching, it enriches the colorful multimedia resources and can highlight the student's dominant position.
4.4 Reasonable Use of Network Resources
We collectively refer to all kinds of content related to education and teaching, such as teaching materials, multimedia courseware, theme learning resource packs, e-books and special websites, as digital resources. Students and teachers can access these resources simply through mobile devices such as laptops, tablets and smartphones. These modern mobile devices make teaching convenient, but because students have weak self-control, they are easily attracted to games and become addicted, causing unpredictable losses [11]. Teachers are therefore required to guide students well, and parents should strengthen the management of these devices, so as to prevent their negative effects on students [10].
4.5 Using Information Technology to Build a Good Teacher-Student Relationship
In Uyghur language education and teaching, the teacher-student relationship plays an important role [11]. A harmonious teacher-student relationship can stimulate students' learning enthusiasm and improve their learning efficiency. Therefore, in Uyghur language teaching, teachers should also give full play to the role of information technology to make the classroom teaching atmosphere more active and relaxed. In addition, it can strengthen the communication and interaction between teachers and students and enhance the feelings between them: students can actively communicate and interact with the teacher and listen more attentively in class, which better promotes the smooth running of language teaching activities [12]. Teachers can also grasp in time the problems and difficulties students face in study and life, understand students' living and learning conditions, and troubleshoot for them promptly, thereby establishing a good teacher-student relationship, enhancing the bond between teachers and students, and cultivating students' enthusiasm for learning [12].
5 Conclusion All in all, in Uyghur language education and teaching, information technology plays an important role. Teachers should combine the guidelines for student education and the actual conditions of students, and rationally use information technology in students’ Uyghur language teaching. Through pictures, music and animation, students’ curiosity is aroused, their thirst for knowledge is improved, students’ logical thinking is cultivated, students’ Uyghur language expression ability is improved, and the overall development of students’ Uyghur language ability is promoted. Acknowledgments. Project: Central University project; phased achievements of “sociocultural studies of Uyghur kinship appellations” (Project number: 31920150136). Talent introduction program: The phased achievement of “Meaning and Usage of the Uyghur Restricted Tone Auxiliary words - La”, (Project number: Z16006).
References 1. Yi, L.: The concept and practice of computer-assisted teaching. China Continuing Med. Educ. 4, 84–86 (2017). (in Chinese) 2. Baojiao, L.: A new idea of computer-assisted teaching in school classrooms. Educ. Teach. Forum 2(03), 55–57 (2019). (in Chinese) 3. Ning, Z., Yingfang, F.: Application research of digital virtual technology in clinical teaching of hepatobiliary surgery. China Continuing Med. Educ. 10(33), 16–19 (2018). (in Chinese) 4. Hao, C., Meiping, L.: Talking about the application of VR in the teaching of “Architectural Construction”. Jiangxi Build. Mater. 6(13), 54–55 (2018). (in Chinese) 5. Zhao Xi, H., Wenhua, Z.X.: Development and practical application of a certain radar virtual maintenance training system. Educ. Teach. Forum 8(49), 76–78 (2018). (in Chinese)
6. Hang, Z., Canheng, Z.: Research on teaching experimental methods of civil aviation security inspection technology courses based on VR technology. Educ. Teach. Forum 12(49), 273– 274 (2018). (in Chinese) 7. Shuhui, L.: Computer-assisted teaching in colleges and universities. Sci. Technol. Econ. Tribune 22, 181–184 (2017). (in Chinese) 8. Hongwen, W.: The use of information technology in language classroom teaching. Educ. Teach. Forum 3(26), 39 (2013). (in Chinese) 9. Zhigang, C.: The use of information technology in classroom teaching. Mod. Educ. Sci. 12(10), 117–119 (2017). (in Chinese) 10. Jiehong, Y.: The application of information technology in political teaching in junior high schools. Educ. Inf. Technol. 11(7), 149–151 (2017). (in Chinese) 11. Xiaoxia, C.: The application of modern information technology in Uyghur language teaching. Exam Wkly 6(10), 110–114 (2018). (in Chinese) 12. Wenjuan, L.: Talking about the application of modern information technology in Uyghur language teaching. Exam Wkly 9(29), 135–137 (2018). (in Chinese)
Research and Design of Mobile Assistant Class Management System Juan He(&) and Fuchen Leng Wuhan Technology and Business University, Wuhan, Hubei Province, China [email protected]
Abstract. With the development of the mobile Internet, the personal mobile phone has become the biggest source of interference in traditional classroom teaching. However, prohibiting mobile phones in class not only intensifies the contradiction between students and managers, but also runs against the general trend of mobile Internet development. A better approach is therefore to use students' personal mobile phones to assist classroom management and improve the teaching effect. In order to reduce the interference of mobile phones with normal classroom order, this paper designs a "mobile phone aided class management system". Teachers enter the system through the teacher-end website: before class they can upload courseware and design in-class questions; in class they can call the roll, ask questions and interact; after class they can input homework results. Students enter the system through the student-end app, where they can carry out classroom activities such as signing in, answering questions, and interacting in class. Their mobile phones are monitored by the system: if a student leaves the system for more than 6 min, points are automatically deducted from their peacetime results and a deduction notice is sent to the student-end app. After class, students can download courseware for preview or review, and check their peacetime performance trends whenever and wherever they like. The system aims to enable students to use mobile phones to follow the teacher's line of teaching, reduce mind-wandering time, enhance communication and interaction, and create a new classroom model in the mobile Internet environment.

Keywords: Teaching assistance · Class management · Mobile phone monitoring
1 Introduction

With the development of the mobile Internet, "phubbers" appear frequently in the classroom, and the smartphone has become the biggest source of interference in classroom teaching [1]. In view of this phenomenon, many colleges and universities have issued management regulations, such as strictly prohibiting mobile phones during class, or setting up mobile phone bags on the platform: each student entering the classroom must mute the phone and put it into the bag until class is over. However, this kind of "blocking" method not only tends to intensify the contradiction between students and teachers and has a negative impact on class attendance, but also inconveniences students who are used to looking up knowledge points or taking notes on their phones in class [2]. "Timely guidance, timely reminder" is the best solution, but teaching time in class is limited, especially with the large numbers of university students; some large elective courses reach two or three hundred people. If the teacher reminded every student not to use a mobile phone, it would inevitably affect the progress of teaching; but if students are not stopped at all, it is obviously unfair. Therefore, how to standardize students' use of mobile phones in class, reminding them in time without interrupting normal teaching, has become an urgent problem in classroom teaching at this stage.

© The Editor(s) (if applicable) and The Author(s), under exclusive license to Springer Nature Singapore Pte Ltd. 2021
M. Atiquzzaman et al. (Eds.): BDCPS 2020, AISC 1303, pp. 402–408, 2021. https://doi.org/10.1007/978-981-33-4572-0_59
2 System Requirements Analysis

2.1 Needs Survey
In this study, literature analysis, interviews and a questionnaire were used to investigate and analyze the use of mobile phones in class [3]. By consulting a large body of literature on college students' in-class mobile phone use in China, the form and content of the questionnaire were preliminarily determined. A small group of teachers and students was then convened for a discussion, and the content of the questionnaire was revised and supplemented [4]. The main subjects were freshman to senior students of Wuhan Technology and Business University, and 938 valid questionnaires were collected.

2.2 Findings
This survey focuses on the current situation of students' mobile phone use in class and students' expectations of it. Regarding the current situation, the survey found that only 5.12% of students have never used mobile phones in class. Among the students who have, 15.03% used them for less than 5 min, 38.06% for 5–10 min and 29.85% for 10–30 min, while the remaining 17.06% said they could not stop at all. In-class mobile phone use is thus very common among students. The purposes of in-class phone use were: checking information 27.11%, chatting 19.91%, reading news 11.64%, other 11.14%, playing games 10.66%, reading novels 9.82% and shopping 9.71%; 61.74% of the uses were unrelated to classroom learning (chatting, reading news, playing games, reading novels and shopping). Regarding students' expectations of in-class phone use, the survey found that 60.98% of students think teachers should manage this phenomenon, 14.61% think teachers should not, and 24.41% think it doesn't matter. Among the students who think it should be managed, 10.74% think phone use should be completely prohibited, 68.04% think students should be reminded in a timely manner, and 21.22% think they should be guided patiently. Most students thus feel that they lack self-discipline in using mobile phones in class and need teachers' supervision and guidance. Finally, students' expectations of the positive impact of mobile phones on the classroom were analyzed statistically (a multiple-choice question). Among the 10 options listed, the leading ones were reading the courseware at any time (74.71%), checking in (64.76%), being reminded to pay attention to the classroom content (57.1%), answering classroom questions with mobile phones (54.32%), and keeping track of daily performance (50.25%). Each of these five items exceeded 50%; they are the pain points students feel in the current classroom teaching method, which they hope to solve through mobile-phone-assisted classroom teaching.
3 Design of a Mobile Assistant Classroom Management System

The five pain points mentioned above can be divided into three categories: checking in and reminders belong to class management; consulting courseware and answering questions belong to teaching assistance; and the peacetime performance dynamics belong to achievement recording. Therefore, this paper designs the mobile assistant class management system as follows.

3.1 System Architecture Design
As the system serves two different user groups, students and teachers, it has two different entrances: the student end and the teacher end [5]. The student end aims to manage the use of mobile phones in class, which requires access to many phone permissions; it is therefore designed as a C/S architecture of mobile phone application (hereinafter referred to as the app) + server, which is convenient for monitoring students' phone use. The teacher end adopts the B/S architecture of browser + server, which lets teachers enter the management system whether in class, in the office or at home, and work anytime and anywhere. The overall architecture of the mobile assistant class management system is shown in Fig. 1.
Fig. 1. Architecture diagram of the mobile assistant classroom management system
3.2 Functional Module Design
It can be seen from the above figure that the system mainly includes three functional modules: class management, teaching assistance and peacetime performance recording. The contents and implementation of these three modules are described in detail below.

3.2.1 Class Management Module
The classroom management module has two main function points: mobile phone sign-in and mobile phone use monitoring. Before class, the teacher enters the system through the classroom multimedia equipment, fills in the course name, selects the class, sets the classroom location information, and then clicks "start sign-in"; students sign in by logging into the app on their personal phones. Teachers can see the sign-in results synchronously in the browser. If some students cannot open the app and log in for some reason, the teacher can sign them in directly on the teacher end. To ensure the authenticity of the sign-in, the app provides a dual verification mechanism. First, location verification: when a student logs in, the app accesses the phone's location information and checks that it is within 30 m of the classroom location set by the teacher (30 m is the default range, which the teacher can change according to classroom size). Second, scene verification: the system randomly generates a QR code, which students scan and identify in the app. The reason the sign-in adopts dual verification is that some college students' learning consciousness is not high and they sometimes find substitutes to attend class for them: a student could have another student forward the QR code and sign in remotely, so location verification is needed as well to supervise and check the sign-in behavior.
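As a rough illustration, the dual verification described above can be sketched in a few lines of Python. This is illustrative only: the function names, coordinate handling and token comparison are assumptions, not the system's actual implementation.

```python
import math

def haversine_m(lat1, lon1, lat2, lon2):
    """Great-circle distance between two (lat, lon) points, in metres."""
    r = 6_371_000  # mean Earth radius in metres
    p1, p2 = math.radians(lat1), math.radians(lat2)
    dp = math.radians(lat2 - lat1)
    dl = math.radians(lon2 - lon1)
    a = math.sin(dp / 2) ** 2 + math.cos(p1) * math.cos(p2) * math.sin(dl / 2) ** 2
    return 2 * r * math.asin(math.sqrt(a))

def verify_sign_in(student_pos, classroom_pos, scanned_token, session_token,
                   radius_m=30):
    """Dual verification: the student must be within `radius_m` of the
    classroom AND have scanned the current session's QR-code token."""
    near_enough = haversine_m(*student_pos, *classroom_pos) <= radius_m
    token_ok = scanned_token == session_token
    return near_enough and token_ok
```

A sign-in attempt from across campus, or with a forwarded QR code from an expired session, fails either check and is rejected.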
When a student exits the app or moves it to the background, the system generates the following message in the phone's notification center: "You have left the system for 5 min and will be deducted 1 point after 1 min." If the student still has not re-entered the system after 6 min away, the teacher-end system pops up a message: "So-and-so has left the system for 6 min; 1 point is deducted from the peacetime performance." The reason for allowing a 6-min buffer is that students may need to look up relevant learning materials or deal with urgent personal matters; if they still have not returned to the system after 6 min, that is most likely caused by personal subjective factors. Sometimes, to save power, students may turn off the phone screen; when the screen is turned off while the app is running, the app does not move to the background and is not counted as having "left". If some students cannot open the app because their phones are out of power, the teacher can change their status to online in the system. If the teacher needs students to use their phones in a class activity, he can also manually pause the app monitoring. The process of the mobile phone monitoring function is shown in Fig. 2.

3.2.2 Teaching Assistant Module
The teaching assistant module has three main function points: courseware synchronization, teacher-student interaction, and classroom questioning.
Fig. 2. Flowchart of the mobile phone monitoring function
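The monitoring timing rules (warn at 5 min, deduct at 6 min) might be sketched as a simple server-side decision function. This is a hypothetical illustration; the action names, flags and polling model are assumptions rather than the system's real code.

```python
WARN_AT = 5 * 60    # notify the student after 5 min in the background
DEDUCT_AT = 6 * 60  # deduct a point and alert the teacher after 6 min

def monitoring_actions(seconds_in_background, warned=False, deducted=False):
    """Return the actions the server should take for one student, given how
    long the app has been in the background and which steps already ran."""
    actions = []
    if seconds_in_background >= WARN_AT and not warned:
        actions.append("warn_student")   # push to the phone's notification centre
    if seconds_in_background >= DEDUCT_AT and not deducted:
        actions.append("deduct_point")   # -1 peacetime point, pop-up on teacher end
    return actions
```

The `warned`/`deducted` flags ensure each action fires at most once per departure, matching the one-warning-then-one-deduction flow of Fig. 2.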
Teachers can log into the system before class and upload the course courseware, then open and present it directly in the system during class, without carrying a USB drive. After students scan the QR code and log into the app, the app obtains the courseware download link from the QR code and displays the courseware on the phone. Students can keep abreast of the teaching progress at any time, and their learning is no longer affected by being unable to see the courseware clearly from the back row. If they need the courseware, they no longer have to queue up to copy it from the teacher or request it by email; they can download it directly in the app. In the classroom, it is inevitable that students talk to each other; teachers cannot judge whether the conversation is related to the course, and it is inconvenient to stop it directly. The student-end app therefore provides a "group chat" function: when the teacher sets up a sign-in class, all students and teachers on the sign-in list form a temporary communication group. Every student is free to speak and discuss the course without interrupting the teacher. If someone needs to consult the teacher, he can send a message to the teacher in the group; when an @ message from a student is received, the teacher end pops up a prompt, and the teacher can choose to check the message immediately or later depending on the progress of the lecture. Teachers can prepare course-related questions before class (mainly objective questions, since subjective questions cannot be scored on the spot). In class, the teacher selects the "classroom questions" function on the teacher end, checks the questions and clicks "start answer"; the student-end app immediately displays the questions, and students can answer directly in the app.
When the teacher clicks "end answer", the system automatically marks the answers and gives an analysis of the results (such as the number of students who answered, the correct rate, and the distribution of wrong answers). This function can largely eliminate the differences in classroom performance caused by students' personalities and teachers' subjective preferences, so that every student must seriously think about and answer the teacher's questions, whether outgoing or not. It also lets teachers fully and objectively understand the overall learning situation of students, making it easier to grasp the teaching difficulties. The courseware, interaction and questioning effects of the teaching assistant module are shown in Fig. 3.
Fig. 3. Effect of teaching assistant module
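The automatic marking and result analysis for one objective question might be sketched as follows. The data shapes (a single answer key, a list of submitted choices with `None` for no answer) are assumptions made for illustration, not the system's actual data model.

```python
from collections import Counter

def question_stats(answer_key, submissions):
    """Summarise one objective question: how many students answered,
    the correct rate, and the distribution of wrong answers."""
    answered = [a for a in submissions if a is not None]
    correct = sum(1 for a in answered if a == answer_key)
    wrong = Counter(a for a in answered if a != answer_key)
    rate = correct / len(answered) if answered else 0.0
    return {"answered": len(answered),
            "correct_rate": rate,
            "wrong_distribution": dict(wrong)}
```

For example, with key "B" and submissions `["B", "A", "B", "C", None]`, four students answered, half were correct, and the wrong answers split between "A" and "C".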
3.2.3 Performance Record Module
In this system, the peacetime performance is composed of three parts: attendance rate (30%), class performance (20%) and homework score (50%). Teachers can add new items to the system or adjust the proportion of each part. From the dotted arrows in Fig. 1, it can be seen that the input of the peacetime performance score mainly comes from mobile phone sign-in, mobile phone use monitoring, class exercises and homework scores; the first three items are completed automatically by the system without the direct participation of teachers. The weights above are the system defaults, and teachers can enter the personal center to adjust them according to the course situation. Students can check their peacetime performance score in the app at any time.
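The weighted peacetime score described above (attendance 30%, class performance 20%, homework 50%, with teacher-adjustable weights) can be computed as in the following sketch; the function and key names are illustrative assumptions.

```python
DEFAULT_WEIGHTS = {"attendance": 0.30, "class_performance": 0.20, "homework": 0.50}

def peacetime_score(scores, weights=DEFAULT_WEIGHTS):
    """Weighted peacetime score on a 0-100 scale. `scores` maps each item
    to its 0-100 sub-score; `weights` defaults to the paper's 30/20/50
    split but may be adjusted by the teacher, as long as it sums to 1."""
    assert abs(sum(weights.values()) - 1.0) < 1e-9, "weights must sum to 1"
    return sum(scores[k] * w for k, w in weights.items())
```

A student with 90 attendance, 80 class performance and 70 homework would score 90·0.3 + 80·0.2 + 70·0.5 = 78.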
4 Conclusion As the development of mobile Internet, more and more educators realize the great influence of mobile phone on education and teaching [6], and many related outstanding achievements have emerged—such as “Micro teaching assistant” [7] developed by Professor Tian of Central China normal University in 2016 and “rain classroom” [8] launched by Tsinghua University in 2017 which has the functions of class sign-in, class
408
J. He and F. Leng
test and class discussion. After deep experience, it is found that this kind of product has comprehensive function and strong interactivity, but it also needs students to have considerable consciousness. This kind of products are all based on WeChat platform in student-end [9], and have weak control over personal mobile phones. Therefore, for some private colleges or higher vocational colleges, the effect of teaching assistance is not obvious [10]. The innovation of mobile assistant class management system in this paper has two aspects: First, stronger mobile phone monitoring function. Second, to promote management with peacetime results. This system publish the peacetime score, so that students can get the changes of their usual grades after each class or even in the class, it can greatly improve the enthusiasm of the class, improve the teaching effect. Acknowledgments. Project fund: Guiding project of scientific research plan of Hubei Provincial Department of education in 2018 (Project number: B2018303).
References 1. Zhuli, W.: Teaching application of smart phones: confusion and thinking. Open Educ. Res. 23(1), 10–11 (2017). (in Chinese) 2. Shenglan, X.: Research on the application of smart phones into classroom teaching. Audio Vis. Educ. Res. 1, 86–91 (2018). (in Chinese) 3. Tingting, Z.: Research on college students’ mobile phone use under the background of media dependence theory. Media 22, 77–79 (2018). (in Chinese) 4. Xiuping, Z.: Investigation and analysis of the current situation of college students’ mobile phone use in classroom. China Adult Educ. 1, 66–69 (2017). (in Chinese) 5. Ying, S., Sean, S.: Design of distributed virtual reality practice teaching system based on personal intelligent terminal. Lab. Res. Explor. 39(2), 227–232 (2020). (in Chinese) 6. Qinglong, Z., Jingjing, Y.: Study on the transition of classroom teaching management supported by data intelligence. Audiov. Educ. Res. 41(7), 100–107 (2020). (in Chinese) 7. Tan Zhihu, H., Diqing, T.Y., et al.: Reconstruction of interactive teaching in large classes by micro teaching assistants. Mod. Educ. Technol. 28(1), 107–113 (2018). (in Chinese) 8. Shuaiguo, W.: Rain classroom: a smart teaching tool in the context of mobile Internet and big data. Mod. Educ. Technol. 5, 26–32 (2017). (in Chinese) 9. Gui, L., Xinghong, F., Ping, T.: Development of classroom teaching management system based on WeChat public platform. Mod. Educ. Technol. 6, 108–114 (2017). (in Chinese) 10. Jing, X.: The practice and exploration of mobile phone in the “Internet plus” classroom teaching of higher vocational. Fujian Tea 42(4), 243 (2020). (in Chinese)
A Brief Analysis of Wearable Electronic Medical Devices Yuxin Du(&) Dalian Maritime University, Dalian, Liaoning Province, China [email protected]
Abstract. With the rapid development of science, technology and human society in recent years, the pace of life is constantly accelerating, and the demand for fast, convenient, high-performance products keeps increasing. Wearable electronic medical devices are becoming more and more popular; a representative example is the continuous glucose monitoring system (CGMS), which measures the glucose content in the interstitial fluid (ISF). This paper reviews the background and market demand of wearable electronic devices and the applications of CGMS and glucose sensors.

Keywords: Wearable electronic devices · Glucose sensor · CGMS · ISF
1 Introduction

Wearable medical devices are portable medical or health electronic devices that can be worn directly on the body to perceive, record, analyze, regulate, intervene in or even treat diseases, or to maintain health, with the support of software. The real significance of wearable medical devices lies in being attached to or implanted in the human body to identify the body's posture and status. They can track our physical health, exercise and metabolic conditions and digitize our dynamic and static life and physical characteristics; their real value lies in a digital life of the body. Wearable medical equipment can monitor in real time blood glucose, blood pressure, heart rate, blood oxygen, temperature, respiratory rate and other health indicators, and can provide basic treatment. 2012 was called "the first year of smart wearable devices" due to the appearance of Google Glass, and since then more and more products have been launched, such as the Apple Watch, Galaxy Gear and Nike smart sneakers. They mostly take the form of portable accessories with some computing functions that can connect to mobile phones and various terminals.
2 Main Purpose of Wearable Electronic Devices

2.1 Health Monitoring
In today's society, people pay more and more attention to their own health and that of their families. At the same time, the aging of the population and the shortage of medical resources also make people pay more attention to health care. Wearable electronics provide a convenient way to monitor health. At present, common wearable electronic devices on the market include the smart bracelet and the smart watch, which are generally characterized by easy operation, portability and attractive appearance. Their main functions are step counting, blood glucose monitoring and sleep quality monitoring.

© The Editor(s) (if applicable) and The Author(s), under exclusive license to Springer Nature Singapore Pte Ltd. 2021
M. Atiquzzaman et al. (Eds.): BDCPS 2020, AISC 1303, pp. 409–415, 2021. https://doi.org/10.1007/978-981-33-4572-0_60

2.2 Disease Treatment
Wearable electronic devices also have a certain adjuvant-therapy function, although this functionality is still at the research and evaluation stage. For example, patients at high risk of heart disease can carry wearable external defibrillators that can defibrillate them in an emergency. In clinical practice, many wearable exoskeletons are also used for rehabilitation assistance, such as hand exoskeletons, upper-extremity exoskeletons and lower-extremity exoskeleton robots. These devices can effectively help patients carry out rehabilitation training and improve their training results.

2.3 Remote Rehabilitation
Wearable electronic devices can not only guide patients through rehabilitation training at home, but can also help patients who find hospital visits stressful to keep track of their own physical conditions and to control their condition in time when problems arise.
3 Glucose Biosensor

3.1 Classic Glucose Enzyme Electrode
In 1962, Clark and Lyons published the first paper on enzyme electrodes [1]. In 1967, Updike and Hicks developed the first glucose oxidase (GOD) electrode based on a platinum electrode, used for the quantitative detection of glucose content in serum [2]; this marks the birth of the first generation of biosensors. In order to avoid oxygen interference, Clark improved his device in 1970 so that the production of H2O2 could be measured more accurately, and the glucose content could thus be determined indirectly [3]. In 1972, Guilbault covered the platinum electrode with a selective membrane mixed with glucose oxidase; after 10 months of storage, the stable current of the corresponding electrode had decreased by only 0.1%, yielding a glucose biosensor with high stability and measurement accuracy [4]. The technology was used by Yellow Springs Instrument (YSI) of the US, which developed the world's first commercially available glucose sensor in 1975. At present, glucose enzyme electrode testers are available in a variety of models and are widely used in many countries. The first glucose biosensor in China was successfully developed in 1986, and the main commercial products are SBA glucose biosensors [5]. In this sensor, immobilized glucose oxidase and a hydrogen peroxide electrode constitute the enzyme-electrode glucose biosensor analyzer. Each injection was 25 µL, and the glucose content in the sample could be measured 20 s after injection, with a good linear relationship within the range of 10–1000 mg/L; the coefficient of variation over 20 consecutive measurements was less than 2%.

3.2 Mediated Glucose Enzyme Electrode
Cass et al. [6] immobilized GOD on a graphite electrode and used water-insoluble ferrocene monocarboxylic acid as the mediator. In the electrode's reaction with glucose, ferrocene acts as GOD's oxidant and rapidly transfers electrons between the enzyme reaction and the electrode process [7]. Mixing a ferrocene-modified siloxane polymer with glucose oxidase produces a more stable sensor with a higher electron-transfer rate. Commonly used electron mediators include ferrocene and its derivatives, organic dyes, quinones and their derivatives, tetrathiafulvalene (TTF), tetracyanoquinodimethane (TCNQ), fullerenes and conductive organic salts [8]. However, these low-molecular-weight organic mediators easily diffuse from the enzyme layer into the substrate solution, resulting in poor sensor stability and limiting the application range of the biosensor. One way to solve this problem is to use polymeric mediators, such as redox polymers based on transition metal ions and organic redox agents [9]. Paul et al. [10] attached ferrocene and 1,1′-dimethylferrocene to the main chain of an insoluble siloxane polymer through chemical bonds and used them as the electron mediator of a glucose enzyme sensor. This polymeric mediator can effectively reduce the working potential of the sensor and eliminate interference from other electroactive substances. The outstanding advantage of this sensor is its fast response: the time for the current to reach the steady-state value is less than 10 s. Zhu Bonshang et al. [11] chose a β-cyclodextrin polymer (β-CDP), formed by the condensation of β-cyclodextrin and glutaraldehyde, as the host and a dimethylferrocene as the guest to form a stable host-guest inclusion complex serving as the mediator. The stability and service life of the sensor were significantly improved by crosslinking bovine serum albumin (BSA) with glutaraldehyde, together with glucose oxidase and the host-guest complex, onto the electrode.
A hetero-polypyrrole GOD film prepared by platinization of a platinum wire electrode also shows good stability as a glucose enzyme electrode [12].

3.3 Direct Glucose Enzyme Electrode
In 1992, Koopal et al. [13] used polypyrrole (PPy) microtubules to immobilize GOD. By template synthesis, pyrrole was polymerized in a track-etched membrane on a gold electrode, and GOD was then firmly adsorbed in the polypyrrole microtubules to form the GOD-PPy sensor. The track-etched membrane used is usually made of polyester or polycarbonate, and GOD maintains its biological activity in the microtubules. Because polypyrrole, polythiophene and other conjugated conductive polymers can form microtubules in the pores of the track-etched membrane, it is believed that this structure can connect the enzyme's redox active center with the electrode, so the resulting biosensor has good selectivity and high sensitivity. Koopal et al. [14] further improved this method by using uniform latex particles as a porous matrix material to which polypyrrole and the enzyme were attached for immobilization. The glucose biosensor thus made had a linear response range of 1–60 mmol/L for glucose measurement. Zhang Guolin et al. [15] prepared an immobilized glucose enzyme electrode using an ethyl cellulose and acetylene black conductive composite. The results showed that the conductive-composite glucose oxidase biosensor, with the paraffin removed by cyclohexane, had a granular structure favorable to the enzyme-catalyzed reaction. A Prussian blue (PB) membrane-modified platinum-disk glucose electrode can effectively eliminate the interference of ascorbic acid and uric acid [16]. Glucose enzyme electrodes modified with gold nanoparticles have been widely reported, mainly because the good electrical conductivity and biocompatibility of gold nanoparticles, together with their small-size effect, quantum-size effect and quantum tunneling effect, give the nanoparticles many peculiar physical and chemical properties, significantly reducing the distance between electron donor and acceptor and raising the electron-transfer rate at the electrode. Cai Xinxia et al. [17] covalently modified a thin-film gold electrode with an osmium redox polymer and horseradish peroxidase and crosslinked it with glutaraldehyde to fix glucose oxidase, producing a glucose sensor. At a potential of 0.1 V (vs. Ag/AgCl), the sensor's background current is less than 1 nA, the detection limit is 1 µM, and the sensitivity is 2900 nA·µmol−1·L within the range below 400 µmol/L (correlation coefficient R = 0.998). It realizes the determination of glucose at low concentration and lays a foundation for the development of noninvasive blood glucose detection sensors with high sensitivity, low detection limit and high stability.
4 CGMS CGMS is a continuous and dynamic blood glucose monitoring system, which can timely detect abnormal blood glucose phenomena that are difficult to be detected by traditional monitoring methods, especially in monitoring asymptomatic hypoglycemia at night and postpartum hyperglycemia. CGMS recorder by cable, blood sugar, glucose sensor, information extraction and analysis software, the use of blood sugar and liquid glucose concentration has a good correlation between groups developed the principle, the sensor can be produced with interstitial fluid glucose in the chemical and electrical signals, the sensor is mainly composed of micro electrolysis of glucose oxidase and semi-permeable membrane, Portugal, more embedded in the abdomen or upper arm subcutaneous tissue. By detecting the chemical reaction of glucose in subcutaneous tissue fluid, the sensor can reflect the patient’s blood glucose level, and manually measure the peripheral fasting blood glucose once every morning for correction. Recorder worn (72 h) every 10 s to receive 1 telecom, daily can store up to 288 blood sugar, glucose monitoring range of 2.2*22.2 tendency/L, after the information extraction and downloaded to a computer for data analysis, can be obtained within 3 days of blood glucose in patients with continuous dynamic change information in clinical practice has very important practical value. In the monitoring of CGMS, glucose concentration among tissues is detected mainly by subcutaneous glucose probe, which is significantly different from traditional plasma and capillary probes. The probe head of the dynamic blood glucose monitoring system should be stored in an environment of 0*4 °C. Before installing the dynamic
A Brief Analysis of Wearable Electronic Medical Devices
413
blood glucose monitoring system on a patient, the probe should be taken out and kept at room temperature for at least 30 min, and the alkaline battery should be replaced monthly. It is recommended that the probe be implanted about 2–3 h after breakfast or lunch, i.e. 9:00–10:00 or 14:00–15:00, to ensure that the patient is in a relatively fasting state at the end of initialization and to avoid calibration errors during periods of large blood glucose fluctuation. The abdomen is the preferred site; for patients who are very thin, or whose abdominal subcutaneous fat is too scarce or extremely flabby, a fat-rich area on the flank or back should be chosen. An implantation angle of 45°–60° is appropriate. After the probe has soaked fully for 10–15 min, connect the cable to the probe and fix it properly. The probe is then started and initialized for 60 min; after straightening the cable, the instrument is placed in the pocket of the patient's underwear. During initialization, a current value ranging from 5 to 200 nA with stable fluctuation is normal. The semi-permeable membrane, glucose oxidase and microelectrode are the three main components of the blood glucose sensing head. Interstitial glucose first passes through the semi-permeable membrane to the probe, where it reacts with the internal glucose oxidase to produce free electrons; the resulting current is recorded as an electrical signal by the recorder, from which the interstitial glucose concentration can be calculated indirectly.
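The conversion from recorded current to an interstitial glucose estimate relies on the correlation described above; in the simplest case, one fingerstick calibration fixes a sensitivity factor. The sketch below is only illustrative (function names and numbers are invented), not the device's actual proprietary algorithm:

```python
# Illustrative one-point calibration of a CGM sensor current (nA) to an
# interstitial glucose estimate (mmol/L). Real CGMS devices use more
# elaborate, proprietary algorithms; this sketch only shows the principle
# that the measured current is assumed proportional to glucose concentration.

def calibrate_sensitivity(reference_glucose_mmol, sensor_current_na):
    """Estimate sensitivity (nA per mmol/L) from one fingerstick reading."""
    return sensor_current_na / reference_glucose_mmol

def current_to_glucose(current_na, sensitivity):
    """Convert a raw current sample to an estimated glucose value."""
    return current_na / sensitivity

# Morning fingerstick reading of 5.5 mmol/L taken while the sensor read 22 nA.
sens = calibrate_sensitivity(5.5, 22.0)   # 4.0 nA per mmol/L
print(current_to_glucose(30.0, sens))     # -> 7.5 mmol/L
```

A later sample of 30 nA then maps to about 7.5 mmol/L; drift in sensitivity is why the daily fingerstick recalibration mentioned above is required.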
5 ISF

There are two main approaches to measuring interstitial fluid glucose. One is to use a weak current to draw glucose out through the skin and then measure it, mainly by reverse iontophoresis or microdialysis. The GlucoWatch Biographer, a slightly oversized watch worn on the wrist, was already on the market abroad. The device automatically displays a glucose reading every 10 min, and the osmotic pad on its back can be used for 13 h at a time. Although the result is not highly accurate and the response lags behind blood glucose, automatic continuous measurement offers a better chance of catching hypoglycemia or hyperglycemia than occasional capillary blood glucose (CBG) measurements. The FDA stated that insulin doses must not be adjusted on the basis of a single measurement: this glucose meter is not a substitute for CBG measurement, but can be used as a supplement to it. However, the device takes two to three hours to warm up; an invasive glucometer is still required for calibration; and, more importantly, the product was withdrawn from the market in the early 2000s after reports that reverse iontophoresis can irritate the skin. Later, a team of researchers from the nanoengineering department of the University of California, San Diego developed an iontophoresis platform initially known as a flexible temporary-tattoo sensor. The electrode used for reverse iontophoresis and the glucose biosensor electrodes are screen-printed. This conceptual platform addresses several issues with the GlucoWatch Biographer. First, by reducing the applied iontophoretic current and the glucose detection potential, the skin irritation caused by reverse iontophoresis is reduced. Second, the disposable screen-printed tattoos lower the cost of the equipment. Finally, it attaches easily to the skin surface without interfering with the
414
Y. Du
wearer's movements. The device successfully demonstrated the potential of disposable iontophoresis-based glucose sensing platforms for wearable devices. However, it lacks electronic integration and still needs to be validated for long-term continuous monitoring. The other approach is a glucose sensor implanted under the skin, used in conjunction with an insulin pump for continuous subcutaneous insulin infusion. Tamada [18] reported in 1995 that glucose values measured by passing a low current through the skin correlate highly with blood glucose levels. The principle is that the energy of the low current passing through the skin draws out subcutaneous salts: Cl− and Na+ migrate to the positive and negative electrodes respectively, carrying out water and glucose, and analysis of this anion flow yields the subcutaneous interstitial glucose concentration, which tracks blood glucose and can be examined repeatedly and continuously. In 1997, Bantle et al. [19] compared the glucose values of forearm subcutaneous interstitial fluid, microvascular blood and venous plasma in 17 patients with type 1 diabetes; the correlation coefficient (r) was 0.95 and the absolute difference was 21 mg/dl, indicating that measurement of interstitial fluid glucose has great development potential. There are three reasons why CGMS uses interstitial glucose in place of blood glucose: 1. A large number of experiments have proved that, under steady-state conditions, the glucose concentration in interstitial fluid is equal or strictly corresponds to plasma glucose. However, shortly after intake of high-sugar food or a glucose injection, blood glucose changes faster than interstitial fluid glucose, with a lag of about 5 min; in general, the two may differ within 1.5 h after a meal. 2.
Because glucose absorption and metabolism take place in cells, molecular exchange between blood and cells occurs through capillaries with tissue fluid (intercellular fluid) as the medium. Therefore, the glucose concentration in tissue fluid better represents the real physiological glucose level of the organism and is more representative clinically. 3. The glucose content of blood changes with the state of the organism; once blood leaves the body, a glucose value can only be measured at a single static point in time. Dynamic blood glucose monitoring aims to track the wearer's blood glucose changes throughout the day, so interstitial glucose within the body is chosen for detection.
References 1. Clark, L.C., Lyons, C.: Electrode systems for continuous monitoring in cardiovascular surgery. Ann. N. Y. Acad. Sci. 102, 29–45 (1962) 2. Updike, S.J., Hicks, G.P.: The enzyme electrode. Nature 214, 986–988 (1967) 3. Clark, L.C.: Reduction of iron content in bleaching fibrous cellulose. US Patent 3,539,445, Nov. 1970
4. Guilbault, G.G., Lubrano, G.J.: An enzyme electrode for the amperometric determination of glucose. Anal. Chim. Acta 64, 439–455 (1972) 5. Feng, D.: Research status and development direction of biosensors. Shandong Sci. 12(4), 1–6 (1999). (in Chinese) 6. Cass, A.E.G., et al.: Application of tetrathiafulvalenes in bioelectrochemical processes. Anal. Chem. 56, 667 (1984) 7. Hale, P.D., Inagaki, T., Lee, S.H., et al.: Amperometric glycolate sensors based on glycolate oxidase and polymeric electron transfer mediators. Anal. Chim. Acta 228, 31–37 (1990) 8. Guo, L., Sun, C., Gao, Q.: Study on the modification of platinum with tetrathiafulvalene as glucose sensor. J. Northeast Norm. Univ. (Nat. Sci. Ed.) 2, 57–59 (1994). (in Chinese) 9. Wang, J., Ma, J., He, B.: Polymer materials in bioelectrochemical sensors. Polym. Bull. 2, 77–81 (1999). (in Chinese) 10. Paul, D.H., Leonid, I.B., Toru, I., et al.: Amperometric glucose biosensors based on redox polymer-mediated electron transfer. Anal. Chem. 63(3), 677–682 (1991) 11. Zhu, B.-S., Ying, T.-L., Zhang, X.-L., Qi, D.-Y.: β-CDP-1,1′-dimethylferrocene glucose biosensor. J. Shanghai Univ. (Nat. Sci. Ed.) 5(4), 353–356 (1999) 12. Liu, B., Li, Q., Zhang, Z.: The ability of platinized microenzyme electrodes to catalyze H2O2 at low polarization potential. J. East China Univ. Sci. Technol. 25(2), 194–197 (1999) 13. Koopal, C.G.J., Feiters, M.C., Nolte, R.J.M., et al.: Glucose sensor utilizing polypyrrole incorporated in track-etch membranes as the mediator. Biosens. Bioelectron. 29, 159 (1992) 14. Koopal, C.G.J., Feiters, M.C., Nolte, R.J.M., et al.: The third generation amperometric biosensor for glucose: polypyrrole deposited within a matrix of uniform latex particles as mediator. J. Bioelectrochem. Bioenerg. 7, 461 (1992) 15. Zhang, G., Pan, X., Kan, J.: Acta Phys.-Chim. Sin. 19(6), 533–537 (2003). (in Chinese) 16.
Wang, R., Guo, X., Wu, X.-Q., Zhang, Z.-P.: A glucose sensor based on Prussian blue (PB) membrane modified platinum electrode. Chem. Res. Appl. 13(4), 380–382 (2001) 17. Liu, H.-M., Liu, C.-X., Jiang, L.-Y., Liu, J., Yang, Q., Guo, Z.-H., Wang, L., Cai, X.-X.: Preparation and response characteristics of an osmium polymer modified low-concentration glucose sensor. J. Sens. Technol. 21(2), 215–218 (2008) 18. Fermi, S., Jovanovic, L., Tamada, J.A., et al.: Noninvasive glucose monitoring: comprehensive clinical results. Cygnus Research Team. JAMA 282(19), 1839–1844 (1999) 19. Gao, C.: Progress in noninvasive detection of human glucose. Contemp. Med. 15(13), 20–22 (2009)
Review and Prospect of Text Analysis Based on Deep Learning and Its Application in Macroeconomic Forecasting

Yao Chen1,2

1 School of Statistics and Mathematics, Zhongnan University of Economics and Law, Wuhan, Hubei Province, China [email protected]
2 School of Mathematics and Statistics, Hubei Minzu University, Enshi, Hubei Province, China
Abstract. Today machine learning is applied in economics, finance and other fields. Text information is real-time and of high value, and is widely used in sentiment analysis and prediction. This article reviews research on machine learning and deep learning for text analysis, and summarizes the application of text analysis in macroeconomic prediction. Finally, it puts forward the development direction of macroeconomic prediction based on deep learning and text analysis, constructs an overall research framework, and proposes ideas for future development.

Keywords: Text analysis · Deep learning · Machine learning · Macroeconomic forecasting
1 Introduction

With the continuous progress of computer technology and the rapid popularization of the Internet, real-time data of all kinds are growing exponentially. Massive text information is real-time, high-frequency and high-value, and how to extract valuable information quickly and efficiently from massive, unstructured text has become a hot research topic. A large number of scholars at home and abroad have carried out research on text analysis, mainly in the economic, financial and political fields. Text analysis based on machine learning has achieved good results in the prediction of economic indexes, the measurement of economic policy uncertainty, the measurement and prediction of business cycles, and the analysis of investor sentiment. This paper gives a detailed literature review of relevant research at home and abroad, compares different methods, identifies their advantages and disadvantages, and summarizes the contribution of the existing literature to text analysis and macroeconomic prediction.
© The Editor(s) (if applicable) and The Author(s), under exclusive license to Springer Nature Singapore Pte Ltd. 2021 M. Atiquzzaman et al. (Eds.): BDCPS 2020, AISC 1303, pp. 416–423, 2021. https://doi.org/10.1007/978-981-33-4572-0_61
2 Review of Text Analysis Based on Machine Learning

At present, many domestic and foreign scholars have studied text analysis using machine learning methods. According to the degree of manual participation in the learning process, machine learning methods are divided into supervised learning, unsupervised learning and semi-supervised learning.

2.1 Supervised Learning
Supervised learning is the most common machine learning approach. It trains a model on manually labeled training samples so as to classify test data correctly. Supervised learning methods include Naive Bayes (NB), k-Nearest Neighbor (KNN), Maximum Entropy (MaxEnt), Decision Tree (DT) and Support Vector Machine (SVM); the most commonly used are SVM and NB. Pang and co-workers were the first to apply machine learning to classifying film review texts, showing that machine learning algorithms perform better in sentiment analysis than earlier methods based on emotion dictionaries [1]. Antweiler and Frank used NB to analyze text sentiment [2]. NB assumes that different words are uncorrelated, which is rarely true in practice; SVM can accommodate such correlation. SVM is mainly used in the financial field. For example, Manela and Moreira used the one-hot method to vectorize front-page news of The Wall Street Journal and then used support vector regression to construct an implied volatility index [3]. Chen and colleagues used SVM to classify the sentiment of posts in Chinese online forums [4]. Jianwang Sun and his team proposed combining sentiment-dictionary analysis with machine learning, using position weights based on characteristic polarity values, which effectively improves the accuracy of text analysis [5]. Supervised learning achieves better accuracy in text analysis than methods based on sentiment dictionaries alone. However, in the era of big data, labeling massive data is costly, and prediction performance depends closely on the quantity and quality of the manually labeled training data: when the training set is small or unbalanced, classification deteriorates.
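The conditional-independence assumption of NB discussed above is easiest to see in code. Below is a from-scratch multinomial Naive Bayes classifier on a tiny invented training set; note that each word contributes an independent log-probability term, which is exactly the assumption that SVM-based approaches relax:

```python
import math
from collections import Counter, defaultdict

# Minimal multinomial Naive Bayes for sentiment classification, written from
# scratch to make the independence assumption explicit: a document's score is
# the sum of independent per-word log-probabilities plus a class prior.
# The tiny training set below is invented purely for illustration.

def train_nb(docs):
    """docs: list of (token_list, label). Returns priors, counts, vocabulary."""
    class_docs, word_counts, vocab = Counter(), defaultdict(Counter), set()
    for tokens, label in docs:
        class_docs[label] += 1
        word_counts[label].update(tokens)
        vocab.update(tokens)
    return class_docs, word_counts, vocab

def predict_nb(tokens, class_docs, word_counts, vocab):
    total_docs = sum(class_docs.values())
    best_label, best_score = None, float("-inf")
    for label in class_docs:
        score = math.log(class_docs[label] / total_docs)
        n_words = sum(word_counts[label].values())
        for tok in tokens:
            # Laplace smoothing; each word contributes independently.
            p = (word_counts[label][tok] + 1) / (n_words + len(vocab))
            score += math.log(p)
        if score > best_score:
            best_label, best_score = label, score
    return best_label

train = [(["good", "movie", "great", "plot"], "pos"),
         (["excellent", "acting", "good"], "pos"),
         (["bad", "boring", "plot"], "neg"),
         (["terrible", "bad", "acting"], "neg")]
model = train_nb(train)
print(predict_nb(["good", "acting"], *model))   # -> pos
```

Because word co-occurrence is ignored, phrases like "not good" are scored as two unrelated words, illustrating the weakness the text notes.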
At the same time, this kind of learning model has poor portability, so different models need to be built for different fields.

2.2 Unsupervised Learning
The unsupervised learning method does not need manually labeled training samples: training samples are fed directly into the computer, and the model automatically divides the data into several categories according to the correlations between them. Unsupervised learning is mainly data-driven and is used chiefly for clustering and dimensionality reduction; methods include K-means clustering and Density-Based Spatial Clustering of Applications with Noise (DBSCAN). Ester and others found that the accuracy of text analysis with unsupervised learning is lower than with supervised learning, and therefore proposed semi-supervised learning to improve analysis accuracy [6].
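K-means, the first clustering method named above, can be sketched in a few lines. The 2-D points and k = 2 below are illustrative only; in text analysis the same loop runs over high-dimensional document vectors:

```python
# Bare-bones K-means with naive initialization (first k points as centers;
# k-means++ is preferred in practice). Data are synthetic 2-D points chosen
# so that two clusters are clearly separated.

def kmeans(points, k, iters=20):
    centers = list(points[:k])
    for _ in range(iters):
        # Assignment step: each point joins its nearest center's cluster.
        clusters = [[] for _ in range(k)]
        for p in points:
            i = min(range(k),
                    key=lambda c: (p[0]-centers[c][0])**2 + (p[1]-centers[c][1])**2)
            clusters[i].append(p)
        # Update step: each center moves to the mean of its cluster.
        for i, cl in enumerate(clusters):
            if cl:
                centers[i] = (sum(p[0] for p in cl) / len(cl),
                              sum(p[1] for p in cl) / len(cl))
    return centers, clusters

pts = [(0.0, 0.1), (0.2, 0.0), (0.1, 0.2), (5.0, 5.1), (5.2, 4.9), (4.9, 5.0)]
centers, clusters = kmeans(pts, k=2)
print(sorted(len(c) for c in clusters))   # -> [3, 3]
```

No labels are supplied anywhere: the partition emerges purely from distances between points, which is what "data-driven" means in the passage above.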
2.3 Semi-Supervised Learning
The semi-supervised learning method is a compromise: it manually labels only a small number of samples while using a large amount of unlabeled data as training material, improving model accuracy at the same time. Li and team members constructed a semi-supervised ensemble model based on Bagging and used K-means clustering to selectively label unlabeled data sets, improving labeling accuracy [7]. Read and colleagues used a semi-supervised sentiment classification algorithm to measure the similarity between words, thus avoiding the limitations of domain- and topic-specific data and achieving good results in sentiment analysis [8]. Jin Xiao and others combined semi-supervised learning with cost-sensitive learning and multi-classifier technology, proposing a cost-sensitive semi-supervised ensemble model for target customer selection (CSSE). It overcomes the unbalanced distribution of sample categories, trains the model with semi-supervised learning, and finally classifies by integrating classifiers, performing well in target customer selection [9]. Semi-supervised machine learning saves, to a certain extent, the time and manpower required by supervised learning, striking a balance between the accuracy of text sentiment analysis and the consumption of human and material resources [10]. In machine learning methods all words are treated as independent, so word order cannot be modeled well and the context of the text is not considered; at the same time, manual annotation of text features raises costs. Methods based on deep learning emerged in response to this situation.
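One simple semi-supervised scheme in the spirit of the methods above is self-training: fit on the labeled data, pseudo-label the unlabeled points the model is most confident about, and refit. The 1-D threshold classifier and all data below are invented for illustration only (the scores could stand for, say, a document's positive-word ratio):

```python
# Minimal self-training loop. A threshold classifier is fit on a handful of
# labeled scores; unlabeled scores far from the threshold are pseudo-labeled
# and folded back into the training set, then the classifier is refit.

def fit_threshold(xs_pos, xs_neg):
    """Midpoint between the two class means serves as the decision threshold."""
    mean = lambda xs: sum(xs) / len(xs)
    return (mean(xs_pos) + mean(xs_neg)) / 2

def self_train(labeled_pos, labeled_neg, unlabeled, margin=0.15, rounds=5):
    pos, neg, pool = list(labeled_pos), list(labeled_neg), list(unlabeled)
    for _ in range(rounds):
        t = fit_threshold(pos, neg)
        confident = [x for x in pool if abs(x - t) > margin]
        if not confident:
            break                              # nothing left to pseudo-label
        for x in confident:                    # pseudo-label confident points
            (pos if x > t else neg).append(x)
        pool = [x for x in pool if x not in confident]
    return fit_threshold(pos, neg)

# Two labeled examples per class, five unlabeled scores.
t = self_train([0.8, 0.9], [0.1, 0.2], [0.85, 0.15, 0.7, 0.3, 0.55])
print(round(t, 2))   # -> 0.5
```

Only four labels were supplied, yet most of the pool ends up labeled; ambiguous points near the threshold (here 0.55) are deliberately left alone, which is how self-training limits label noise.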
3 Review of Text Analysis Based on Deep Learning

Deep learning belongs to unsupervised learning. It classifies texts through multiple nonlinear transformations. Machine learning methods use the one-hot scheme to represent word information, which leads to sparse data and the curse of dimensionality and ignores the relationships between words. A deep neural network instead trains high-dimensional features into a low-dimensional feature matrix, constructing distributed word vectors that capture the semantic and syntactic correlations between words. Deep learning does not need manually labeled features: it trains and extracts features by itself, has obvious advantages with nonlinear, high-dimensional data, and can take into account the surrounding structure and context semantics of the text, so its prediction accuracy is higher. Commonly used methods are the Deep Neural Network (DNN), Convolutional Neural Network (CNN), Recurrent Neural Network (RNN), Long Short-Term Memory (LSTM) and so on. Compared with previous methods, text analysis based on deep learning has obvious advantages and significantly improved prediction accuracy, and it is widely applied. As early as 1986, Hinton used neural networks to train word vectors. DNN handles text classification and semantic analysis by increasing the number of network layers and reducing the number of nodes per layer [11]. Because DNN has too many parameters and ignores local structural features, Kim
first proposed the CNN method [12]. By limiting the number of parameters, adding features of words in the local structure, and building on Word2vec embeddings, CNN greatly improves the accuracy of text classification. The RNN model obtains the context of words through a recursive algorithm [13]. Some scholars combine the CNN and RNN models into an RCNN model for text classification [14]. Chen and colleagues first used a CNN model to measure the sentiment of retail investors in China's stock market; a comparison showed that with 40,000 training samples the prediction performance of CNN and SVM was equivalent, but as the number of training samples grew, CNN became relatively better [4]. Yan Cheng and team members combined CNN with RNN, added an attention mechanism to the hierarchical model, and used a bidirectional recurrent neural network to capture context, improving the accuracy of Chinese text analysis [15]. The LSTM method improves on RNN by adding a gating mechanism to the neural units, thereby handling the vanishing and exploding gradients of RNN. LSTM networks are mainly used for time-series data: Fu and others built LSTM and Gated Recurrent Unit (GRU) neural network models on traffic flow data to predict short-term traffic flow, with better results than the ARIMA model [16].
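The gating mechanism credited above with taming vanishing and exploding gradients can be made concrete with one forward step of an LSTM cell. The sketch below uses scalar states and arbitrary illustrative weights rather than trained matrices:

```python
import math

# One forward step of a scalar LSTM cell. Input, forget and output gates
# decide how much of the cell state is kept, updated and exposed; the
# additive cell-state update is what eases gradient flow over long sequences.
# All weights below are arbitrary illustrative numbers, not trained values.

def sigmoid(x):
    return 1.0 / (1.0 + math.exp(-x))

def lstm_step(x, h_prev, c_prev, W):
    """W maps gate name -> (w_x, w_h, bias); all quantities are scalars."""
    g = {name: W[name][0]*x + W[name][1]*h_prev + W[name][2] for name in W}
    i = sigmoid(g["input"])          # how much new information enters
    f = sigmoid(g["forget"])         # how much old cell state survives
    o = sigmoid(g["output"])         # how much of the state is exposed
    c_tilde = math.tanh(g["cell"])   # candidate state update
    c = f * c_prev + i * c_tilde     # additive update of the cell state
    h = o * math.tanh(c)             # hidden state passed to the next step
    return h, c

W = {"input": (0.5, 0.1, 0.0), "forget": (0.3, 0.2, 1.0),
     "output": (0.4, 0.1, 0.0), "cell": (0.8, 0.3, 0.0)}
h, c = 0.0, 0.0
for x in [1.0, 0.5, -0.3]:           # a toy 3-step input sequence
    h, c = lstm_step(x, h, c, W)
print(round(h, 4), round(c, 4))
```

Because the cell state `c` is updated by addition rather than repeated multiplication, gradients along it decay far more slowly than in a plain RNN, which is the point of the gating design described above.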
4 The Application of Text Analysis in Macroeconomic Prediction

The limitations of machine learning methods are as follows. First, users cannot understand how the algorithms arrive at a prediction (the black-box problem). Second, the algorithms cannot provide the credibility or confidence intervals of point forecasts. Third, the predictions are not supported by economic theory. Hence research on fusing text data with numerical data has emerged: the traditional prediction approach based on numerical data and statistical-econometric methods is combined with prediction based on text data analysis. This can overcome both the limitations of machine learning methods and the low accuracy and poor timeliness of traditional prediction methods. Scholars at home and abroad have made several attempts along these lines. Schumaker and Chen used financial news and current stock price data to predict the stock price after a report was published [17]; the directional accuracy of the forecast price against the future price was 57.1%, and the highest return of the simulated trading was 2.06%. Taoxiong Liu and others studied the effect of Internet search behavior on macroeconomic prediction [18]. The results show that relying on Internet search behavior alone does not predict well, but adding it to forecasts built on government statistical data improves prediction accuracy by 39% on average compared with the other models in their paper. Cunjie Lin and team members proposed an improved ARGO model [19], adding time-series effects to the web search data along with dynamic information from the US Centers for Disease Control and Prevention (CDC) and seasonal factors of suspected influenza-like illness (ILI) cases. They used historical
information to predict influenza trends, thus improving prediction accuracy. Akita and others [20] used paragraph vector (PV) technology and LSTM to convert newspaper articles into vectors, standardized the stock prices of 10 companies into vectors, and combined the two groups of vectors into a distributed representation. The returns predicted with this distributed representation were significantly higher than those of other forecasts, and the LSTM effectively captures the time-series effects of the input data. Yingmei Xu and others used the search heat index of relevant keywords as a leading indicator for the consumer price index (CPI) and constructed CPI public opinion indexes that help improve the accuracy and timeliness of CPI prediction [21]. Xu and Cohen proposed a generative model (StockNet) to handle the high randomness of the market and the time dependence of future stock prices, using text data and stock price information to predict price movements [22]. Using stock-related Twitter text and the Yahoo Finance historical prices of 88 stocks for simulation, the StockNet model reached a prediction accuracy of 58.23%, better than earlier methods. Liu and Wang put forward a number-based attention (NBA) method [23] and found that prediction accuracy and noise control improve to varying degrees with numerical attention; the accuracy rate increased by 6.04%. Junhao Zhao and colleagues constructed a macroeconomic prediction model (SA-LSTM) integrating micro-blog sentiment analysis and deep learning [24], using traditional statistical data and micro-blog text data to predict total fixed-asset investment. Among the various models, SA-LSTM has smaller relative errors and better generalization ability.
The literature above has already taken up macroeconomic prediction integrated with text data, but the integration is not comprehensive and does not yet form a systematic framework. Macroeconomic forecasting based on text data integration should rest on a reliable theoretical system; data integration should fuse web text data, statistical survey data from government statistical departments, and online quantitative data; and the analysis methods should extend to combinations of traditional statistical methods and deep learning methods, so as to build a more reliable prediction model and make macroeconomic forecasts more accurate and timely.
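As a minimal illustration of the text-plus-numbers fusion these studies pursue, one can regress a macro indicator on its own lag plus a text-derived sentiment score. All data below are synthetic and the model is plain OLS solved via the normal equations, far simpler than the SVR or LSTM models cited above:

```python
# Toy "data fusion" regression: current index ~ intercept + lagged index
# + text sentiment score. OLS is solved by Gauss-Jordan elimination on the
# normal equations (X'X) b = X'y. All numbers are invented for illustration.

def ols(X, y):
    """X: rows [1, x1, x2, ...] including a bias 1; returns coefficients."""
    k = len(X[0])
    # Build the augmented normal-equation matrix [X'X | X'y].
    A = [[sum(X[r][i] * X[r][j] for r in range(len(X))) for j in range(k)]
         + [sum(X[r][i] * y[r] for r in range(len(X)))] for i in range(k)]
    for col in range(k):                     # Gauss-Jordan with partial pivoting
        piv = max(range(col, k), key=lambda r: abs(A[r][col]))
        A[col], A[piv] = A[piv], A[col]
        for r in range(k):
            if r != col and A[col][col]:
                m = A[r][col] / A[col][col]
                A[r] = [a - m * b for a, b in zip(A[r], A[col])]
    return [A[i][k] / A[i][i] for i in range(k)]

# rows: [1, lagged index, sentiment score]; target: current index
X = [[1, 100, 0.2], [1, 101, 0.5], [1, 103, -0.1], [1, 102, 0.4], [1, 104, 0.0]]
y = [101, 103, 102.5, 103.5, 104.5]
b0, b_lag, b_sent = ols(X, y)
forecast = b0 + b_lag * 105 + b_sent * 0.3   # next-period fused forecast
print(round(forecast, 2))
```

Dropping the sentiment column recovers a pure autoregression, so the gain from the text feature can be measured directly, which is how the cited studies quantify the value of text data.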
5 Prospect of Integrating Text Analysis into Macroeconomic Forecasting

Research on text data helps predict macroeconomic indicators, and with the improvement of methods, text analysis can raise the accuracy of macroeconomic prediction. To improve the timeliness and accuracy of macroeconomic forecasting, a research approach based on machine learning and text analysis is put forward here. To integrate text data with other numerical data for macroeconomic prediction, the basic ideas are as follows. The first is to construct a theoretical analysis framework for behavior prediction. The framework should integrate text data into the economic prediction model, reveal the predictive value of text data through frontier research methods, study the correlation
mechanism between the potential motivation behind text information and the behavior of economic agents, find the theoretical basis that supports behavior changes, and provide theoretical support for subsequent models. The second is to build a comprehensive database (data fusion). This involves studying the relationship between the characteristics of economic agents and changes in the main macroeconomic indicators, in order to determine appropriate sources of text data and to formulate standards and norms for text data extraction. According to research needs, numerical data are collected from data providers, government departments, statistical websites and so on. A complete database integrating text data (ex ante) and numerical data (ex post) is established to provide data support for subsequent research. The third is to establish a prediction model (method fusion). On the basis of theoretical and data support, data fusion methods are explored, data from multiple sources are integrated, models (including mixed-frequency models and dynamic factor models) are constructed, and the models are tested through empirical research. The overall framework of macroeconomic forecasting integrated with text data should include three parts: text data forecasting theory, a comprehensive database, and model prediction and application (as shown in Fig. 1).
Fig. 1. Framework of macroeconomic forecasting based on text data integration
6 Conclusion This paper analyzes the development path of text analysis and the characteristics of each classification method. Text analysis has been widely used in many fields such as economy and finance, especially in the prediction of economic indicators, the measurement of economic policies’ uncertainty, the measurement and prediction of
business cycles, and the analysis of investor sentiment. With the continuous improvement and innovation of methods, prediction accuracy grows ever higher. In recent years, forecasting methods based on dual information sources (text data and numerical data) have suggested the idea of "double fusion" (data fusion plus method fusion). On this basis we put forward an overall research framework, hoping that its guidance will overcome some shortcomings of existing research and steadily improve the accuracy and timeliness of macroeconomic forecasting with the support of sound economic theory.
References 1. Pang, B., Lee, L., Vaithyanathan, S.: Thumbs up? Sentiment classification using machine learning techniques. In: Conference on Empirical Methods in Natural Language Processing, pp. 10–15, June 2002 2. Antweiler, W., Frank, M.Z.: Is all that talk just noise? The information content of internet stock message boards. J. Financ. 59(3), 9–12 (2004) 3. Manela, A., Moreira, A.: News implied volatility and disaster concerns. J. Financ. Econ. 123(1), 19–29 (2017) 4. Chen, Y., Huang, Z., Li, J., et al.: Can text-based investor sentiment help understand Chinese stock market? A deep learning method. Working Paper, no. 11, pp. 45–51 (2004) 5. Sun, J.W., Lv, X.Q., Zhang, L.H.: Sentiment analysis of Chinese microblog based on dictionary and machine learning. Comput. Appl. Softw. 31(7), 177–182 (2014) 6. Ester, M., Kriegel, H.P., Xu, X.: A density-based algorithm for discovering clusters in large spatial databases with noise. In: International Conference on Knowledge Discovery and Data Mining, pp. 11–21, April 1996 7. Li, Y.Y., Su, L., Chen, J., et al.: Semi-supervised learning for question classification in CQA. Nat. Comput. 16(4), 567–577 (2016) 8. Read, J., Carroll, J.: Weakly supervised techniques for domain-independent sentiment classification. In: Proceedings of the 1st International CIKM Workshop on Topic-Sentiment Analysis for Mass Opinion, pp. 21–27, June 2009 9. Xiao, J., Liu, X.X., Xie, L., et al.: Research on a semi-supervised ensemble model for cost-sensitive target customer selection. China Manag. Sci. 26(11), 189–199 (2018) 10. Titov, I.: Domain adaptation by constraining inter-domain variability of latent feature representation. In: The Meeting of the Association for Computational Linguistics: Human Language Technologies, pp. 19–24, June 2011 11. Hinton, G.E., Salakhutdinov, R.R.: Reducing the dimensionality of data with neural networks. Science 313, 57–86 (2006) 12.
Kim, S.H., Kim, D.: Investor sentiment from internet message postings and the predictability of stock returns. J. Econ. Behav. Organ. 107, 17–22 (2014) 13. Elman, J.L.: Finding structure in time. Cogn. Sci. 14(2), 33–36 (1990) 14. Lai, S., Xu, L., Liu, K., et al.: Recurrent convolutional neural networks for text classification. In: Twenty-Ninth AAAI Conference on Artificial Intelligence, pp. 121–123, July 2015 15. Cheng, Y., Ye, Z.M., Wang, M.M., et al.: Sentiment orientation analysis of Chinese text based on convolutional neural network and hierarchical attention network. Acta Sinica Sinica 33(01), 37–55 (2019) 16. Fu, R., Zhang, Z., Li, L.: Using LSTM and GRU neural network methods for traffic flow prediction. In: 2016 31st Youth Academic Annual Conference of Chinese Association of Automation, pp. 56–63, 2016
17. Schumaker, R.P., Chen, H.: Textual analysis of stock market prediction using breaking financial news: the AZFinText system. ACM Trans. Inf. Syst. (TOIS) 27(2), 11–15 (2009) 18. Liu, X.T., Xu, X.F.: Can internet search behavior help us predict macro-economy? Econ. Res. 50(12), 68–83 (2015) 19. Lin, C.J., Li, Y.: Big data analysis still needs statistical thinking – taking the ARGO model as an example. Stat. Res. 33(11), 109–112 (2016) 20. Akita, R., Yoshihara, A., Matsubara, T., et al.: Deep learning for stock prediction using numerical and textual information. In: 2016 IEEE/ACIS 15th International Conference on Computer and Information Science (ICIS), pp. 27–35, August 2016 21. Xu, Y.M., Gao, Y.M.: Construction and application of a CPI public opinion index based on internet big data – taking the Baidu index as an example. J. Quant. Tech. Econ. 34(1), 94–112 (2017) 22. Xu, Y., Cohen, S.B.: Stock movement prediction from tweets and historical prices. In: Proceedings of the 56th Annual Meeting of the Association for Computational Linguistics, pp. 34–36, June 2018 23. Liu, G., Wang, X.: A numerical-based attention method for stock market prediction with dual information. IEEE Access 7, 7357–7367 (2019) 24. Zhao, J.H., Li, Y.H., Huo, L., et al.: Macroeconomic prediction method integrating micro blog sentiment analysis and deep learning. Comput. Appl. 38(11), 11–16 (2018)
The Research on the Construction of the "Online + Offline" Hybrid Teaching Mode of College English Under the "Internet +" Background

Ruili Chen

Department of College English Teaching, Zaozhuang University, Zaozhuang, Shandong, China [email protected]
Abstract. Driven by network and information technology, and especially by the concept of “Internet +”, college English teaching has been greatly affected. The traditional classroom-based teaching model can no longer meet the basic needs of social development at the current stage. Under the “Internet +” background, the “online + offline” college English hybrid teaching model has ushered in new development opportunities. Based on actual teaching experience, the author analyzes the opportunities and challenges faced by college English teaching at the current stage and proposes the construction of an “online + offline” hybrid teaching model for college English under the “Internet +” background, in order to improve the effect of college English teaching. Keywords: “Internet +” College English Online + Offline Mixed teaching mode
1 Introduction The “online + offline” hybrid learning theory was first proposed by Michael Power. This teaching mode draws on traditional e-learning and asynchronous learning technology while keeping pace with the development of higher education. “Online + offline” hybrid college English teaching greatly improves students’ initiative, meets their basic demand for individualized learning, and promotes the healthy development of higher English education in China. Compared with the traditional teaching model, the “online + offline” blended learning theory changes the roles of teachers and students: teachers shift from leaders of traditional teaching to assistants in and participants of students’ learning, and students become the main body of the whole English learning process. At the same time, hybrid teaching methods allow students to repeatedly review knowledge points they have not firmly grasped, use the advantages of platform resources to broaden their knowledge, select appropriate teaching materials, and use information technology to check and fill gaps in time, so as to improve both teaching quality and students’ overall quality [1].
© The Editor(s) (if applicable) and The Author(s), under exclusive license to Springer Nature Singapore Pte Ltd. 2021 M. Atiquzzaman et al. (Eds.): BDCPS 2020, AISC 1303, pp. 424–429, 2021. https://doi.org/10.1007/978-981-33-4572-0_62
2 Opportunities and Challenges Faced by English Teaching Under the Background of “Internet +” Under the “Internet +” background, the way people obtain information and knowledge has undergone great changes, and the English teaching model faces new opportunities and challenges. In the 2015 government work report, Premier Li Keqiang first put forward the concept of “Internet +”; drawing on the advantages of Internet technology in English teaching can bring major changes to traditional English teaching models and concepts. Under the “Internet +” model, English teaching is no longer limited to classrooms and teaching materials, nor is it restricted in time and space. Teachers can take full advantage of the Internet platform to promote the transformation of college English teaching concepts and models [2]. 2.1
Change Traditional Learning Concepts and Methods
Under the information technology revolution, people can acquire knowledge in more open and convenient ways. Intelligent mobile communication technology based on the Internet has developed rapidly, and college English teaching methods have undergone major changes. Independent, personalized learning and group cooperative learning have ushered in broad space for development [3]. Textbooks and classroom lectures are no longer the only way for students to obtain information, and English learning has become more “online + offline”. The Internet provides a huge number of information resources for English learning, and the learning modes are more diverse. The Internet and communication technologies have built a broad information platform for teachers and students, giving them more autonomy in learning time, place and content; students can freely choose learning methods and learning objects. The “Internet +” teaching model breaks through the limitations of the traditional teaching model, further broadens the learning path, changes the state of students passively receiving knowledge, and enhances students’ enthusiasm and subjective initiative in English learning [3]. 2.2
Challenge Traditional Teaching Concepts and Models
The Internet is based on the concept of serving users and focuses on user experience. Using the advantages of the Internet, college English teaching can optimize its teaching modes, such as flipped classrooms, micro-classes, and MOOCs. “Internet +” English teaching is not simply an online course in the literal sense; this new course model pays more attention to user experience and is an improvement on traditional classroom teaching. A well-built MOOC makes knowledge highly structured, and classroom content is rationally re-planned. The curriculum becomes an organic whole composed of a series of relatively independent but logically related knowledge units, modules, and points. Students can log on to the network at any time to conduct independent or group learning on content they are interested in or that is designated by the teacher, without being restricted by time and space. This changes the traditional teaching model with teachers as the main body, and students become
the main body of teaching and learning [4]. Under the new classroom teaching model, teachers can use the resource benefits of information technology to enhance the pertinence and effectiveness of college English teaching. 2.3
Impact Traditional Teaching Activities
In the “Internet +” era, college English teachers must not only have a solid knowledge structure and the ability to control the classroom, but also a strong capacity for resource extraction, relying on the advantages of modern technology and giving play to the practical effects of the Internet. The Internet offers a wealth of English learning resources; teachers must have strong information screening capabilities to select beneficial learning information and to improve the pertinence and effectiveness of college English teaching. In actual teaching, classroom content should be developed in a targeted way, taking into account the basic characteristics and individual needs of students, formulating long-term English teaching goals, stimulating students’ enthusiasm for learning English, strengthening classroom teaching practice, and promoting the actual effect of English teaching [5]. Teachers should follow the trend of the times, optimize teaching concepts, improve professional quality, expand teaching ideas, and provide a reliable guarantee for the quality of college English teaching. Teachers can also take advantage of the Internet to make micro-class videos, become proficient in network and communication technology software, strengthen communication and interaction with students, and improve the actual effect of online learning. Combining communication technology with the traditional teaching mode organically cultivates students’ logical thinking, self-learning ability and expression ability, and effectively improves students’ comprehensive college English level [6].
3 The Strategies for Constructing “Online + Offline” Hybrid Teaching of College English in the Background of “Internet +” 3.1
Online + Offline Hybrid Learning Model Application in Classroom
Under the “Internet +” “online + offline” mixed English teaching mode, it is indeed necessary to strengthen the combination of classroom teaching with online and offline teaching [6]. This mixed teaching mode is more targeted and can be used anytime and anywhere according to the actual situation; the online and offline mixed English teaching model is shown in Fig. 1. Teachers should take advantage of information technology and, through various information-based teaching platforms and resources such as high-quality open courses, PPT courseware and online English learning platforms, introduce high-quality learning resources into the curriculum, so that students can acquire knowledge quickly and conveniently and expand the scope of English learning [7]. Teachers should also realize that classroom learning time is relatively limited, and that high-quality resources can be pushed to students, who can study them on mobile devices to improve the effect of English learning. Moreover, students can also choose online resource courses based on their own foundation and hobbies to improve the pertinence and effectiveness of English learning [7].
Fig. 1. Online and offline mixed English teaching model
3.2
Improve the Evaluation of English Teaching in College Under the “Internet +” Model
Teaching evaluation occupies a very important position in college teaching. Under the “Internet +” model, college English teaching evaluation is mainly carried out at two levels. The first is process evaluation. With the effective application of the Internet teaching model, process evaluation is mainly conducted online and mainly includes online video resource browsing, information resource discussion, embedded problems, online homework, unit testing, etc. The second is offline evaluation. According to the characteristics of English teaching, offline evaluation is mainly based on oral and written tests, as shown in Fig. 2. Through the organic integration of the two evaluation modes, the actual evaluation method is arranged according to the specific characteristics of the students. For example, some college students have poor self-control; under the traditional offline evaluation model, many students are not serious in class and only cram before exams. In this case, the proportion of online evaluation can be appropriately increased, and the results of students watching videos and taking online tests can be automatically recorded by the system, which increases the time students spend learning English and strengthens process evaluation under “Internet +” [8].
Fig. 2. Online and offline English teaching evaluation based on the Internet
3.3
Strengthen Students’ Mobile Autonomous Learning Before Class
Teachers should plan well before class, choose textbooks reasonably based on students’ specific situations, extend relevant knowledge around the key and difficult points of the textbook, and use pictures, videos, etc. to expand on these points and strengthen students’ understanding and mastery of them. Mobile platforms should be used to strengthen communication and exchange between teachers and students and among students, so that students are fully prepared; they can choose the time and place of class according to their actual situation and independently control the content and progress of learning [9]. In this process, when students encounter problems they do not understand, they can discuss and communicate on the platform in time. According to their different characteristics, they can also study in groups, which strengthens the pertinence of English teaching and helps reinforce the learning effect. Students’ enthusiasm and initiative in learning are thereby greatly improved. Teachers should have a full understanding of the progress of students’ autonomous learning and improve students’ enthusiasm and initiative in autonomous learning. Teachers should make full preparations before class, have a full grasp of every English term in teaching, provide students with a good classroom atmosphere, and be sure to answer all questions and respond to requests [10].
4 Conclusion Given its teaching materials and modes, college English is highly dependent on multimedia technology, so it is particularly suitable for teaching reform based on the mixed teaching model. The blended teaching model reorganizes the curriculum structure by combining online and offline teaching methods, which gives full play to the advantages of online teaching without letting it replace offline teaching. Constructing online and offline blended teaching has many advantages in English teaching: it can improve classroom efficiency, change students’ learning habits, and be more conducive to students’ learning.
References 1. Jian, H.: Exploration of college English teaching model based on multimedia and network environment. Sci. Chin. 11(01), 131–134 (2016) (in Chinese) 2. Guo, S.: Research on the “online + offline” interactive teaching model of college English based on the network environment. J. Henan Radio TV Univ. 7(4), 35–38 (2010) (in Chinese) 3. Huang, M.: The construction of an “online + offline” interactive teaching model for college English in the context of “Internet + teaching.” J. Hubei Corresp. Univ. 30(11), 151–152 (2017) (in Chinese) 4. Xu, L.: An analysis of the college English blended teaching model and strategy under the background of “Internet +.” J. Lanzhou Inst. Educ. 12(2), 137–138 (2017) (in Chinese) 5. Qian, W.: Research on the construction of a hybrid teaching model of “Internet +” higher vocational public English. J. Lanzhou Inst. Educ. 33(11), 148–149 (2017) (in Chinese) 6. Peng, M.: Applied research of “flipped classroom” and micro-class teaching in college English reading and writing. J. Hubei Univ. Econ. 14(02), 201–203 (2015) (in Chinese) 7. Li, T., Pang, J.: On the application of online and offline hybrid teaching mode in higher vocational teaching. High. Educ. Forum 12, 63–66 (2017) (in Chinese) 8. Liu, R.: Exploration of college English listening and speaking teaching based on blended teaching. Innov. Educ. Res. 7(3), 292–294 (2019) (in Chinese) 9. Chen, Q.: Research on the college English blended teaching model based on the “Internet + teaching” background. Res. Cult. Innov. Educ. 8(2), 7–10 (2019) (in Chinese) 10. Guo, J., Wang, L.: The construction of a hybrid college English mobile teaching model under the background of “Internet +.” J. Hubei Corresp. Univ. 32(4), 148–149 (2019) (in Chinese)
The Pronouns of the Odyssey and the Statistical Terms of Shan Hai Jing in Computer Search Technique Dinghui Wang1(&), Jun Luo1, and Zhidan Zhou2 1
School of Foreign Languages, Zhaotong University, Zhaotong, Yunnan, China [email protected] 2 School of Communication, Journalism and Marketing, Massey University, Wallace Street Mt Cook, Wellington, New Zealand
Abstract. By means of computer search and retrieval, this paper analyzes the ratio of the pronouns in Homer’s epic The Odyssey and of the statistical terms in the ancient Chinese descriptive poem Shan Hai Jing (usually translated as Classic of Mountains and Seas). With the help of present computer search techniques, we count the repetitions of the target pronouns “he, his, him, I, my, me” in The Odyssey and the alternative frequency of statistical terms such as names, figures, pronouns and nouns of position in Shan Hai Jing. The high repetition frequency of the target pronouns emphasizes “his experience and possession”, reflecting the hero experiencer identity, while the high alternative frequency of the statistical terms emphasizes “my experience and possession”, suggesting the writer experiencer identity. Keywords: Computer search technique Pronouns The Odyssey Statistical terms Shan Hai Jing
1 Introduction The Odyssey revealed traditional awareness, the social consequences of colonial ethnography and man’s inhumanity to man by deliberately confusing the concepts of “private peace” and “world peace” [1]. In fact, the key to revenge and reconciliation in the fights among family tribes is power [2]. What attracts readers most is the loss and recapture of the hero Odysseus’ power in soul-stirring narration. Shan Hai Jing mainly demonstrated the images of trees, rivers, places, deities etc. [3, 4] in ancient times and the original psychological and philosophical awareness [5, 6]. It is the statistical way of narration and description that makes this ancient geographical book attractive to readers of all ages. The significance of Shan Hai Jing to ancient Asian culture is just like that of The Odyssey to European culture. In both The Odyssey and Shan Hai Jing, gods and other supernatural beings took part in the heroes’ great actions. There are many great men in The Odyssey, but only Odysseus is the main hero. The main hero’s adventures and colorful experiences are demonstrated via pronouns, constituting a unique narrative orientation.

© The Editor(s) (if applicable) and The Author(s), under exclusive license to Springer Nature Singapore Pte Ltd. 2021 M. Atiquzzaman et al. (Eds.): BDCPS 2020, AISC 1303, pp. 430–437, 2021. https://doi.org/10.1007/978-981-33-4572-0_63

It is the
existence of this narrative orientation that makes the legend close to reality and philosophy. But there is no exact hero in Shan Hai Jing, so what constitutes the main body and frame of its legends is the aggregation of different statistical terms. Through statistical terms, the writer introduces and describes the various creatures, rivers, places, deities and great men of the ancient kingdom. This is also a specific narrative orientation. Through different narrative skills, the two ancient classics convey different narrative orientations, indicating contrasting concepts of value systems. In order to study the narrative orientations in the two classics and provide usable data for future research on these works, we carried out this research through computer search techniques.
2 Method Used in the Research Firstly, this research aims to offer a clear way to study the literary significance of Homer’s epic The Odyssey and the ancient Chinese geographical book Shan Hai Jing from the perspective of keyword searching, counting and analysis. Although corpus analysis is widely used in research fields such as language teaching, linguistics, history and computer science, authoritative scholars have pointed out that texts with more than twenty thousand words are not suitable for corpus discourse study [7, 8]. It is also unnecessary to divide The Odyssey into several pieces; besides, one of the main principles of the English context analysis tradition is the complete text [9, 10]. Thus, it is difficult to search and analyze the words in literary works like The Odyssey and Shan Hai Jing using corpus analysis. Secondly, the conservative and traditional way of searching, counting and analyzing words with the look-up function of an office document allows users to delete, revise and mark content easily to ensure the accuracy of the researched information. Moreover, this function is well suited to retrieving and counting single words, and it is easy to operate and repeat. Hence, in this research, we used the word search function of Microsoft Office on Windows XP to search for and count the frequency of the target words and phrases, and then adjusted and revised the data according to the results of manual analysis: we picked out irrelevant words from the computer-selected ones according to the context of each paragraph. The research objects were retrieved from electronic texts of the target poem and epic. In order to ensure readability and authenticity, we chose Shan Hai Jing, edited by Liu Xiang (77 B.C.–7 B.C.) and Liu Xin (50 B.C.–23 A.D.), and The Odyssey translated by George Herbert Palmer as the objects of the study.
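The look-up-and-count procedure described above can be sketched in a few lines of Python. This is an illustrative reconstruction, not the authors’ actual tooling (the paper used Microsoft Word’s search function), and the sample sentence below is invented:

```python
import re
from collections import Counter

TARGET_PRONOUNS = {"he", "his", "him", "i", "my", "me"}

def pronoun_counts(text):
    """Whole-word, case-insensitive counts of the target pronouns,
    mirroring the repeated look-up searches described in the paper."""
    words = re.findall(r"[a-z']+", text.lower())
    return Counter(w for w in words if w in TARGET_PRONOUNS)

# Invented sample line; the real object of study is Palmer's translation.
sample = "He gave his bow to me, and I kept my word for him."
counts = pronoun_counts(sample)
print(counts.most_common())  # each of the six target pronouns appears once
```

The manual step of discarding hits unrelated to Odysseus has no mechanical equivalent here; in practice, each match would still have to be reviewed in context, exactly as the paper describes.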
3 Discussion 3.1
The Repetition Frequency (Ratio) of the Pronouns in the Odyssey
The Odyssey contains contents and chapters such as “Homer, the world of Homer and the Odyssey, introduction, notes, translation, comments” [11] and so on. Before the search, we deleted needless information and picked out the main body, the story of Odysseus from Book I to Book XXIV, as our search target. At first, we regarded the pronouns
“they, their, them, we, our, us, she, her, it and its” as other pronouns, because these pronouns are rarely related to the main hero Odysseus. In the process of computer searching, 4370 such pronouns were found and selected out. Table 1 shows the details.

Table 1. The number of other pronouns: we, our, us, they, their, them, she, her, it & its

Other pronouns  Count (word)  Irrelevant (word)
We              396           –
Us              222           –
Our             250           –
They            711           –
Their           458           –
Them            433           –
She             503           –
Her             619           –
It              695           –
Its             83            –
Total           4370          –
Next, we mainly searched for the target pronouns “he, his, him”, “I, my, me” and the key word “Odysseus” and counted the frequency of these words. The computer search technique helped us mark 7223 target pronouns; then, in the manual process, we read and analyzed whether these pronouns were relevant to the main hero Odysseus or not on the basis of the context and meaning of the text. This time, 1403 pronouns were marked and deleted because they were irrelevant to Odysseus in our manual analysis. As for the key word “Odysseus”, 676 occurrences were recognized and counted with the computer look-up function. Table 2 demonstrates the search and analysis results.

Table 2. The number of target pronouns, Odysseus etc.

Target pronouns  Count (word)  Irrelevant (word)  Relevant (word)
He               1581          101                1480
His              1515          75                 1440
Him              894           57                 837
I                1504          158                1346
My               954           514                440
Me               775           498                277
Total            7223          1403               5820
Other            4370          –                  –
Odysseus         676           0                  676
We counted 11,593 pronouns in total in The Odyssey, including 7223 target pronouns (of which 5820 were relevant) and 4370 other pronouns. The relevant target pronouns accounted for 50.2% of all pronouns. With the addition of the 676 occurrences of the key word “Odysseus”, the total of relevant pronouns and key words is 6496, and the proportion increases to approximately 52.9%. Figure 1 shows the results.
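The proportions follow directly from the table totals; a quick arithmetic check (all figures taken from Tables 1 and 2):

```python
target_total = 7223   # target pronouns found by the search (Table 2)
irrelevant = 1403     # manually judged unrelated to Odysseus
relevant = target_total - irrelevant          # 5820 relevant target pronouns
other = 4370          # other pronouns (Table 1)
odysseus = 676        # occurrences of the name "Odysseus"

all_pronouns = target_total + other           # 11,593 pronouns in total
share = relevant / all_pronouns               # relevant target pronouns alone
share_with_name = (relevant + odysseus) / (all_pronouns + odysseus)

print(f"{share:.1%}")            # 50.2%
print(f"{share_with_name:.1%}")  # 52.9%
```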
Fig. 1. Frequency (ratio) of the target pronouns and “Odysseus” (target pronouns and “Odysseus”: 52.9%; other pronouns: 47.1%)
The results of the research manifest the high frequency of pronouns and key words relevant to the hero Odysseus, which reflects his outstanding identity. In other words, following the hero, readers experience a series of thrilling and legendary tribulations on his way home. The hero of the epic is the direct experiencer of the long story. In this way, the epic seems to step out of the prototype of mythology and become the adventure of a hero that actually happened. The considerable repetition frequency of the relevant pronouns, and the distinct feature of calling the hero by name, “Odysseus”, stress “he experienced” and “he possessed”. So who is “he”? Of course, it is the main character, the hero. 3.2
The Alternative Frequency (Ratio) of the Statistical Terms in Shan Hai Jing
For this part, we counted the quantity and analyzed the proportion of the statistical terms in the whole text of Shan Hai Jing. We uploaded the whole original electronic text of the book (without any notes or translations), and the computer search function identified eighteen chapters and 26,730 Chinese characters in total. Before searching for the statistical terms (words), we divided them into several categories, including general nouns and pronouns for trees, mountains, rivers etc.; characters corresponding to “there be” and “be” in English; figures (cardinal numbers); and nouns of position, according to the traditions and skills of composition in ancient Chinese narrative writing, as well as the meanings of these characters (words) in ancient Chinese. Then, with the help of the computer look-up function, we found 2628 general nouns and 2782 pronouns for the trees, mountains, seas, rivers, beasts, birds and places in total. Among these words, 272 nouns and 35 pronouns were recognized as irrelevant to
statistical terms. The computer also selected and marked 4000 Chinese characters corresponding to the English “there be” and “be”. After analyzing these marked characters (words), we found that 67 of them were irrelevant. The details of the search results are shown in Table 3.

Table 3. The quantity of target Chinese characters: general nouns & pronouns, there be, is/are

Target words                Count (word)  Irrelevant (word)  Relevant (word)
Shan (mountain)             926           43                 883
Mu (tree)                   311           139                172
Hai (sea)                   193           67                 126
Shui (river)                552           7                  545
Shou (beast)                191           7                  184
Niao (bird)                 275           2                  273
Guo (kingdom)               180           7                  173
Total (nouns)               2628          272                2356
Qi (its/that)               1599          7                  1592
Zhi (its/this)              1135          28                 1107
Ci (this)                   48            0                  48
Total (pronouns)            2782          35                 2747
Sheng (accrue/there be)     111           1                  110
You (there be)              754           29                 725
Duo (there be + much/many)  1036          11                 1025
Yu (there be + at/in)       396           4                  392
Zai (there be + at/in)      203           0                  203
Total (there be meaning)    2500          45                 2455
Yue (called/be)             865           2                  863
Shi (be)                    166           4                  162
Wei (be)                    135           13                 122
Ming (named/be)             334           3                  331
Total (be)                  1500          22                 1478
We then continued to search for the figures (cardinal numbers) and nouns of position. In Chinese, the number “ten” is hardly ever used alone in statistical cases, so we did not take it into consideration here, but we did add the Chinese characters “Li” and “You” [12], which refer respectively to the English “kilometer” and “also/as well”. In manual analysis, based on the context, meaning and usage of the words, we removed 115 irrelevant characters (words) from the 2212 words selected by the computer search (Table 4). As for the nouns of position, in addition to common expressions, we took two pairs of antonyms into consideration: “Chu–Ru” and “Yin–Yang” [12], equivalent to the English prepositions “out–in” and “back–front”. This time, the computer marked 2626 words, but 27 of them were irrelevant to the statistical terms (also see Table 4).
Table 4. The quantity of target Chinese characters: figures, nouns of position

Target words               Count (word)  Irrelevant (word)  Relevant (word)
Yi (one)                   241           18                 223
Er (two)                   175           6                  169
San (three)                254           3                  251
Si (four)                  145           24                 121
Wu (five)                  189           8                  181
Liu (six)                  60            5                  55
Qi (seven)                 67            3                  64
Ba (eight)                 75            9                  66
Jiu (nine)                 73            39                 34
Li (kilometer)             495           0                  495
You (and/as well)          438           0                  438
Total (figures etc.)       2212          115                2097
Dong (east)                494           2                  492
Nan (south)                321           2                  319
Xi (west)                  302           2                  300
Bei (north)                369           3                  366
Shang (up/on…)             288           9                  279
Xia (under/below…)         197           2                  195
Chu (out/off…)             351           2                  349
Ru (in/into…)              65            4                  61
Yin (back)                 104           0                  104
Yang (front)               135           1                  134
Total (nouns of position)  2626          27                 2599
Among the whole 26,730 Chinese characters, 13,732 target words were found and counted: the total of the relevant general nouns and pronouns, the characters (words) corresponding to “there be” and “be”, the figures and the nouns of position. It is clear that the statistical terms account for approximately 51.4% of the whole text of Shan Hai Jing (Fig. 2). The high frequency of the statistical terms highlights a unique perspective of composition and narration in ancient Chinese writing: stressing the writer identity. This is a specific narrative orientation, stressing “I experienced” or “I possessed” to the readers. Who is “I”? Of course, it is the writer, the invisible writer. An invisible writer introduces the different creatures and legends to the audience like a tour guide, as if he had experienced or possessed them all. There is a strong host–guest relationship in Shan Hai Jing. With plenty of statistical terms, the host (or writer) shows the readers the creatures that once lived in the vast land and the gorgeous mountains and seas that contributed to and cultivated a great nation, and reveals the legends and history of an ancient empire.
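As a cross-check, the relevant totals from Tables 3 and 4 do sum to the figure behind the 51.4% ratio:

```python
# Relevant counts per category, copied from Tables 3 and 4.
relevant_totals = {
    "general nouns": 2356,
    "pronouns": 2747,
    "'there be' characters": 2455,
    "'be' characters": 1478,
    "figures, Li, You": 2097,
    "nouns of position": 2599,
}
statistical_terms = sum(relevant_totals.values())  # 13,732
total_characters = 26730
print(statistical_terms, f"{statistical_terms / total_characters:.1%}")
```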
Fig. 2. Frequency (ratio) of the statistical terms (words) (statistical terms: 51.4%; other words: 48.6%)
3.3
Summary
Through computer search techniques, we counted the quantities and figured out the ratios of the target pronouns and the key word “Odysseus” in The Odyssey, and of the statistical terms in Shan Hai Jing. The pronouns and the key word “Odysseus” in Homer’s epic account for a high frequency (ratio) of about 52.9%, and the statistical terms (words) in the ancient Chinese descriptive geographical book Shan Hai Jing take a high proportion of about 51.4%. The high ratio of pronouns and “Odysseus” indicates the hero experiencer identity; this kind of narrative orientation stresses “his experience” and “his possession” as well. On the contrary, the high ratio of statistical terms reflects the writer experiencer identity, stressing “my experience” and “my possession”.
4 Conclusion This research counted and analyzed the repetitions of the pronouns and statistical terms in two different but contemporary ancient works. It is an attempt to study literature through computer technology in order to collect data related to the books, adding further trials of computer search techniques and more information for literary interpretation as well. So far, there is still vast space for scholars to use leading and popular computer research techniques, for instance the Sketch Engine or the Wmatrix corpus analysis techniques, to study literature. Future research can take these computer techniques into consideration and obtain more in-depth results. Acknowledgments. This paper is an achievement of the Research Project (2019J1137), subsidized by the Education Department of Yunnan Province.
References 1. Giangrande, L.: Pseudo-, “international,” Olympian and personal peace in Homeric epic. Class. J. 68(1), 1–10 (1972) 2. Donlan, W.: Kin-groups in the Homeric epics. Class. Wkly 101(1), 1–39 (2007) 3. Fracasso, R.: Libro dei monti e dei mari (Shanhai jing): Cosmografia e mitologia nella Cina antica. Marsilio, Venice (1996) 4. Mathieu, R.: Étude sur la mythologie et l’ethnologie de la Chine ancienne. Collège de France, Institut des hautes études chinoises 1, 45–67 (1983) 5. Davydov, A.: “Shan Hai Jing” and “I Ching”: Map of Human Psychophysiological Structure? SPSU Publishing House, St. Petersburg (2013) 6. Fedoruk, V.: Is Shan Hai Jing the Original Catalog of Psychophysiological Human Structure? Philosophy and Human Problem. SPSU Publishing House, St. Petersburg (2014) 7. Sinclair, J.: Corpus, Concordance, Collocation. Oxford University Press, Oxford (1991) 8. Stubbs, M.: British traditions in text analysis: from Firth to Sinclair. In: Baker, M., et al. (eds.) Text and Technology. John Benjamins Publishing Company, Philadelphia (1993) 9. Halliday, M.A.K., Hasan, R.: Language, Context, and Text: Aspects of Language in a Social-Semiotic Perspective. Deakin University, Vic. (1985) 10. Swales, J.: Genre Analysis. Cambridge University Press, Cambridge (1990) 11. Homer: The Odyssey, translated by George Herbert Palmer, edited with an Introduction and Notes by Robert Squillace. Barnes & Noble Books, New York (2003) 12. Liu, X., Liu, X. (eds.): Shan Hai Jing. Kindle electronic edition, Green Apple Data Center, 5 July 2020
The Data Limitations of Artificial Intelligence Algorithms and the Political Ethics Problems Caused by it Kefei Zhang(&) School of Humanities, Tongji University, Shanghai, China [email protected]
Abstract. Artificial intelligence algorithms rely on big data that is large enough in quantity and wide enough in scope. But “big” is only a relative concept; big data is still incomplete at some specific levels. If the limitations of data are strengthened and amplified by artificial intelligence algorithms and elevated to the social level, a series of political and ethical issues will occur. Keywords: Data limitations
Incompleteness Political ethics
1 Introduction Generally speaking, artificial intelligence algorithms rely on their excellent ability to process data and are often more systematic and effective than simple algorithms which derive results from traditional databases. But participants must ensure the integrity of the original data during the collection process, which is the most important prerequisite for an algorithm to achieve relatively objective results. Otherwise, the calculation will start from relatively incomplete data, and under that situation the conclusion will be less objective even if the calculation process is impeccable. The widespread use of artificial intelligence at the social level is actually a double-edged sword [1]. If we can ensure that the big data is true and effective, artificial intelligence can indeed promote the integration of resources and rapid social development, and help us create a fairer and more harmonious social lifestyle. If not, the data layer we use may have its own problems: big data may not give the optimal solution, and may even cause social contradictions, which may lead to a series of political and ethical issues that go against justice.
© The Editor(s) (if applicable) and The Author(s), under exclusive license to Springer Nature Singapore Pte Ltd. 2021
M. Atiquzzaman et al. (Eds.): BDCPS 2020, AISC 1303, pp. 438–443, 2021. https://doi.org/10.1007/978-981-33-4572-0_64

2 The Data of Artificial Intelligence Algorithms Is Incomplete

People rarely believe that big data also has the problem of incompleteness, since artificial intelligence algorithms often emphasize that big data should be large enough in quantity and wide enough in scope. But incomplete big data is not a hypothesis; the limitations of big data have already manifested in current society. If we analyze the
features of big data, we find that the data used by artificial intelligence algorithms is restricted by several factors and cannot be complete under these limitations.

2.1 The Data Is Not Complete in the Original Sense Due to the Collection End
The incompleteness of original data at the collection end is determined by the collection method. Unlike the traditional data model, big data collection does not rely on active surveys; it passively records participants' behaviors. In theory, every member of society can generate and contribute data by participating in digital activities with the help of digital equipment. Unfortunately, this ideal social form does not exist. In reality, not every member of society can participate in and share big data [2]. Globally, some countries are blocked by the "digital divide"; their digital age has not yet arrived. Within the same society, due to economic, cultural and generational differences, members are not equally engaged in digital activities. Economically disadvantaged groups, people with low education levels, and elderly people who are relatively insensitive to digital devices do not actively participate in the intelligent society; they are even shielded from big data [3]. Data involving them cannot be effectively transmitted to the collection end, resulting in incomplete big data. As far as society is concerned, the incompleteness of data at the collection end means that big data cannot fully represent the interests of all members of society, and artificial intelligence cannot overcome this problem through algorithm models.

2.2 The Algorithm Model Causes Incomplete Data at the Usage Level
After the artificial intelligence algorithm is initially established, it can be free from human intervention in the subsequent data learning process and attains a degree of objectivity. But when it is established, the algorithm model is affected by the programmer's value orientation. It is in this sense that Nature once used "Bias In, Bias Out" to describe the phenomenon [4]. In the follow-up learning and training process, the entire learning process of the artificial intelligence seems independent, but it has actually been marked with an artificial bias in value orientation. This preconception determines that the artificial intelligence algorithm selects data only according to its needs; it is impossible for it to use all the characteristics of the data originally collected. Artificial intelligence surpasses humans in logical computing capability, but in value judgment it always remains under the leadership of humans [5]. Artificial intelligence algorithms extract only the data that is relevant under the algorithm's rules — that is, data of a specific dimension rather than data of all dimensions. The data thereby loses a certain sense of integrity.

2.3 The Data Lacks Integrity in the Sense of Fidelity Due to the Interaction Process
After the algorithm is applied, it must handle large numbers of user requests at all times and provide users with reasonable solutions according to the actual situation. In the application process, the algorithm constantly revises itself according to the data
received, in order to conform to the actual situation. In essence, therefore, the application end is an extension of the learning end. Because users hold different values, the applicable scenarios are complicated, and the direction of algorithm modification is difficult to predict. In this process, no single piece of data is central; the authenticity of data is diluted by the mutual influence among data points. This model reduces the impact of individual invalid data, but it also weakens the value and significance of valid data. In the extreme case, if the entire algorithm model is affected by a large amount of false data in the database, the real data will not be able to play its role effectively.

2.4 The Loss of Specific Scenarios Results in Data That Is Incomplete in Meaning
Much data has specific meaning only in a specific environment. Artificial intelligence algorithms cannot take the meaning of data in all scenarios into account even if they have the super computing power provided by supercomputers. In fact, data that needs to be analyzed in specific situations can only be logically classified by artificial intelligence into several different types of scenes in big data operations. Some of the data that artificial intelligence algorithms rely on is thus destined to be abstracted away from the special occasions that gave it meaning. More importantly, in the logically simplified scenario, all data is finally reduced to numerical indicators, further losing its specific meaning in the specific environment.
3 Political and Ethical Issues Arising from This

If a problem can be regarded as a mathematical problem that needs to be solved, the algorithm can be regarded as the equation that solves it: the input data goes through a series of solution steps to obtain the output result [6]. Such an explanation can indeed help us understand the abstract relationship between artificial intelligence algorithms and data. However, once artificial intelligence is used to solve social problems, it must consider the social meaning behind the abstract data. In the actual development process, artificial intelligence focuses on the improvement of algorithms and ignores the limitations of the processed data, inevitably triggering a series of related political and ethical problems.

3.1 Corresponding Individual Rights May Easily Be Forgotten Due to the Incompleteness of Data at the Original Collection End
If there is a problem of data shielding on the data collection side due to reasons such as the digital divide, it means that the needs and wishes of the corresponding disadvantaged groups are not reflected in the final calculation results in the form of any data [3]. Any social activity dominated by artificial intelligence algorithms is then destined to disregard the social interests of those who are already disadvantaged, because even when the interests of these people are violated, the violation cannot be expressed in the results produced by the artificial intelligence algorithm system. Once the government happens to be willing to arrange public services based on the results of big data statistics, the rights of
these people will often be forgotten, just like their data. For example, in business activities, the online discount model excludes from the outset the rights of people who do not shop with digital tools. During the fight against the epidemic, when the government used the statistical results of big data on the Internet to understand public opinion, urban residents who are better at using electronic products expressed more demands than remote rural residents, whose rights are forgotten because their data is difficult to capture. More worryingly, in the short term they are unable to express their demands through the data system, and in the long run their losses cannot be measured effectively. If mainstream society believes that the big data processed by artificial intelligence computing power represents everything, these people may be forgotten permanently, and their rights will always be damaged.

3.2 The Incompleteness of Data Usage Can Easily Form the Monopolistic Benefits of Interest Groups
The limited use of data caused by the algorithm model is quite similar to the way a specific interest group defends itself. In order to gain benefits, interest groups will use evidence that favors them and ignore evidence that has nothing to do with their own interests [7]. If the algorithm model is built in accordance with the intention of a specific interest group, that group can use the algorithm and big data to defend its own interests in a more deceptive way. Even more worthy of attention, the combination of capital and artificial intelligence may form a monopolistic business model. The design, training and iteration of artificial intelligence require substantial manpower and material resources; not everyone has the ability to get involved. Therefore, good algorithms are owned by powerful companies or institutions. In market competition, large companies will use their own algorithms to identify and crack down on competitors, especially weaker ones, thereby forming industry monopolies.

3.3 Incompleteness in the Sense of Data Fidelity Deprives Truth of Its Inherent Meaning
The problem of data distortion caused by interaction can easily obliterate the public's originally strong desire to pursue the truth, making public opinion more easily manipulated by technology. In traditional society, the transmission of public opinion is a linear process, like dominoes; artificial intelligence algorithms based on big data have basically subverted this model. Vast numbers of statements can be expressed simultaneously in an instant and visualized into digital conclusions in the form of big data statistics, pressing people to believe that the digital statistics represent the truth. The truth of a matter is thus no longer established through cautious verification but through surrender to whichever discourse is dominant in quantity. The "internet water army" of paid posters understands and exploits this effect most adeptly: by planting a large number of false statements to dilute the expression that was originally about a certain truth, in the end,
as long as the number of rumors is sufficient, they spread more easily than the truth. Interest in the truth is generally weakened, which means that online public opinion becomes easier to manipulate.

3.4 The Incompleteness of Data Stripped of Specific Scenarios Can Easily Lead to Technical Bureaucracy
Some data clearly loses its situational meaning yet is still fed into the calculation process, so the output obviously no longer represents the optimal result. In this sense the whole process of artificial intelligence operates as a black box [8]. Because the program cannot be refuted, no individual has the ability to participate in inspecting and correcting artificial intelligence calculation results. Once applied to public decision-making and administrative management, this will inevitably lead to a new bureaucratic system supported by technology, under which individualized protest becomes even weaker.
4 The General Direction to Solve the Problem

4.1 Respect the Principle of Justice and Ensure the Interests of Disadvantaged Groups Ignored by Big Data

Technological development will not benefit everyone equally. We cannot give up technological development and hinder social progress because a small number of people may become disadvantaged, nor allow the interests of a minority to be damaged in order to promote the overall progress of society [9]. The government must assume the necessary responsibility for caring for vulnerable groups. On the one hand, the government should provide the necessary counseling to help vulnerable groups access the big data system; on the other hand, necessary service windows should be kept so that people who are genuinely unable to access the big data system can still protect their basic rights and interests.

4.2 Adhere to the Principle of Fairness and Eliminate the Monopoly of Interests Based on Technology
The development of artificial intelligence programs should be constrained by a series of principles to ensure fairness, and the artificial intelligence platforms used by commercial companies should also be subject to antitrust laws. Combinations of technology and capital that attempt to exclude competitors must be avoided, so as to ensure that the development of these technical programs does not violate business ethics.

4.3 Pay Attention to Facts and Prevent Public Opinion from Being Manipulated Through Artificial Intelligence
The entire society is a macroscopic digital system; the less garbage data it contains, the better for social order and the overall interest. An accountability system should be established for online statements. When a data provider claims to tell the truth, he should ensure that what
he said is true; otherwise he should be held responsible for misleading the entire data system and causing confusion.

4.4 Accept Supervision from Public Opinion and Prevent Data Bureaucracy
The government should not be satisfied with the gains in administrative efficiency brought by new technologies; it should be aware that new forms of bureaucracy may arise within the data processing system [10]. While introducing big data to improve efficiency in administrative reform, authorities should reflect on the process to avoid the risk of being misled. We must be wary of the risk that this new administrative method, which relies on artificial intelligence and big data, may lose touch with the real society. No matter how technology develops, people are fundamental. Authorities should go to the grassroots, listen to public opinion, and carry out reform in a timely manner once problems are identified.
References

1. Castells, M.: The Power of Identity, p. 354. Blackwell, Oxford (2003)
2. Goodman, B.: Discrimination, data sanitisation and auditing in the European Union's general data protection regulation. Eur. Data Prot. Law Rev. 2(4), 413–414 (2016)
3. Zhang, Y., Qin, Z., Xiao, L.: The discriminatory nature of big data algorithms. Nat. Dialect. Res. 5, 81–86 (2017)
4. Topol, E.J.: More accountability for big-data algorithms. Nature 537(7621), 449 (2016)
5. Liu, P.: Construction of an open information resource sharing platform under the environment of big data mining. Electron. Technol. Softw. Eng. 18, 151 (2018)
6. Shaffer, C.A.: A Practical Introduction to Data Structures and Algorithm Analysis, 3rd edn, p. 18 (2010). https://people.cs.vt.edu/~shaffer/Book/C++3e20100119.pdf
7. Zhou, Y.: My country urgently needs to establish an artificial intelligence algorithm review mechanism. China Comput. News 012, 11–19 (2018)
8. Wang, S.: The model of China's public policy agenda setting. Chin. Soc. Sci. (9) (2006)
9. Dong, L., Wang, S.: The changes of western administrative ethics under the division of tool-value rationality. Chin. Adm. 1, 114–115 (2014)
10. Yu, K.: Governance and Good Governance, pp. 5–7. Social Sciences Literature Press, Beijing (2000)
The Development Strategy Research of Higher Education Management from the Perspective of "Internet +"

Ziyu Zhou
Sichuan University Jinjiang College, Meishan, Sichuan, China
[email protected]
Abstract. As mankind enters the age of network information, profound changes have occurred in the working environment and mechanisms of college student management. Big data has become an important factor and technical environment for colleges and universities in implementing the fundamental task of "building up people by virtue". How to adapt to the development of the big data era and innovate the education and management of college students is a mission posed by the new era. Research on college education management under the worldwide trend of "Internet +" higher education management reflects the characteristics of the times and has dual significance in theory and practice. Educational thoughts such as constructivism, humanism, holistic theory, and flattening theory all have an impact on higher education. This article analyzes the problems in China's higher education management and studies development strategies for higher education management from the "Internet +" perspective to promote the development of education management.

Keywords: "Internet +" vision · Higher education · Education management
1 Introduction

In the "Internet +" era, information affects the entire society, and higher education management is naturally affected as well. In line with the requirements of Chinese education policy, China's higher education reform is in progress. Education reform covers a wide range of fields, and education management, as an important part of the education system, is included in the reform. Research on higher education management against the background of "Internet +" conforms to the trend of higher education reform, reflects the characteristics of the times, and has dual significance in theory and practice [1].
© The Editor(s) (if applicable) and The Author(s), under exclusive license to Springer Nature Singapore Pte Ltd. 2021 M. Atiquzzaman et al. (Eds.): BDCPS 2020, AISC 1303, pp. 444–450, 2021. https://doi.org/10.1007/978-981-33-4572-0_65
2 Problems in China's Higher Education Management System

As China's reforms continue to deepen, the state vigorously supports the reform of the education management system, and the reform of China's higher education management system has achieved certain results. In achieving these results, universities have also gained greater autonomy in innovation and development [2]. However, affected and restricted by various factors, many problems remain in China's education management system during the reform process.

2.1 Affected by Traditional Ideas and Concepts, the Overall Effect of Higher Education Management Reform Is Not Good
At present, constrained by China's long-standing traditional concepts, the reform of the higher education management system has developed slowly, its reform thinking remains relatively conservative, and the overall effect of the reform has been minimal [3]. At the same time, the restrictions imposed by government agencies on the reform of the higher education management system have to a large extent hindered the management of universities. As a result, universities still face many problems in specific management work and lack a planning system suited to their own development.

2.2 Lacking Clear and Effective Reference Standards, the Management System Reform Cannot Be Comprehensively Improved
At present, international education reform is showing a new situation. Under this situation, in order to further adjust the education management system, relevant Chinese departments have issued various guidelines and policies. At the same time, China has entered a market economy period, and the country is paying more and more attention to the reform of the education system [4]. However, in some cases, existing higher education management systems and institutions have also imposed certain restrictions on operational reform.

2.3 Colleges and Universities Rely on the Help of Government Departments and Fail to Fully Play Their Role
At present, each department in China has its own way of doing things, which leads to a lack of initiative in the reform process; the process depends largely on the adjustment of government policies. Even as the education management system continues to improve, colleges and universities are unable to play their full role, their scope of jurisdiction has not been expanded, and the reform of the management system has been greatly affected [5].
3 The Influence of Various Educational Thoughts on Higher Education Management in the "Internet +" Era

"Internet +" can be interpreted as follows: big data, information and data are like a network that connects everyone together, and everyone's data and information are on this network. In essence, "Internet +" connects a given industry more closely with the Internet. All successful "Internet +" cases belong to the era of big data and find ways to creatively mine and apply the data and information of their industry, as shown in Fig. 1 [5].
Fig. 1. Internet + university education management system
In the information age, the Internet has brought the entire world closer, and various educational theories influence one another. Under such circumstances, studying the educational thoughts that affect higher education management has strong pertinence and timeliness.

3.1 Constructivist Education Theory
Constructivist education theory originated in the West, and its most important representative is Piaget. Piaget's educational theory involves many elements, the core concept being the "schema". The learner's original psychological structure and knowledge framework constitute his own schema [6]. In the process of learning, learners continually receive new knowledge, and the original schema is gradually changed, reconstructed, and stabilized again. Compared with other educational theories, constructivism focuses on the acceptance and development of the individual [6].
3.2 Humanistic Education Theory
The representative figures of humanistic education theory are Maslow and Rogers. They emphasize a people-oriented teaching process and the initiative of students in learning. Humanism holds that the essence of learning is that learners acquire knowledge and skills, develop intelligence, explore their own emotions, and realize their potential. It is not limited to explaining isolated behaviors but extends to the learner's entire growth process, an explanation oriented toward holistic education [4].

3.3 Holistic Education Theory
The theory of holistic education is an educational thought formed on the basis of fierce criticism of traditional educational goals. It opposes putting instrumental goals above personal development goals and believes that personal development should take precedence over social needs. Therefore, curriculum design and the cultivation of the teacher-student relationship in the teaching process should be built around the goal of cultivating complete people [3]. When it comes to applying holistic education theory in education management, we must start with cultivating holistic teachers.

3.4 Flat Learning Theory
Strictly speaking, flat learning theory is not an independent educational thought; it is a notion derived from the field of management. Past management systems often adopted a hierarchical structure of upper and lower levels [6]. Flat management reduces the management layers, realizes "face-to-face" direct management, and greatly improves management efficiency. Introducing such a management model into education forms flat learning: learning is parallel and juxtaposed, without subordination relationships, and can be carried out simultaneously [7].
4 Strategies of Higher Education Management Under the "Internet +" Background

4.1 Establish an Education Management System that Meets Actual Development Needs

At present, higher education management faces huge challenges and higher requirements, and new changes have emerged in the development of education [8]. It is important to further improve the efficiency of educational management in domestic colleges: adjust their ideology, transform the more traditional educational concepts of the past, clarify the goals of the educational management system reform, and build an education management system platform that meets actual development needs, as shown in Fig. 2. At the same time, we should deeply understand and explore our traditional values and cultural concepts, to ensure that the reform
of the higher education management system is carried out in a good cultural atmosphere, and to promote and develop China's excellent traditional culture [8].
Fig. 2. Internet + campus management platform
4.2 Enhancing the Autonomy of Higher Education Management and the Innovation Ability of the Education Management System
In the actual management process, China's higher education management lacks a certain degree of autonomy [9]. On the one hand, this is due to excessive intervention by national government departments; on the other hand, the government's planning and control of higher education management are too specific. To a certain extent, this not only hinders the reform of the education management system but also prevents universities from giving full play to their independent innovation capabilities [9]. Therefore, in order to seek innovative reform methods and policies more autonomously, the teaching management model should be changed in time when necessary, making the innovation of the higher education management system more comprehensive.

4.3 Improve the Innovation of the Internal Management System of Universities and Enhance the Educational Strength of Universities
In education management reform, the core is to improve the innovativeness of the internal education management system of universities [10]. First of all, in
order to ensure a clear definition of responsibilities at all levels, colleges and universities should divide internal administrative organs into scientific and reasonable tiers and assign the division of labor according to their actual functions. Secondly, in reforming their management systems, colleges and universities should improve their educational strength according to their own development needs, combining their own school-running characteristics, aiming at actual talent training, and drawing on their unique regional characteristics [10].
5 Conclusion

In summary, current Chinese higher education management authority lacks clear boundaries. When reforming the education management system, a comprehensive development concept should be established to promote the coordinated development of the relationships among schools, society and government departments. At the same time, education development still lacks complete laws and regulations. Therefore, it is necessary to continuously improve the internal management systems of colleges and strive to realize the autonomy of higher education management, so as to better promote the reform of college management systems.
References 1. Sun, J.: A brief talk on the enlightenment of foreign higher education management systems to Chinese higher education management. J. Jilin Province Educ. Coll. 5(09), 40–41 (2014) (in Chinese) 2. Chen, Q.: Research on the reform of adult higher education management system based on mutual recognition of learning achievements. J. Hebei Normal Univ. 9, 75–80 (2012) (in Chinese) 3. Ma, Y., Fan, Q.: The enlightenment of American higher education management system to China’s higher education reform. J. China Univ. Petrol. 15(04), 105–108 (2014) (in Chinese) 4. Yu, Y.: Research on the professional growth of adult college teachers in the context of Internet +. Adult Educ. 8(05), 80–82 (2017) (in Chinese) 5. Chen, J., Lu, G.: Research on higher education management under the background of “Internet +.” Educ. Teach. Res. 11(02), 129–130 (2009) (in Chinese) 6. Zhang, X., Wang, W.: Research on the construction of comprehensive evaluation system for college students in the “Internet +” era. J. Zhoukou Normal Univ. 14(02), 120–122 (2018) (in Chinese) 7. Xiong, Y., Wu, C.: Investigation and research on the process of marxism popularization in colleges and universities based on big data thinking method. Educ. Teach. Res. 9(08), 14–17 (2018) (in Chinese) 8. Ma, J., Li, C., Wang, S.: An analysis of the theoretical approach of college counselors in the new era from the perspective of “big data.” Coll. Counselor 7(02), 9–11 (2018) (in Chinese)
9. Sun, J.: A brief talk on the enlightenment of foreign higher education management systems to the management of Chinese universities. J. Jilin Province Educ. Coll. 22(09), 40–42 (2018) (in Chinese) 10. Chen, Q.: Research on the reform of adult higher education management system based on mutual recognition of learning achievements. J. Hebei Normal Univ. 9(03), 75–78 (2017) (in Chinese)
Study on Data Transmission of Low Voltage Electrical Equipment Based on MQTT Protocol

Xueyu Han
State Grid Information and Telecommunication Group Co., Ltd., Beijing, China
[email protected]
Abstract. In large-scale machine-to-machine (M2M) communication in the IoT, the request/response mode applied by traditional network systems is no longer suitable because of its high demands on network performance and its poor scalability. With its weak coupling and asynchronous communication, the publish/subscribe mode has become an important paradigm for the new generation of large distributed application systems. This article summarizes the research status of publish/subscribe systems. For the application scenario of data transmission from low-voltage electrical equipment, this paper discusses how to realize a data transmission system based on Message Queuing Telemetry Transport (MQTT) technology. Analysis and simulation of the MQTT protocol's characteristics verify that the publish/subscribe model can effectively organize and distribute information in a mobile computing environment.

Keywords: Publish/subscribe · IoT · MQTT · Low-voltage electrical installations
1 Introduction

With the continuous improvement of the low-voltage side monitoring system of the power distribution network, many problems have emerged, such as the large number of types of power data acquisition equipment, non-universal communication protocols, and poor anti-interference performance. The intelligent and standardized construction of low-voltage electrical equipment has therefore become a research hotspot. The state grid equipment department promotes the development of applications related to the intelligent distribution transformer supervisory terminal unit, and gradually adds IoT-based control functions for substation terminals at the side of the distribution master station. Among wired communication modes, power line carrier (PLC) communication currently suits most low-voltage electrical equipment. However, limited frequency resources and the inflexibility of fixed communication lines restrict the convenience of data transmission in practical applications. Wireless communication modes, in turn, can be classified by the four-layer network protocol stack.

© The Editor(s) (if applicable) and The Author(s), under exclusive license to Springer Nature Singapore Pte Ltd. 2021
M. Atiquzzaman et al. (Eds.): BDCPS 2020, AISC 1303, pp. 451–458, 2021. https://doi.org/10.1007/978-981-33-4572-0_66

In
the physical layer, LoRa, ZigBee, WiFi, Bluetooth and other technologies are applied in different scenarios. Although these technologies differ in networking and deployment modes, they share the same upper-layer application protocol. At the application layer, the HTTP protocol, based on the request/response pattern, places high requirements on the stability of the communication network and offers weak scalability and only moderate flexibility. Moreover, HTTP mostly uses short-lived connections: for each data transfer, TCP must go through multiple handshakes to establish and tear down the connection, which wastes time and bandwidth and generates substantial protocol overhead [1–3]. In this paper, a data transmission model based on the MQTT protocol is designed. It supports long-distance wireless data transmission over 4G communication, improves data acquisition and transmission efficiency, and promotes the intelligent construction of the low-voltage distribution Internet of Things.
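To make the contrast concrete, the following sketch shows how a device-side reading could be prepared for MQTT transmission over one long-lived connection, avoiding HTTP's per-request handshakes. This is an illustration only, not the paper's implementation: the paho-mqtt client library, the broker address `broker.example.com`, the topic layout, and the payload field names are all assumptions introduced here.

```python
import json
import time

def telemetry_topic(station_id: str, device_id: str) -> str:
    # Hierarchical topic string; the "lv/<station>/<device>/telemetry"
    # layout is a hypothetical convention, not from the paper.
    return f"lv/{station_id}/{device_id}/telemetry"

def telemetry_payload(voltage: float, current: float) -> bytes:
    # Compact JSON keeps per-message overhead small, in line with
    # MQTT's low-bandwidth goals.
    return json.dumps({"u": voltage, "i": current,
                       "ts": int(time.time())}).encode()

# With a reachable broker, a device would open one long-lived connection
# and reuse it for every reading (sketch; requires the paho-mqtt package):
#
#   import paho.mqtt.client as mqtt
#   client = mqtt.Client(client_id="lv-meter-042")
#   client.connect("broker.example.com", 1883, keepalive=60)
#   client.loop_start()
#   client.publish(telemetry_topic("station7", "meter42"),
#                  telemetry_payload(228.5, 13.2), qos=1)
```

Because the connection persists, each additional reading costs only the PUBLISH packet itself rather than a fresh round of TCP handshakes, which is exactly the overhead the text attributes to short-lived HTTP requests.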
2 MQTT Overview

The publish/subscribe pattern is a scheduling and forwarding pattern built around a middleware component, the broker, through which every participant exchanges information. A complete publish/subscribe system has three parts: clients, a server and events. An event is the general name for any piece of information the participants exchange. Clients comprise publishers and subscribers: a publisher produces information events and a subscriber consumes them. Because events in the system are diverse, the publisher of one event can also be a subscriber of another, so both roles are collectively referred to as clients. The server side uses the broker as middleware [4]. As shown in Fig. 1, publishers publish events to the broker to share information, and subscribers send subscription conditions to the broker to receive the events they are interested in. Once the broker matches an incoming event against the stored subscription conditions, it immediately forwards the corresponding published events to the matching subscribers. When subscribers are no longer interested in an event, they can unsubscribe at the broker [5]. In a real publish/subscribe system there can be multiple event brokers, each serving multiple clients. The performance of such a system is evaluated mainly by the expressiveness of its event data structures, the response time of the matching algorithms in the broker, the scalability of the system, and the quality of service provided [6]. The MQTT protocol is characterized by a small code footprint, low bandwidth consumption and low latency. It supports clients in multiple programming languages, long-lived connections and fast outage recovery, and is widely used in the IoT, small embedded terminals, mobile applications and constrained environments such as M2M communication.
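The broker-centred matching loop described above can be sketched in a few lines of Python. The Broker class, topic names and callback signature here are illustrative only, not part of the paper's system or of any real MQTT library; a real broker decouples participants over the network rather than via direct function calls.

```python
# Minimal in-memory sketch of a broker-centred publish/subscribe system.
from collections import defaultdict

class Broker:
    """Matches published events against stored subscription conditions."""
    def __init__(self):
        self._subscriptions = defaultdict(list)  # topic -> [callback, ...]

    def subscribe(self, topic, callback):
        self._subscriptions[topic].append(callback)

    def unsubscribe(self, topic, callback):
        self._subscriptions[topic].remove(callback)

    def publish(self, topic, payload):
        # Forward the event to every subscriber whose condition matches.
        for callback in self._subscriptions.get(topic, []):
            callback(topic, payload)

broker = Broker()
received = []
broker.subscribe("meter/voltage", lambda t, p: received.append((t, p)))
broker.publish("meter/voltage", "231.5 V")   # delivered to the subscriber
broker.publish("meter/current", "4.2 A")     # no subscription condition matches
```

Note that the publisher never learns who consumed the event; this decoupling is what lets one client act as publisher for one topic and subscriber for another.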
Because events in the publish/subscribe mode are independent of device addresses, the payload content is masked (treated as opaque) during message transmission. Communication between clients and the broker follows the publish/subscribe mode, reflected mainly in sequential, lossless, bidirectional byte-stream transmission on the underlying network after establishing
Study on Data Transmission of Low Voltage Electrical Equipment
Fig. 1. Concept of the publish/subscribe system [7]
a connection between them [8]. A message transmitted by the protocol consists of two parts: a topic and a payload. Subscribers subscribe to a message type (topic) at the broker and receive the message content of that topic whenever the server finds a match.
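The topic matching just described is defined over hierarchical topic names with the `+` (single-level) and `#` (multi-level) wildcards of the MQTT specification. A minimal sketch of the matching rule (edge cases such as `$`-prefixed topics are omitted):

```python
def topic_matches(filter_, topic):
    """Return True if an MQTT topic filter matches a concrete topic name."""
    f_levels = filter_.split("/")
    t_levels = topic.split("/")
    for i, f in enumerate(f_levels):
        if f == "#":                       # multi-level wildcard: rest matches
            return True
        if i >= len(t_levels):             # topic ran out of levels
            return False
        if f != "+" and f != t_levels[i]:  # '+' matches any single level
            return False
    return len(f_levels) == len(t_levels)
```

So a subscription to `sensor/+/voltage` receives `sensor/t1/voltage` but not `sensor/t1/current`, while `sensor/#` receives everything below `sensor`.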
3 System Overview

MQTT is an open protocol, and many IoT messaging systems implement its message publishing and subscription functions to some extent. Figure 2 shows the flow of data transmitted through the MQTT protocol.
Fig. 2. Data transmission system based on MQTT protocol
Following object-oriented design principles, the overall framework of the system adopts a C/S architecture, divided into an MQTT broker and MQTT clients. To build the push function, the client-side messaging code constructs a client object with default parameters, then sets the user name, password, heartbeat (keep-alive) interval, and a callback listener that keeps listening for asynchronous events. The system connects to the message broker through the protocol's connect action. Once connected to the broker, the client can set the publication topic, its corresponding payload and the quality of service [9, 10]. The MQTT broker is implemented with the Mosquitto project, an open-source message broker for the MQTT protocol. It is suitable for low-power devices on all network types and provides the mosquitto_sub and mosquitto_pub commands. The broker stores all received messages in a linked list and references each message by visiting the corresponding node of the list.
An MQTT client can be both a message subscriber and a publisher. To distinguish the two clients, the MQTT1 client uses the Eclipse Paho project to realize a computer client, and the MQTT2 client runs on a handheld device. The MQTT1 client is a Windows client running the Eclipse Paho application, which provides basic functions such as TCP connect, disconnect, send, receive, and encapsulation and parsing of the MQTT protocol. The MQTT2 client is Android software developed with Android Studio, integrating the MQTT protocol on a virtual machine. Since the software is easy to integrate, the MQTT framework package, its dependencies, and the required services and permissions can simply be added.
4 Implementation Procedure

The implementation of data transfer mainly includes configuration-file parameter setting, custom callback functions, and methods declaring message publication and subscription. As shown in Fig. 3, the system first performs initialization, then the broker and the clients maintain a long connection through the MQTT protocol; the publication or subscription function can be selected to realize instant message pushing, and the corresponding callback function returns after publication and subscription.
Fig. 3. Flow chart of the overall system implementation
To implement the MQTT-based data transmission system, the following test environment is designed: a message broker server running the Mosquitto 1.6.8 broker on an Ubuntu 18.04.3 virtual machine; a subscribing computer running the Eclipse Paho 1.0.2 project under Windows 10, mainly responsible for the message subscription side; and a virtual phone created with Android Studio 3.0.1, responsible for the message publishing side.
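A broker environment of this kind is typically driven by a small mosquitto.conf. The fragment below is only an illustrative sketch of the directive families discussed in this paper (listening, security, persistence); the port, file paths and password file are example values, not the paper's actual configuration.

```conf
# Illustrative mosquitto.conf fragment (example values only)
listener 1883                          # server listening configuration
allow_anonymous false                  # security: require credentials
password_file /etc/mosquitto/passwd
persistence true                       # keep retained/queued messages on disk
persistence_location /var/lib/mosquitto/
```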
4.1 MQTT Broker Design
After the Mosquitto project is installed and configured on Ubuntu, the MQTT broker is essentially built, with the basic functions of starting the broker, managing passwords, and publishing and subscribing to messages. Enabling the MQTT broker side requires only a simple setup. First, the package manager must be updated to refresh the Ubuntu repositories before installing software on the Linux system; the broker's running status is then checked from the command line to confirm that the broker service has been activated. In the default configuration the software starts at boot, and it can be launched manually from the command line if it has not been activated. The MQTT broker-side functionality can be validated through local testing, and specific requirements of the actual application can be met by changing the default configuration. The main parameter types in the configuration file include general configuration, server listening configuration, persistence, security and bridging.

4.2 Android Client Design
Fig. 4. Android client creation flowchart
The project uses an Android virtual machine as the client's visual interface, establishes a long connection with the Mosquitto broker, and then publishes messages. The design process is shown in Fig. 4. First, the intrinsic parameters are configured, including basic settings such as user name, password, heartbeat interval and default reconnect. For the server address, the Mosquitto broker address must be found and added manually in the project's main class, MainActivity.java, where the client ID must be unique, otherwise the conflict will cause the connection to fail. After the initialization is complete, the custom callback listens for information sent by the server: the connectionLost method is invoked for reconnection when connection establishment fails or the connection is broken, and the deliveryComplete method is invoked when a message has been published, indicating that publication is complete. Since the connection is made asynchronously, it is invoked by setting up a private variable, mqttConnectOptions. If the connection fails, the callback is used to reconnect automatically once the system refresh interval elapses. And when the connection is
established successfully, a message can be published containing the topic, content and quality of service.

4.3 Windows Client Design
The project uses a Windows computer as the other client, establishes a long connection with the Mosquitto broker, and carries out message subscription. The Eclipse Paho software realizes the subscription function, whose construction principle is similar to the Android client implementation. The subscription logic is realized through a callback function by implementing the messageArrived method: once the subscription request has been sent to the broker and acknowledged, the relevant topic name and message content are printed whenever a subscribed message arrives.
5 Implementation Results

5.1 System Implementation Results
Fig. 5. Data transmission system test based on MQTT protocol
Figure 5 shows the test of the data transmission system based on the MQTT protocol. The Android client, as the information publisher, sends topics and corresponding contents to the broker. The broker starts the mosquitto program, loads the configuration file to initialize the MQTT service, and then keeps listening to the subscriber and publisher ends through the port. The Windows client, as the information subscriber, subscribes to topics at the broker. To further verify the performance of the MQTT-based publish/subscribe mechanism, the control messages are captured with the Wireshark network packet analyzer. Figure 6 shows the PUBLISH packet of an information event issued by the broker; the message is displayed in hexadecimal, and the leading bits of the first byte correspond to the message type in the fixed header of the protocol message. The format of the packet in the figure conforms to the MQTT protocol specification, which can realize
the instant release of messages. Experimental verification shows no packet loss during large-scale packet transmission based on the MQTT protocol.
Fig. 6. Captured PUBLISH packet
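The bytes analyzed in the capture can be reproduced by hand. Under MQTT 3.1.1, a QoS 0 PUBLISH packet is the fixed-header byte 0x30 (message type 3 in the high nibble), a variable-length "remaining length", a length-prefixed topic, and the payload. The topic and payload below are arbitrary examples, not the paper's actual traffic:

```python
def encode_remaining_length(n):
    """MQTT variable-length integer: 7 bits per byte, MSB = continuation."""
    out = bytearray()
    while True:
        byte, n = n % 128, n // 128
        out.append((byte | 0x80) if n else byte)
        if n == 0:
            return bytes(out)

def publish_packet(topic, payload):
    """QoS 0 PUBLISH control packet per MQTT 3.1.1."""
    t = topic.encode("utf-8")
    body = len(t).to_bytes(2, "big") + t + payload  # 2-byte topic length prefix
    return bytes([0x30]) + encode_remaining_length(len(body)) + body

pkt = publish_packet("test", b"hi")
# pkt[0] >> 4 == 3, the PUBLISH message type seen in the fixed header
```

Decoding the high nibble of the first captured byte this way is exactly how Wireshark labels the packet as PUBLISH.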
5.2 Practical Application
The data transmission system for low-voltage electrical equipment based on the MQTT protocol has been applied, as part of an industrial energy-consumption gateway, in embedded hardware systems such as intelligent terminals. Figure 7 shows part of the printed information of AC sampling data transmission collected by an intelligent terminal in the low-voltage distribution system. With mosquitto as the broker, the collected power information is written to the intelligent terminal's register database at publication time, and at subscription time the desired acquisition information is read back from that register database.
Fig. 7. Application of MQTT protocol in intelligent terminal
6 Conclusion

With the increasing demand for information sharing and the expanding range of mobile applications on small devices, it is often necessary to exchange data between different software systems. The analysis shows that the MQTT protocol can effectively deliver event notifications and share information according to customer demand. The low-voltage electrical equipment data transmission system built on the MQTT protocol implements instant MQTT message delivery and exhibited no packet loss in multiple tests; it is effective and worth popularizing and applying.
References

1. Bai, X.-F., Shi, C.-K., Guan, S.-L., et al.: Design and application of communication protocol detection system for distribution terminal. Power Inf. Commun. Technol. 10, 13–19 (2019)
2. Lin, L.-D., Wang, C.-H.: Research on real-time publish and subscribe system and its related technologies. Fujian Comput. 26(1), 35–36 (2010)
3. Jiang, N., Zhang, Y., Zhao, Z.-J.: Push system based on telemetry transmission of message queue. Comput. Eng. 41(9), 7–12 (2015)
4. Eugster, P.T., Felber, P.A., Guerraoui, R., et al.: The many faces of publish/subscribe. ACM Comput. Surv. 35(2), 114–131 (2003)
5. Thangavel, D., Ma, X., Valera, A.C., et al.: Performance evaluation of MQTT and CoAP via a common middleware. In: 2014 IEEE Ninth International Conference on Intelligent Sensors, Sensor Networks and Information Processing. IEEE (2014)
6. Ma, J.-G., Huang, T., et al.: Core technology of publish and subscribe system for large-scale distributed computing. J. Softw. 1, 17–19 (2006)
7. Hu, C., Luo, D.-H., Tong, H.: Industrial energy gateway system design based on the integration of Modbus and MQTT. Technol. Internet Things 9(4), 49–54 (2019)
8. Liu, T., Li, G., Meng, Y., et al.: Research and application of Internet of things modules in smart electricity meters. Electric Power Inf. Commun. Technol. 11, 63–69 (2019)
9. Cheng, W.-B.: Construction of Internet of things message push system based on MQTT protocol. Inf. Comput. (Theoret. Ed.) 18, 23–27 (2019)
10. Sun, J.-P., Sheng, W.-X., Wang, S.-A.: Research and implementation of real-time publisher/subscriber model based on Ethernet. J. Xi'an Jiaotong Univ. 36(12), 1299–1302 (2002)
Residual Waste Quality Detection Method Based on Gaussian-YOLOv3

Zhigang Zhang1, Xiang Zhao1, Ou Zhang2, Guangjie Fu2, Yu Xie4, and Caixi Liu3,4(✉)

1 Shanghai Environment Group Co., Ltd., Shanghai 200001, China
2 Shanghai Environmental Logistics Co., Ltd., Shanghai 200063, China
3 Bai-Tech (Shanghai) Industrial Technology Co., Ltd., Shanghai 200000, China
4 Shanghai Baosight Software Co., Ltd., Shanghai 200203, China
[email protected]
Abstract. In the process of garbage collection, the water contained in residual waste directly affects the collection process, so detecting the water flow in residual waste at the garbage transfer station is of great guiding significance for garbage disposal. In this paper, the Gaussian-YOLOv3 algorithm, which offers high accuracy and real-time performance, is used to identify and detect the water flow during the dumping of residual waste, and the classification quality of the residual waste is determined from the recognition results. The experimental results show that the residual waste quality detection method based on the Gaussian-YOLOv3 algorithm can accurately identify the amount of water flow during dumping. At the same time, a back-annotation and retraining method significantly reduces the model's false recognition of water-flow-like residual waste in complex environments. The false recognition rate satisfies the practical needs of residual waste water flow identification and improves the efficiency of residual waste classification quality determination.

Keywords: Deep learning · Target detection · Gaussian-YOLOv3 · Waste quality recognition
1 Introduction

As early as the 1970s, many countries, such as Germany, the United States, Canada and Switzerland, began the classification and recycling of municipal solid waste. In solid waste classification, developed Western countries have accumulated more than half a century of research and experience; relatively complete classification systems have been formed and remarkable results achieved, which have important reference significance for the classification of domestic waste in China. With economic development, China's solid waste is increasing day by day, putting greater pressure on the sustainable development of resources, the environment and the economy [1]. In 2019, Shanghai formally implemented the "Regulations of Shanghai Municipal Solid Waste Management", which compulsorily promotes waste classification through law and generally implements a waste classification system

© The Editor(s) (if applicable) and The Author(s), under exclusive license to Springer Nature Singapore Pte Ltd. 2021. M. Atiquzzaman et al. (Eds.): BDCPS 2020, AISC 1303, pp. 459–469, 2021. https://doi.org/10.1007/978-981-33-4572-0_67
in residential areas. The quality of waste classification directly affects the recycling and disposal of residual waste. At present, garbage transfer stations mainly judge the quality of waste classification manually, with the disadvantages of high cost, low efficiency and a high false-detection rate. Traditional machine vision technology has been applied to simple garbage type identification [2–10]. Sudha et al. [7] proposed an automatic identification system for garbage classification. The system divides garbage into three categories, namely metal, residual waste and kitchen waste, with a simple micro-controller forming the core of the system. It integrates deep learning algorithms from image recognition technology, which can identify objects and classify the three types of garbage almost accurately. The main disadvantage of this system is that identification takes a long time, and the garbage must be no larger than the funnel, so much garbage cannot be classified effectively; for example, electronic, sanitary and medical garbage cannot be handled by this system. George et al. [8] used machine learning to automatically identify and classify garbage with two popular algorithms, the Convolutional Neural Network (CNN) and the Support Vector Machine (SVM). Each algorithm creates a classifier that divides garbage into three categories: plastic, paper and metal. The accuracy of the two classifiers was compared in order to select the better one to integrate into a controlled mechanical system that guides the waste from its initial position to the corresponding container. The comparison found that the classification accuracy of the SVM reached 94.8%, while that of the CNN was only 83%; the SVM also showed particular adaptability to different types of waste. Chen et al.
[9] proposed a robot grasping system based on machine vision to classify garbage automatically. Using a manipulator, the system realizes the identification, positioning and automatic picking and sorting of garbage against complex backgrounds. They use deep learning methods to recognize target objects reliably in complex backgrounds. To achieve accurate grasping of the target object, they applied the Region Proposal Network (RPN) algorithm and the VGG-16 model to object recognition and localization. The machine vision system sends the geometric center coordinates and the angle of the long side of the target object to the manipulator to complete classification and grasping. The results show that the system's vision algorithm and manipulator control method can effectively realize waste classification. Chu et al. [10] proposed a Multi-layer Hybrid-learning System (MHS) to automatically classify garbage in urban public areas. The system uses high-resolution cameras to capture images of discarded items and sensors to detect other useful feature information. MHS uses CNN-based algorithms to extract image features and a Multi-Layer Perceptron (MLP) to combine image features with the other feature information, classifying garbage as recyclable or non-recyclable. MHS was trained and validated on manually labelled items and achieved an overall classification accuracy above 90% in two different test settings, significantly better than other machine learning algorithms. The recognition accuracy of traditional machine vision technology is not high, especially against complex backgrounds, and cannot meet the requirements of complex industrial applications. With the development of deep learning in recent years, machine recognition technology has also entered a new stage of development,
especially in the direction of image target detection (target detection means identifying the type of object in an image and marking its location). At present, many target detection algorithms are gradually being applied against various complex industrial backgrounds, and deep-learning-based target detection frameworks have been released in succession [11, 12]. One type comprises two-stage detection frameworks such as RCNN, Fast RCNN and Faster RCNN, which divide detection into localization and classification tasks [13]. Such algorithms usually have high accuracy but the disadvantages of relatively slow detection speed and low efficiency. There are also one-stage detection frameworks, such as YOLO/YOLOv2/YOLOv3 and SSD [14–17], which complete detection and localization simultaneously, treating target detection as a regression problem with a simple network structure and fast detection speed. Though their accuracy is relatively lower, they can meet the requirements of real-time detection. At present, most research focuses on the first step of garbage classification [18–20], and there are few studies on locating and identifying garbage in this complex background. The garbage transfer station has a harsh environment and various types of garbage, and the water content of residual waste directly affects its subsequent disposal. For this complex situation, this paper uses the Gaussian-YOLOv3 algorithm, based on deep convolutional neural networks, to deal with residual waste: during waste dumping, the water flow in the waste is identified and analyzed to provide substantial guidance for the subsequent disposal of residual waste.
2 Experimental Data Collection

The residual waste image data for this experiment were collected at a garbage transfer station in Shanghai. Under different light intensities, backgrounds, angles, distances and sizes, the process of residual waste trucks dumping garbage into containers was captured by an industrial high-definition camera, and two types of residual waste dumping images were selected, as shown in Fig. 1. The first type of residual waste image contains no water flow but does contain water-flow-like long plastic bags, as shown in Fig. 1(a); the second type contains water flows in varying numbers, as shown in Fig. 1(b).
Fig. 1. Types of residual waste images: (a) Type 1; (b) Type 2
A total of 8,000 images of the two types of residual waste were collected, comprising 2,000 images of the first type and 6,000 of the second. All collected images were uniformly cropped to 512 × 512 pixels. An annotation tool was used to mark the target location and category of the water flow in each residual waste image, as shown in Fig. 2. There are three water flow categories: small water, big water and other (water-flow-like flocculent residual waste, such as long garbage bags). After annotation, each image yields an XML file; the annotation information of all generated XML files is then extracted and written to a CSV file containing the garbage category and the location of each rectangular box.
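The XML-to-CSV flattening step can be sketched with the standard library. The VOC-style tag names (`object`, `name`, `bndbox`, `xmin` …), the sample annotation and the file name are assumptions for illustration; the actual annotation tool may use a different schema.

```python
# Sketch: flatten per-image XML annotations into CSV rows.
import csv, io
import xml.etree.ElementTree as ET

def xml_to_rows(xml_text, image_name):
    rows = []
    root = ET.fromstring(xml_text)
    for obj in root.iter("object"):
        cls = obj.findtext("name")   # e.g. small water / big water / other
        box = obj.find("bndbox")
        rows.append([image_name, cls] +
                    [box.findtext(k) for k in ("xmin", "ymin", "xmax", "ymax")])
    return rows

sample = """<annotation>
  <object><name>big water</name>
    <bndbox><xmin>40</xmin><ymin>60</ymin><xmax>180</xmax><ymax>300</ymax></bndbox>
  </object>
</annotation>"""

buf = io.StringIO()
writer = csv.writer(buf)
writer.writerow(["image", "class", "xmin", "ymin", "xmax", "ymax"])
for row in xml_to_rows(sample, "waste_0001.jpg"):
    writer.writerow(row)
```

Running this over every XML file and appending the rows yields the single CSV training index described above.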
3 Algorithm Background

YOLOv3 [16] is the third version of the YOLO (You Only Look Once) series of target detection algorithms. Compared with its predecessors, its accuracy is significantly improved, especially for small targets. The main improvements of YOLOv3 include an adjusted backbone network structure, multi-scale features for object detection, and logistic classifiers replacing Softmax for object classification.
Fig. 2. YOLOv3 network structure
YOLOv3 uses Darknet-53 as the backbone network; its network structure is shown in Fig. 2. Darknet-53 borrows the idea of ResNet: each residual module consists of two convolutional layers and a shortcut connection. The entire YOLOv3 structure contains only residual modules, without pooling layers or fully connected layers. YOLOv3 uses residual skip connections to solve the vanishing-gradient problem of deep networks, and uses up-sampling and concatenation to retain fine-grained features for small object detection. Each convolutional layer is followed by a batch normalization layer and a LeakyReLU activation layer, and the ResNet residual module is introduced to mitigate the training degradation that occurs as the network deepens. The
most prominent feature of YOLOv3 is detection at three different scales, in a manner similar to a feature pyramid network, so that it can detect objects of various sizes.
Fig. 3. YOLOv3 prediction box calculation structure
In more detail, when the three R, G, B channels of an image are input to the YOLOv3 network, as shown in Fig. 3, the object detection information (i.e., prediction box coordinates, objectness scores and category scores) is output from the three detection layers, and non-maximum suppression is used to merge the prediction results of the three detection layers into the final detection result. In the output of YOLOv3, each target category has a probability, but the prediction box has only a location without a probability; that is, the reliability of the current prediction box cannot be judged from the result. The background studied in this paper is very complex, and in many situations the detection target and the background are very similar. Therefore, this paper uses the Gaussian-YOLOv3 algorithm for modeling and training against this background to improve detection accuracy. Gaussian-YOLOv3 can output the reliability of each prediction box while leaving the structure and computational cost of YOLOv3 essentially unchanged, improving the overall detection accuracy of the model. Gaussian-YOLOv3 builds on YOLOv3 by extending the network output and improving the loss function so that the reliability of the prediction box can be output. As shown in Fig. 4, Gaussian-YOLOv3 uses a Gaussian model to compute the reliability of the prediction box in the output. Accordingly, the coordinate prediction output of Gaussian-YOLOv3 contains 8 dimensions per box: the coordinate position of the prediction box (the center coordinates x, y and the width and height w, h) and the uncertainties of these four coordinates. Since the output of Gaussian-YOLOv3 is adjusted, the calculation of the corresponding loss function is adjusted accordingly.
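The non-maximum suppression step that merges the three detection layers' outputs can be sketched in plain Python. The `(x1, y1, x2, y2, score)` box tuples, the 0.5 IoU threshold and the sample detections are illustrative, not the Darknet implementation:

```python
def iou(a, b):
    """Intersection-over-union of two axis-aligned boxes."""
    ix1, iy1 = max(a[0], b[0]), max(a[1], b[1])
    ix2, iy2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0, ix2 - ix1) * max(0, iy2 - iy1)
    area = lambda r: (r[2] - r[0]) * (r[3] - r[1])
    union = area(a) + area(b) - inter
    return inter / union if union else 0.0

def nms(boxes, iou_thresh=0.5):
    """Greedily keep the highest-scoring box, drop heavily overlapping ones."""
    keep = []
    for box in sorted(boxes, key=lambda b: b[4], reverse=True):
        if all(iou(box, k) < iou_thresh for k in keep):
            keep.append(box)
    return keep

dets = [(10, 10, 50, 50, 0.9), (12, 12, 52, 52, 0.8), (100, 100, 140, 140, 0.7)]
# the 0.9 box suppresses its 0.8 near-duplicate; the disjoint 0.7 box survives
```

In practice NMS is run per class, so a "big water" box never suppresses a "small water" box.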
Compared with the original YOLOv3, only the regression strategy of the coordinate position of the
Fig. 4. Gaussian-YOLOv3 prediction box calculation structure
prediction box is adjusted. When the original YOLOv3 performs prediction box regression, the network output is the coordinate itself, so the mean square error is used when calculating the gradient; since Gaussian-YOLOv3 outputs a mean and a variance, a Gaussian distribution strategy is combined when calculating the gradient.
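The difference between the two regression strategies can be illustrated numerically. This is a sketch, not the Darknet loss code, and all values are arbitrary: plain YOLOv3 scores a coordinate by squared error, while Gaussian-YOLOv3 scores the ground truth under the predicted Gaussian, so a box the model declares uncertain (large sigma) is penalised less for the same coordinate miss.

```python
import math

def mse_loss(pred, gt):
    """YOLOv3-style squared-error coordinate loss."""
    return (pred - gt) ** 2

def gaussian_nll_loss(mu, sigma, gt, eps=1e-9):
    """Gaussian-YOLOv3-style negative log-likelihood of gt under N(mu, sigma^2)."""
    pdf = math.exp(-((gt - mu) ** 2) / (2 * sigma ** 2)) \
          / math.sqrt(2 * math.pi * sigma ** 2)
    return -math.log(pdf + eps)

# Same 0.1 coordinate error, different predicted uncertainty:
confident = gaussian_nll_loss(mu=0.5, sigma=0.05, gt=0.6)
uncertain = gaussian_nll_loss(mu=0.5, sigma=0.30, gt=0.6)
# uncertain < confident: the variance output lets the model hedge hard boxes
```

The predicted sigma is also what gives each box the "reliability" score discussed above.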
4 Experiment and Result Analysis

The experimental hardware platform environment is shown in Table 1:

Table 1. Experimental hardware platform environment

Name                      Configuration
Operating system          Ubuntu 18.04.3 LTS
CPU                       Intel Core [email protected] GHz
GPU                       Nvidia GeForce RTX 2080Ti (11 GB video memory)
Deep learning framework   Darknet framework
The whole experiment is based on the Darknet deep learning framework. This paper adopts the idea of transfer learning: the initial weights for model training use the weight file (Gaussian_yolov3_BDD.weights) provided by the authors of Gaussian-YOLOv3. The batch size is set to 4 during training; the optimizer is Stochastic Gradient Descent (SGD), with an initial learning rate of 0.0001, momentum of 0.9 and weight decay coefficient of 0.0005. On this basis, all layers of the network are trained for 200k iterations. During training, the model parameters with the smallest loss are saved by comparing losses, and the final weight file obtained through multiple experiments is frozen as the detection model. To evaluate the effectiveness of the algorithm for detecting water flow in residual waste, the mean Average Precision (mAP) is used as the indicator of model performance [12]. mAP refers to the average value of the Average
Precision (AP) over all target categories, and it serves as the practical metric for target detection. The data set of this experiment consists of 8000 photos of residual waste in a complex environment: 2000 photos of the first type (no water flow but flocculent residual waste similar to water flow, so only the "other" category is annotated) and 6000 photos of the second type (with water flow, annotated as small water and big water). Because the environment in which the model is applied is particularly complex and the forms of residual waste are diverse, flocs similar to water flow are easily misidentified as water flow. To reduce the false recognition rate for such flocculent residual waste (such as long garbage bags), the experiment trained on two different data sets, as shown in Table 2. Data set I includes only the second type of photos, i.e., only residual waste photos with water flow, while data set II includes all photos of both types. For both data sets, the training and test data are randomly selected from the respective samples at a ratio of 8:2.

Table 2. Composition of the two data sets

Data set   Sample                  Training set (photos)   Test set (photos)
I          Second type             4800                    1200
II         First and second type   6400                    1600
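The per-class AP behind the mAP numbers reported next can be sketched as follows. Matching detections to ground truth is assumed already done; the confidences, TP/FP flags and ground-truth count are invented for illustration, and an all-points interpolation of the precision-recall curve is used.

```python
def average_precision(matches, num_gt):
    """matches: (confidence, is_true_positive) per detection of one class."""
    tp = fp = 0
    points = []
    for _, is_tp in sorted(matches, key=lambda m: m[0], reverse=True):
        tp, fp = tp + is_tp, fp + (not is_tp)
        points.append((tp / num_gt, tp / (tp + fp)))  # (recall, precision)
    ap, prev_recall = 0.0, 0.0
    for recall, _ in points:
        # interpolated precision: best precision at recall >= current level
        p_interp = max(p for r, p in points if r >= recall)
        ap += (recall - prev_recall) * p_interp
        prev_recall = recall
    return ap

# Three detections against two ground-truth waters: TP, FP, TP
ap = average_precision([(0.9, True), (0.8, False), (0.7, True)], num_gt=2)
```

Averaging this AP over the small water, big water and other classes gives the mAP used as the model metric.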
This paper first compares the precision of the Gaussian-YOLOv3 and YOLOv3 algorithms trained on the two data sets. The specific test results are shown in Table 3.

Table 3. Comparison of test results between Gaussian-YOLOv3 and YOLOv3

Data set   Algorithm         mAP (%)   FPS
I          YOLOv3            75.4      42.2
II         YOLOv3            81.9      41.9
I          Gaussian-YOLOv3   82.4      41.7
II         Gaussian-YOLOv3   87.5      41.0
From the test results on data sets I and II, it can be seen that, compared with YOLOv3, Gaussian-YOLOv3 raises mAP by 7.0 and 5.6 points respectively, with almost the same inference speed as YOLOv3. Even when the input image size is not 512 × 512 pixels, the recognition speed exceeds 40 fps. The model trained with Gaussian-YOLOv3 thus adapts better to different data sets. The mAPs of YOLOv3 and Gaussian-YOLOv3 on data set I are 75.4 and 82.4 respectively, while on data set II they reach 81.9 and 87.5. Compared with data set I, the mAPs of YOLOv3 and Gaussian-YOLOv3 on data set II
increased by 6.5 and 5.1 respectively. The experiments show that in a complex environment, when flocculent residual waste similar to water flow is present, labelling the water-flow-like flocculent waste separately, distinguishing it from the water flow categories, and then merging the labelled flocculent photos with the water-flow photos into one training data set improves the precision of the model and greatly reduces its false recognition rate (i.e., the model's misidentification of flocs drops significantly). Therefore, adding negative samples of similar-looking targets to the training set can improve the overall precision of the algorithm model, especially in such complex garbage backgrounds. To visually evaluate the detection effect of YOLOv3 and Gaussian-YOLOv3 on water flow in residual waste in complex environments, Fig. 5 shows detection examples of the two algorithms on the test data of data set II; the purple boxes mark big water and the blue boxes small water. In Fig. 5, the first column shows the detection results of YOLOv3 and the second column those of Gaussian-YOLOv3. The comparison shows that when residual waste resembling water flow appears in a picture, YOLOv3 incorrectly identifies the flocculent residual waste as water flow. Gaussian-YOLOv3 uses Gaussian modeling in the prediction output of the preselection boxes, which improves their learning precision and thereby the detection performance for water flow targets. Its false recognition rate on flocculent residual waste is significantly lower than YOLOv3's; even when the water flow is very small, false recognition is rarely seen and the accuracy is significantly improved. The comparison of the results in the fourth row of Fig.
5 shows that YOLOv3 is more difficult to detect targets in more complex environment, while Gaussian-YOLOv3 can detect targets with such complex environment, thereby improving the model's target recall rate. Based on the above results, compared to the YOLOv3 algorithm in the harsh environment of the garbage transfer station, the Gaussian-YOLOv3 algorithm has better accuracy in identifying and detecting water flow in the process of residual waste dumping, and the effect is better. At the same time, the Gaussian-YOLOv3 algorithm can accurately identify the water flow in the residual waste. When the water flow is small and there is more flocculent residual waste that similar to the water flow, the Gaussian-YOLOv3 algorithm can identify the small water flow without false recognition.
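The Gaussian modeling described above can be illustrated in a few lines. The snippet below is our own sketch, not the authors' code: Gaussian-YOLOv3 (Choi et al. [20]) predicts a mean and a standard deviation for each box coordinate, trains localization with a Gaussian negative log-likelihood, and folds the predicted localization uncertainty into the detection score. The function names are ours.

```python
import numpy as np

def gaussian_nll_loss(mu, sigma, target, eps=1e-9):
    """Negative log-likelihood of a box coordinate under the predicted
    Gaussian N(mu, sigma^2); this replaces the plain regression loss
    for the four box outputs (tx, ty, tw, th)."""
    var = sigma ** 2 + eps
    return 0.5 * np.log(2 * np.pi * var) + (target - mu) ** 2 / (2 * var)

def detection_confidence(objectness, class_prob, sigma_xywh):
    """Detection criterion: combine objectness and class score with the
    localization certainty (1 minus the mean predicted sigma)."""
    uncertainty = float(np.mean(sigma_xywh))
    return objectness * class_prob * (1.0 - uncertainty)

# A well-localized prediction is penalized less than a poorly localized one.
good = gaussian_nll_loss(np.array([0.50]), np.array([0.10]), np.array([0.52]))
bad = gaussian_nll_loss(np.array([0.50]), np.array([0.10]), np.array([0.90]))
assert good[0] < bad[0]
```

Boxes whose coordinates are predicted with small sigma keep a high detection score, while uncertain boxes (such as flocs mistaken for water flow) are suppressed, which matches the behavior observed in Fig. 5.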
Residual Waste Quality Detection Method Based on Gaussian-YOLOv3
Fig. 5. Test results of YOLOv3 and Gaussian-YOLOv3 on Data Set II: the first column is the test result of YOLOv3; the second column is the test result of Gaussian-YOLOv3
Z. Zhang et al.
5 Conclusion Aiming at the detection and identification of water flow during residual-waste dumping against complex backgrounds, this paper elaborates a Gaussian-YOLOv3-based water-flow detection method, covering data set collection, algorithm principles, and model training and optimization. It also compares the recognition accuracy of YOLOv3 and Gaussian-YOLOv3 on water flow in residual waste in complex environments. The experimental comparison shows that, relative to YOLOv3, the Gaussian-YOLOv3 algorithm produces fewer false detections of water flow, has a lower missed-detection rate, and achieves higher model accuracy. The paper also finds that back-annotating the flocculent waste that resembles water flow in residual-waste photos, and training the model on these photos together with the photos containing water flow, significantly improves model accuracy and effectively reduces the false-detection rate, in particular the model's misclassification of flocculent waste as water flow. Finally, the results show that the residual-waste water-flow detection method based on Gaussian-YOLOv3 can accurately identify water flow in residual waste, meets the requirements of residual-waste quality identification, can replace manual inspection, and improves the detection efficiency of residual-waste quality identification.
References
1. Chen, H.: Difficulties and countermeasures of classification and management of municipal solid waste in Shanghai. Sci. Dev. 110(01), 79–86 (2018)
2. Xie, Y., Zhi, H.: A review of research on image recognition technology based on machine vision. Sci. Technol. Innov. (7), 74–75 (2018)
3. Ruiz, V., et al.: Automatic image-based waste classification. Springer, Cham (2019)
4. Yang, M., Thung, G.: Classification of trash for recyclability status. CS229 Project Report (2016)
5. Rad, M.S., von Kaenel, A., Droux, A., et al.: A computer vision system to localize and classify wastes on the streets. In: International Conference on Computer Vision Systems, pp. 195–204. Springer, Cham (2017)
6. Mittal, G., Yagnik, K.B., Garg, M., et al.: SpotGarbage: smartphone app to detect garbage using deep learning. In: Proceedings of the 2016 ACM International Joint Conference on Pervasive and Ubiquitous Computing, pp. 940–945. ACM (2016)
7. Sudha, S., Vidhyalakshmi, M., Pavithra, K., et al.: An automatic classification method for environment-friendly waste segregation using deep learning. In: 2016 IEEE Technological Innovations in ICT for Agriculture and Rural Development (TIAR). IEEE (2016)
8. Sakr, G.E., Mokbel, M., Darwich, A., et al.: Comparing deep learning and support vector machines for autonomous waste sorting. In: IEEE International Multidisciplinary Conference on Engineering Technology. IEEE (2016)
9. Zhihong, C., Hebin, Z., Yanbo, W., et al.: A vision-based robotic grasping system using deep learning for garbage sorting. In: 2017 36th Chinese Control Conference (CCC). IEEE (2017)
10. Chu, Y., Huang, C., Xie, X., et al.: Multilayer hybrid deep-learning method for waste classification and recycling. Comput. Intell. Neurosci. (2018)
11. Zhao, Z.Q., Zheng, P., Xu, S.T., et al.: Object detection with deep learning: a review. IEEE Trans. Neural Netw. Learn. Syst. 30, 3212–3232 (2018)
12. Wu, X., Sahoo, D., Hoi, S.C.H.: Recent advances in deep learning for object detection. Neurocomputing 396, 39–64 (2019)
13. Ren, S., He, K., Girshick, R., et al.: Faster R-CNN: towards real-time object detection with region proposal networks. IEEE Trans. Pattern Anal. Mach. Intell. 39(6), 1137–1149 (2015)
14. Redmon, J., Farhadi, A.: YOLO9000: better, faster, stronger. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (2017)
15. Redmon, J., Divvala, S., Girshick, R., et al.: You only look once: unified, real-time object detection. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 779–788. IEEE Press, New York (2016)
16. Redmon, J., Farhadi, A.: YOLOv3: an incremental improvement. In: IEEE Conference on Computer Vision and Pattern Recognition (2018)
17. Liu, W., Anguelov, D., Erhan, D., et al.: SSD: single shot multibox detector. In: European Conference on Computer Vision. Springer International Publishing (2016)
18. Peng, X., Li, J., Li, W., et al.: Research on garbage identification and classification based on the SSD algorithm. J. Shaoguan Univ. 040(006), 15–20 (2019)
19. Mingjie, W.: Automatic garbage location and classification method based on YOLO V3. Wireless Internet Technol. 20, 110–112 (2019)
20. Choi, J., Chun, D., Kim, H., et al.: Gaussian YOLOv3: an accurate and fast object detector using localization uncertainty for autonomous driving (2019)
An Empirical Study of English Teaching Model in Higher Vocational Colleges Based on Data Analysis
Wenbo Zhao
Jilin Engineering Normal University, Changchun 130052, Jilin, China [email protected]
Abstract. With China’s accession to the WTO, the rapid development of the market economy and the acceleration of global integration, China’s international status continues to improve. Professional English is penetrating professional positions in all industries, making English teaching in higher vocational colleges particularly important. The goal of China’s higher vocational English courses is to train students to use the English they have learned in their future professional positions. Based on a questionnaire survey of 324 students from Jilin Engineering Normal University, this article conducted a one-year empirical study of two tourism-management classes from the perspective of EOP teaching theory. Comparative analysis of the data before and after the experiment shows that the career-oriented teaching model is effective in vocational college English teaching. Keywords: Professional competence · English teaching · EOP teaching theory
1 Introduction With frequent economic exchanges with foreign countries and the continuous development of society, students in higher vocational colleges have more opportunities to participate in foreign-related activities, especially in foreign-invested and joint-venture enterprises. This requires them to have basic oral English communication ability. In actual teaching, however, the goal of vocational college English teaching is still the A/B-level test of English application ability, and the teaching content still focuses mainly on vocabulary, grammar and reading. The teaching model is conservative and traditional, and most English classrooms still adopt the “injection” (cramming) and “text-cracking” models. Classroom teaching is dull, boring and mechanically repetitive. It focuses neither on the posts students will hold after graduation nor on the knowledge and skills required by their majors. Many graduates are not well qualified for future jobs because they lack the corresponding English application ability and practical experience, compounded by limitations in their professional knowledge. To break this deadlock and adapt to the needs of the market and of professional positions, it is necessary to explore new and more suitable teaching methods [1–4].
© The Editor(s) (if applicable) and The Author(s), under exclusive license to Springer Nature Singapore Pte Ltd. 2021 M. Atiquzzaman et al. (Eds.): BDCPS 2020, AISC 1303, pp. 470–475, 2021. https://doi.org/10.1007/978-981-33-4572-0_68
2 Literature Review ESP (English for Specific Purposes) refers to English related to a specific occupation or subject, and is an English course based on the learner’s specific purpose (Hutchinson & Waters 1987). ESP has been one of the hotspots in foreign language education in recent years and has attracted the attention of English teachers in vocational colleges, because ESP links English teaching with students’ future careers, is highly targeted, and is consistent with the goal of higher vocational education of training high-quality applied talents. Although views on the connotation of ESP differ and no final conclusion has been reached, scholars generally accept the dichotomy proposed by Jordan (1997), based on the learner’s ultimate purpose and environment of language use: EAP (English for Academic Purposes) and EOP (English for Occupational Purposes), that is, academic English and professional English. EOP is a teaching system with clear goals, strong pertinence and practicality, suitable for cultivating students’ professional ability. The EOP teaching system emerged in Western countries in the 1950s and 1960s. Hutchinson & Waters (1987) proposed that EOP is a teaching approach that takes students’ learning needs as the starting point for achieving the purposes of classroom teaching. John Munby (1979) held that the core of the EOP teaching system is to design courses and teaching methods according to the characteristics of language communication in future professional situations; it emphasizes the importance of target-situation analysis. Jordan (1979) held that professional English revolves around learner needs and professional needs to cultivate learners’ ability to communicate in English in a specific work environment.
For vocational students, the goal of EOP-oriented education is employment after graduation. Compared with general higher education, higher vocational colleges tie their educational content to professions and purposefully supply professionals to different majors; their teaching content is established to cultivate capable talents. In the current curriculum of higher vocational colleges, vocational ability is the main content: employment guidance is provided on the basis of students’ practical ability, and professional courses, teaching models and supplementary teaching content are then organized around employment. Introducing the EOP model makes the teaching content of higher vocational education better match the needs of society and of specific jobs, and meets the employment needs of students. In particular, the EOP teaching model emphasizes the introduction of workplace English; during teaching it helps students build a workplace environment in advance, so that they can develop the professional competence and quality required of vocational college students. In China, research on EOP teaching originated in the 1980s. Yang Huizhong (1978) first introduced the concept of ESP. Xu Xiaozhen (2009) constructed an authenticity evaluation system for professional English based on the action system. Ning Shunqing (2012) demonstrated the feasibility and necessity of the professional English teaching model from the perspectives of the laws of language teaching, social
needs, teacher and student development. Ni Yuhong (2013) proposed the EOP shift of English teaching in higher vocational schools based on the relevant links between courses, textbooks, teaching methods, evaluation, teachers and other factors [5–10].
3 Methodology Based on a questionnaire survey of 324 students from Jilin Engineering Normal University, the research conducted a one-year empirical study of two tourism-management classes from the perspective of EOP teaching theory and the task-based approach, using quantitative research methods. A control group and an experimental group were set up, adopting the traditional teaching mode (hereinafter Mode 1) and the “EGP + EOP” English teaching mode (hereinafter Mode 2), respectively. Comprehensive English Test volumes I and II, of the same difficulty level and covering General English and Occupational English respectively, were administered before and after the experiment. The research used SPSS 19.0 statistical software to examine the two achievement-test scores of the experimental and control groups and to verify whether the hypothesis holds. The overall characteristics of Mode 1 were: adopting traditional classroom teaching; acknowledging the teacher’s authoritative role in classroom teaching; respecting the teacher’s personal judgment and selection of key and difficult teaching points; and emphasizing word formation, sentence patterns and other language structures, difficult-sentence analysis and word meanings. Students are required to preview before class, memorize the vocabulary and phrases after the text, find the difficult sentences in the text and consolidate them with supplementary example sentences, and review after class; teaching progress and difficulty are unified, with attention to the language system. Students’ language ability is strengthened mainly through teacher-centered classroom dictation, questioning, explanation, examples and analogy, and through tasks such as preview, review and written assignments.
The implementation of Mode 2 must start with teacher training: studying how teachers can change their teaching concepts, update their teaching methods and continuously learn new occupational knowledge; how to combine foreign language teaching organically with the cultivation of applied talents; and how teachers can proficiently use the autonomous learning platform to cultivate students’ autonomous learning ability from multiple angles and levels. Teachers focus on developing learners’ professional and language skills. Classroom teaching mainly adopts learner-centered methods such as dialogue, group discussion, role playing and personal presentation. In Mode 2, the teacher plays the roles of organizer, commander, monitor, manager and decision maker of classroom activities. This experiment aimed to examine the feasibility and effectiveness of the professionally oriented English teaching model in higher vocational colleges. The participating teachers used Mode 1 and Mode 2 to teach the research subjects for one academic year. After the end of the school year, through the
comparison of the various scores, the experimental group and the control group were compared in terms of listening, reading and writing.
4 Results and Discussion
4.1 Pre-test Data Analysis
Table 1 shows that, by the independent-samples t test, there is little difference between the experimental class and the control class, indicating no significant difference in English performance between the two classes before the experiment.
Table 1. Comparison of pre-test data between experimental class and control class
Group              | Number | Average | Std. deviation | Std. error mean
Experimental class | 38     | 57.48   | 20.081         | 2.841
Control class      | 38     | 57.16   | 19.762         | 2.784
4.2 Post-test Data Analysis
At the end of the semester, an English proficiency test of the same difficulty was administered to the two classes; the data are shown in Table 2.
Table 2. Comparison of post-test data between experimental class and control class
Group              | Number | Average | Std. deviation | Std. error mean
Experimental class | 38     | 70.13   | 16.083         | 2.233
Control class      | 38     | 58.11   | 19.502         | 2.784
The independent-samples t test shows that the average of the experimental class is 70.13 and the average of the control class is 58.11. The t value is 3.040 and the p value is 0.003 < 0.05, indicating a significant difference between the two classes. From the comparative analysis of the pre- and post-test data of the teaching experiment, two conclusions are drawn: first, the vocational-English-oriented college English teaching model is feasible and effective in vocational English teaching; second, the vocational English teaching mode integrating EGP and EOP helps to stimulate students’ interest in professional learning.
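As a check on the reported statistics, the independent-samples t statistic can be recomputed from the summary data in Tables 1 and 2 (group means, standard deviations, n = 38 per class). The sketch below uses a pooled-variance t statistic; small discrepancies from the reported t = 3.040 are expected because the recomputation starts from rounded summary statistics.

```python
import math

def pooled_t(mean1, sd1, n1, mean2, sd2, n2):
    """Independent-samples t statistic with pooled variance (the
    equal-variance case of a standard independent-samples t test)."""
    sp2 = ((n1 - 1) * sd1**2 + (n2 - 1) * sd2**2) / (n1 + n2 - 2)
    se = math.sqrt(sp2 * (1 / n1 + 1 / n2))
    return (mean1 - mean2) / se

# Pre-test (Table 1): the difference should be far from significant.
t_pre = pooled_t(57.48, 20.081, 38, 57.16, 19.762, 38)

# Post-test (Table 2): the difference should exceed the two-tailed
# critical value for df = 74 at the 0.01 level (about 2.64).
t_post = pooled_t(70.13, 16.083, 38, 58.11, 19.502, 38)

assert abs(t_pre) < 1.0  # pre-test groups are comparable
assert t_post > 2.64     # post-test difference is significant
```

Running this gives t_pre ≈ 0.07 and t_post ≈ 2.93, consistent with the paper's conclusion that the classes were comparable before the experiment and differed significantly afterwards.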
5 Conclusion The vocational-English-oriented college English teaching mode follows the development direction of higher vocational education and conforms to the laws of English language teaching. On the premise of consolidating basic English, combining EGP and EOP not only helps students acquire language knowledge but also promotes their learning of professional skills. In short, EGP teaching alone can no longer meet today’s demand for compound talents. Vigorously promoting ESP and EOP teaching meets the needs of social development and is also an inherent need of higher vocational education for training skilled talents. By promoting the development of ESP and EOP in higher vocational English teaching, we can cultivate more compound foreign-language talents with a solid English foundation and practical skills, meeting society’s demand for such talents. This study also has limitations. Owing to constraints on teaching tasks and class hours, vocational English teaching here focused more on cultivating communicative competence, so the training of students’ professional English ability was not deep enough; in addition, the number of experimental subjects was small. At the same time, this model challenges current English teachers in higher vocational schools: how teachers can build a solid language foundation and become competent in professional teaching is a subject that needs further research. Acknowledgements. This work is supported by the 2018 Key Projects of Jilin Province Vocational Education and Adult Education Teaching Reform Research of Jilin Province Educational Commission (Grant No. 2018zcz022) and by the Jilin Vocational and Technical Education Association (Grant No. 2018XHY115).
References
1. Shaoqun, C.: Entering real professional life—the application of the EOP model in English education in higher vocational education. J. Shijiazhuang Vocat. Tech. Coll. 05, 15–17 (2004)
2. Hutchinson, T., Waters, A.: English for Specific Purposes: A Learning-Centred Approach. Cambridge University Press, Cambridge (1987)
3. Munby, J.: Communicative Syllabus Design. Cambridge University Press, Cambridge (1979)
4. Jordan, R.R.: English for Academic Purposes: A Guide and Resource Book for Teachers. Cambridge University Press, Cambridge (1997)
5. Shunqing, N.: The motivation and path of the EOP English teaching model in higher vocational education—taking Guangdong higher vocational English teaching reform as an example. Contemp. Foreign Lang. Res. 7, 63–66 (2018)
6. Yuhong, N.: Research on the EOP turn of English teaching in higher vocational colleges and its influencing factors. Foreign Lang. World 4, 90–96 (2013)
7. Qianqian, S.: Discussion on the EOP curriculum framework of higher vocational electrical professional English based on EGP + ESP teaching theory. Guangxi Educ. 31, 81–82 (2018)
8. Xiaozhen, X.: On the construction and implementation of the authenticity evaluation system of professional English. J. Shenzhen Vocat. Tech. College 2, 13–19 (2009)
9. Huizhong, Y.: Teaching and research of English for science and technology. Foreign Lang. 3, 72–74 (1978)
10. Xiaorui, W.: A brief talk on situational teaching of English for occupation (EOP) based on constructivist teaching theory. J. Wuhan Inst. Shipbuilding Technol. 04, 119–122 (2012)
Design of Power System of Five-Axis CNC Machine Tool
Fuman Liu
Jilin Engineering Normal University, Changchun, China [email protected]
Abstract. The power system of a five-axis CNC machine tool is designed, determining the maximum machining working space for an alloy-steel workpiece of maximum overall dimensions. Based on the analysis and calculation of the machining process parameters, the cutting force, cutting speed, power and torque are checked. Carrying the design from the analysis and calculation of the material cutting force through to the power system improves the stability and accuracy of machining. Keywords: Five-axis CNC machine tool · Power system · Processing parameters · Design
1 Introduction With the rapid development of national industry, the processing demand for industrial parts is increasing, and intelligent manufacturing is developing from three-axis to multi-axis machines. Increasing attention is being paid to the ability of five-axis CNC machine tools to machine parts with complex curved surfaces: they not only solve the problem of parts that are difficult or impossible to machine, but also improve machining efficiency, thereby accelerating the development of China’s industry. It is therefore of great significance to design the power system of a five-axis CNC machine tool through calculation of the material cutting force, selection of the motors and design of the power system [1].
2 Power System Design of Five-Axis CNC Machine Tool
2.1 Calculation of Cutting Force
Cutting force refers to the force acting on the cutting tool under different conditions during the cutting process; its unit is N. The machining case considered here is milling the plane of alloy steel (HBS 225–325) with a carbide milling cutter: number of teeth z = 3, milling cutter diameter φ10 mm, rake angle 0°, tool speed n = 1600 r/min, feed rate vf = 120 mm/min, cutting depth ap = 1 mm, milling width B = 6 mm [2]. Checking against the reference values in Table 1, the milling-speed range for alloy steel (HBS 225–325) with a carbide cutter is vc = 37–80 m/min; the calculated cutting speed below lies within this range and is therefore reasonable.
© The Editor(s) (if applicable) and The Author(s), under exclusive license to Springer Nature Singapore Pte Ltd. 2021 M. Atiquzzaman et al. (Eds.): BDCPS 2020, AISC 1303, pp. 476–481, 2021. https://doi.org/10.1007/978-981-33-4572-0_69
Calculation of cutting speed vc:
vc = π·d0·n/1000 = π × 10 × 1600/1000 = 50.24 m/min    (1)
where d0 is the milling cutter diameter (mm) and n is the tool speed (r/min). So the vc value is reasonable [3].
Table 1. Cutting speed vc values
Work piece material | Hardness HBS | Milling speed, high speed steel (m/min) | Milling speed, cemented carbide (m/min)
Alloy steel         | 220          | 15–35                                   | 55–120
Alloy steel         | 225–325      | 10–24                                   | 37–80
Alloy steel         | 325–425      | 5–9                                     | 30–60
The milling material is alloy steel (HBS 225–325) with a carbide milling cutter. From Table 2, the reference feed per tooth is fz = 0.03–0.08 mm/z; the calculated value below lies within this range and is therefore reasonable. Calculation of the feed per tooth fz:
fz = vf/(z·n) = 200/(2 × 1600) = 0.063 mm/z    (2)
where z is the number of milling cutter teeth and vf is the feed rate. So the fz value is reasonable [4].
Table 2. Feed per tooth fz values
Work piece material | Hardness HBS | Cemented carbide end milling cutter (mm/z) | Three-face milling cutter (mm/z)
Alloy steel         | 220–280      | 0.10–0.3                                   | 0.03–0.08
Alloy steel         | 280–320      | 0.08–0.2                                   | 0.05–0.15
Alloy steel         | 320–380      | 0.06–0.15                                  | 0.05–0.12
Calculation of cutting force Fc:
Fc = 10 · cp · ap^0.86 · fz^0.74 · B · z · K · K1 / d0^0.86
   = 10 × 68 × 1^0.86 × 0.063^0.74 × 6 × 3 × 1.2 × 1 / 10^0.86 = 262.1 N    (3)
478
F. Liu
where Cp is the influence coefficient of the material on the cutting force (from Table 3, Cp = 68); K is the influence coefficient of the tool rake angle on the cutting force (from Table 4, K = 1.2); and K1 is the influence coefficient of the cutting speed on the cutting force (from Table 5, K1 = 1) [5].
Table 3. Influence coefficient Cp of the material on the cutting force
Work piece material | Cp, cylindrical milling cutter | Cp, end milling cutter
Steel               | 68                             | 82
Cast iron           | 30                             | 50
Bronze              | 22.5                           | 37.5
Table 4. Influence coefficient K of the tool rake angle on the cutting force
Rake angle γ0 | 15° | 10° | 5°  | 0°  | −5° | −10° | −15° | −20°
K             | 0.9 | 1   | 1.1 | 1.2 | 1.3 | 1.4  | 1.5  | 1.6
Table 5. Influence coefficient K1 of the cutting speed on the cutting force
Cutting speed vc (m/min) | 50   | 75   | 100  | 125  | 150  | 175  | 200  | 250
K1                       | 1.00 | 0.98 | 0.96 | 0.94 | 0.92 | 0.88 | 0.88 | 0.86
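The parameter chain of Eqs. (1)–(3) can be checked numerically. The sketch below is ours, not from the paper; it recomputes the cutting speed and cutting force with the coefficients read from Tables 3–5, and also evaluates the resulting cutting power Fc·vc/60000.

```python
import math

# Machining parameters from Sect. 2.1 (alloy steel, carbide milling cutter).
d0 = 10      # milling cutter diameter (mm)
n = 1600     # tool speed (r/min)
fz = 0.063   # feed per tooth (mm/z)
ap, B, z = 1, 6, 3          # cutting depth (mm), milling width (mm), teeth
cp, K, K1 = 68, 1.2, 1      # coefficients from Tables 3, 4 and 5

vc = math.pi * d0 * n / 1000                                     # Eq. (1), m/min
Fc = 10 * cp * ap**0.86 * fz**0.74 * B * z * K * K1 / d0**0.86   # Eq. (3), N
Pc = Fc * vc / 60000                                             # cutting power, kW

assert 37 <= vc <= 80          # within the Table 1 reference range
assert abs(Fc - 262.1) < 1.0   # matches the value computed in the text
```

The computed values (vc ≈ 50.3 m/min, Fc ≈ 262 N, Pc ≈ 0.22 kW) agree with the figures used later for motor selection.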
Calculation of cutting power Pc:
Pc = Fc · vc/60000 = 262.1 × 50.24/60000 = 0.219 kW    (4)
where Fc is the cutting force and vc is the cutting speed.
2.2 Selection of Spindle Motor
Considering the simplicity of the overall machine structure, an electric spindle head (a power motor combined with a tool clamping device) is adopted, without a belt drive for speed reduction; this simplifies the mechanical structure and facilitates maintenance [6]. From the cutting power calculation, the cutting power required by the machine tool is Pc = 0.219 kW at a rotation rate n = 1600 r/min. Model selection: the SQD80-1.5L-24K water-cooled electric spindle head meets the requirements; see Table 6 for its parameters [7].
Table 6. Main technical parameters of the SQD80-1.5L-24K water-cooled spindle head
Spindle motor model | Power (kW) | Rotation rate (r/min) | Current (A) | Voltage (V) | Frequency (Hz)
SQD80-1.5L-24K      | 1.5        | 0–24000               | 7           | 220 or 380  | 400
2.3 Primary Selection of Feed Servo Motor
The servo motor is directly connected to the screw nut by a coupling, converting rotary motion into linear motion. Because the linear cutting motion is slow, a low motor speed is required; when the table returns without load after cutting, the motor reverses at a higher speed. To ensure frequent and accurate forward/reverse switching and speed adjustment, a servo motor is selected: it controls the rotation angle by controlling pulse timing, realizing accurate positioning in both directions of rotation. The primary motor model is the VEMT servo motor 60ST-M01930F; its parameters are shown in Table 7 [8].
Table 7. Servo motor parameters
Motor model  | Rated power (kW) | Maximum speed (r/min) | Rated torque (N·m) | Maximum torque (N·m) | Motor inertia (kg·m²)
60ST-M01930F | 0.6              | 3600                  | 1.91               | 5.4                  | 0.526 × 10⁻⁴
2.4
Selection of Machine Tool A/C-Axis Mechanism Motors
Selection of the C-axis motor. The machine tool’s C axis works over a range of ±360°, and the C-axis speed is not high, so a stepping motor is chosen. A stepping motor’s rotation angle is proportional to the number of input pulses; it starts, stops and reverses well, and the accuracy error of each step does not accumulate into the next step, so position and angle accuracy are reliably unaffected by repeated motion [9]. The torque required for C-axis operation is
T_C = Fc × Rc = 262.1 × 0.08 = 20.968 N·m    (5)
where Fc is the cutting force, Fc = 262.1 N, and Rc is the radius of the disc table, Rc = 80 mm = 0.08 m (not less than this value). The holding torque of the stepping motor must exceed T_C = 20.968 N·m to meet the requirement, so the 110-series two-phase high-torque hybrid stepping motor HSTM110-1.8-S-150-4-6.5 is selected. See Table 8 for its parameters.
Table 8. Stepping motor parameters
Motor model | Holding torque (N·m) | Step angle (°) | Current/phase (A) | Motor weight (kg) | Body length (mm)
HSTM110-1.8 | 21                   | 1.8            | 6.5               | 8.4               | 150.4
Selection of the A-axis motor. The working space for the machine tool’s A axis is limited, so a servo motor connected to a worm gear reducer is selected, making the mechanical structure more compact; the working range is 30°–120°. The selected model is the VEMT servo motor 60ST-M01330F. When the A axis works, the required power PA is
PA = MAC · g · vAC/1000 = 10 × 9.8 × 0.837/1000 = 0.082 kW    (6)
where MAC is the estimated mass of the C-axis assembly and workpiece, 10 kg (greater than the actual value), and vAC is the estimated feed rate, vAC = vc = 50.24 m/min = 0.837 m/s. The total power required for A-axis motor operation is
PA_total = (Pc + PA)/η = (0.219 + 0.082)/0.8 = 0.37625 kW    (7)
where η is the rotating efficiency of the worm gear reducer, η = 0.7–0.9, taken here as η = 0.8, and Pc is the cutting power [10]. The rated power of the servo motor must satisfy P > PA_total = 0.37625 kW, so the VEMT servo motor 60ST-M01330F meets the requirements; its parameters are shown in Table 9.
Table 9. Servo motor parameters
Motor model  | Rated power (kW) | Maximum speed (r/min) | Rated torque (N·m) | Maximum torque (N·m) | Motor inertia (kg·m²)
60ST-M01330F | 0.4              | 3600                  | 1.27               | 3.81                 | 0.407 × 10⁻⁴
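The motor-sizing chain of Eqs. (5)–(7) can be rechecked numerically against the motor ratings in Tables 8 and 9. The sketch below is ours, using the values computed in the text:

```python
# Values computed earlier in the text.
Fc = 262.1   # cutting force (N)
Pc = 0.219   # cutting power (kW)

# C axis: required torque vs. stepping-motor holding torque (Table 8).
Rc = 0.08          # disc table radius (m)
Tc = Fc * Rc       # Eq. (5): 20.968 N·m
assert Tc < 21     # HSTM110-1.8 holding torque, N·m

# A axis: required power vs. servo-motor rated power (Table 9).
M_ac, g, v_ac = 10, 9.8, 0.837    # mass (kg), gravity (m/s²), feed rate (m/s)
Pa = M_ac * g * v_ac / 1000       # Eq. (6): 0.082 kW
eta = 0.8                         # worm-gear reducer efficiency
Pa_total = (Pc + Pa) / eta        # Eq. (7): 0.376 kW
assert Pa_total < 0.4             # 60ST-M01330F rated power, kW
```

Both margins are tight (20.968 N·m against a 21 N·m holding torque, 0.376 kW against a 0.4 kW rating), which is why the text stresses that the mass and radius estimates are taken no smaller than their actual values.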
3 Conclusion The power system of the multi-axis machine tool has been designed in detail. The machine occupies a small floor area and machines stably. Based on the analysis and calculation of the machining process parameters, from the material cutting-force analysis through to the system design, the cutting force, cutting speed, power and torque are checked, and the structural parameters are optimized to improve the stability and accuracy of the machine tool.
References
1. Lianggui, P., Guoding, C.: Mechanical Design, pp. 64–66. Higher Education Press, Beijing (2013). (in Chinese)
2. Engineering Graphics Research Office of Dalian University of Technology: Mechanical Drawing, pp. 3–10. Higher Education Press, Beijing (2013)
3. Xianze, X., Shuhua, D.: Fundamentals of Precision Machinery Design, 3rd edn. Electronic Industry Press, Beijing (2015). (in Chinese)
4. Slocum, A.H., Jianhua, W.: Precision Machine Design. Machine Press, Beijing (2017)
5. Shuzi, Y.: Manual of Machinist. China Machine Press, Beijing (2002). (in Chinese)
6. Haishan, S.: Practical Milling Manual. Shanghai Science and Technology Press, Shanghai (1982). (in Chinese)
7. Heng, S., Zuomo, C.: Mechanical Principle. Higher Education Press, Beijing (2013). (in Chinese)
8. Gulyaev, P.V., Shelkovnikov, Y.K.: High-accuracy inertial rotation. Russ. Electr. Eng. 18(10), 521–523 (2010)
9. Pan, F., Bai, Y.: The measurement for rotary axes of 5-axis machine tools. Shanghai Second Polytechnic University, Industrial Engineering (2016). (in Chinese)
10. Jywe, W.-Y., Liu, C.-H.: Development of a measurement system for five-axis CNC machine tools. National Formosa University (2010)
Design of Electrical Control System for Portable Automatic Page Turner
Xiurong Zhu1 and Xinyue Wang2
1 Jilin Engineering Normal University, Changchun, China [email protected]
2 Changchun Normal University, Changchun, Jilin, China
Abstract. This portable automatic page turner is designed with portability and low cost as the goals. It uses a simple mechanism and control scheme to realize the page-turning function, controlled by an automatic control circuit. The area covering the pages is very small, the overall design is foldable, and pages can be turned quickly. It can be powered either from a plug-in supply or from batteries, and a wireless remote control mode is provided for convenient use at a distance. Keywords: Portable · Automatic page turner · Electrical control system · Design
1 Introduction

With the progress of the times and the development of science and technology, people have higher demands for reading paper books, yet it is inconvenient for readers whose hands are injured or bandaged to turn pages. Among existing automatic page-turning devices at home and abroad, some realize the page-turning function through purely mechanical control and others through electromechanical control, but their control circuits are relatively complex. Therefore, building on the existing technology, a page turner with a simple and compact structure, a simple automatic control circuit, a foldable and easy-to-carry form, and a low price will have a very promising market prospect [1].
2 Electrical Control Circuit Design

2.1 Motor Selection

As the power source of the mechanism, the motor determines its normal operation and working efficiency. In this design, the load at the end effector is only a page, each component of the actuator is made of engineering plastic, the sizes are small, and the masses are very light, so the required power is low. In addition, the mechanism must not move too fast, so the output speed should be controlled at about 5 r/min. We therefore choose the 60KTYZ series micro AC synchronous gear motor produced by Changzhou Haisheng Electric Corporation [2] (Table 1).

© The Editor(s) (if applicable) and The Author(s), under exclusive license to Springer Nature Singapore Pte Ltd. 2021
M. Atiquzzaman et al. (Eds.): BDCPS 2020, AISC 1303, pp. 482–487, 2021. https://doi.org/10.1007/978-981-33-4572-0_70
Table 1. Technical parameters of the 60KTYZ series AC synchronous gear motor

Motor model: 60KTYZ
Input power (W): 3
Synchronous speed (r/min): 6
Output torque (kg·cm): 1
Rated voltage (V): 24
Reduction ratio: 1:85.25

2.2 Problems in the Design of the Electrical Control System

In electrical control design, close attention should be paid to the many lessons summed up by designers, users and maintainers in long-term practice, so that the circuit is simple, correct, safe and reliable, reasonable in structure, and convenient to use and maintain.

2.3 Electrical Control Circuit Design
The electrical control circuit mainly controls the power on/off, starting and stopping of the motors. There are four motors in this design: the left and right page-turning motors and the left and right page-pressing motors. The switching on and off, starting, stopping, and forward and reverse rotation of each motor are all controlled by the electrical system, and the stroke of each mechanism is controlled by limit switches. The circuit elements are listed in Table 2 [3].

Table 2. Function list of electrical components

Name  Function
M1    Left page-turning motor
M2    Right page-turning motor
M3    Left page-pressing motor
M4    Right page-pressing motor
HL    Power indicator
SB1   Main power button
SB2   Left page-turning motor button
SB3   Right page-turning motor button
KM1   Left page-turning motor relay
KM2   Right page-turning motor relay
KM3   Left page-pressing motor relay
KM4   Right page-pressing motor relay
QS    Isolating switch
FU    Fuse
FR    Thermal relay
SQ1   Left reset limit switch
SQ2   Power-on switch of left page-pressing motor
SQ3   Power-on switch of right page-pressing motor
SQ4   Left page-pressing limit switch
SQ5   Right reset limit switch
SQ6   Power-on switch of right page-pressing motor
SQ7   Power-on switch of left page-pressing motor
SQ8   Right page-pressing limit switch

The layout of the limit switches is shown in Fig. 1.
Fig. 1. Layout of limit switch
The electrical control diagram is shown in Fig. 2.
Fig. 2. Electrical control diagram
Electrical control principle:

(1) Start and stop of left page-turning motor M1. When button SB2 is pressed, coil KM1 is energized, and its normally open main contact and auxiliary contact close at the same time, so that the left page-turning motor M1 rotates. When the stop on the page-turning crank hits travel switch SQ1, the normally closed contact of SQ1 opens, coil KM1 is de-energized, its normally open main and auxiliary contacts open, and motor M1 loses power and stops. To prevent misoperation, the left and right page-turning motors are electrically interlocked: while one motor is powered, the circuit of the other is broken, so it cannot be started even if its button is pressed [4].

(2) Start and stop of right page-turning motor M2. When button SB3 is pressed, coil KM2 is energized, and its normally open main contact and auxiliary contact close at the same time, so that the right page-turning motor M2 rotates. When the stop on the page-turning crank hits travel switch SQ5, the normally closed contact of SQ5 opens, coil KM2 is de-energized, its normally open main and auxiliary contacts open, and motor M2 loses power and stops [5].
(3) Start and stop of left page-pressing motor M3. When the left page-turning motor starts to rotate, it drives the crank-slider mechanism. After the crank has rotated through a certain angle, the stop on the crank strikes travel switch SQ2, the normally open contact of SQ2 closes, and coil KM3 is energized; its normally open main contact and auxiliary contact close at the same time, so that the left page-pressing motor M3 rotates. When the stop on the page-pressing crank strikes travel switch SQ4, the normally closed contact of SQ4 opens, coil KM3 is de-energized, its normally open main and auxiliary contacts open, and motor M3 loses power and stops [6].

(4) Start and stop of right page-pressing motor M4. When the right page-turning motor starts to rotate, it drives the crank-slider mechanism. After the crank has rotated through a certain angle, the stop on the crank strikes travel switch SQ3, the normally open contact of SQ3 closes, and coil KM4 is energized; its normally open main contact and auxiliary contact close at the same time, so that the right page-pressing motor M4 rotates. When the stop on the page-pressing crank strikes travel switch SQ8, the normally closed contact of SQ8 opens, coil KM4 is de-energized, its normally open main and auxiliary contacts open, and motor M4 loses power and stops [7].
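The start/stop and interlock behavior described above can be sketched as a small state simulation. This is an illustrative model only, not the authors' implementation; the names mirror Table 2, and the class and method names are invented for the sketch:

```python
# Illustrative simulation of the relay logic described above (not the
# authors' implementation). True = contact closed / coil energized.
class PageTurnerLogic:
    def __init__(self):
        self.KM1 = self.KM2 = self.KM3 = self.KM4 = False  # contactor coils

    def press_SB2(self):
        """Left page-turning button: electrically interlocked against KM2."""
        if not self.KM2:        # interlock: right side must be off
            self.KM1 = True     # M1 runs
        return self.KM1

    def hit_SQ1(self):
        """Stop on the crank opens NC contact SQ1 -> KM1 drops out, M1 stops."""
        self.KM1 = False

    def hit_SQ2(self):
        """Crank stop closes NO contact SQ2 -> KM3 energized, M3 runs."""
        self.KM3 = True

    def hit_SQ4(self):
        """Page-pressing limit SQ4 opens -> KM3 de-energized, M3 stops."""
        self.KM3 = False

logic = PageTurnerLogic()
logic.KM2 = True                 # right side already running...
assert not logic.press_SB2()     # ...so the interlock blocks M1
logic.KM2 = False
assert logic.press_SB2()         # now M1 starts
logic.hit_SQ2(); assert logic.KM3        # press motor follows the crank
logic.hit_SQ1(); assert not logic.KM1    # reset limit stops M1
logic.hit_SQ4(); assert not logic.KM3    # press limit stops M3
```

The same pattern, mirrored, covers the right-hand side (SB3/KM2/SQ5 and SQ3/KM4/SQ8).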
3 Design of Power Supply Mode

3.1 Plug-In Operation Mode

In this design, 24 V DC micro motors are selected as the power source. For mains operation, the user simply plugs the charger into the power socket of the page turner. The charger and power socket were designed with reference to those of comparable readers on the market. The power lines of the left and right page-turning motors are connected to the power socket through the button switches, and the power lines of the left and right page-pressing motors are connected to the power socket through the travel switches. The wiring diagram is shown in Fig. 3 [8].
Fig. 3. Schematic diagram of plug-in wiring
3.2 Battery Powered
There is a battery box on the back of the page turner; the user simply inserts batteries according to the marked polarity to replace plug-in operation. The motor wiring is the same as in the plug-in mode [9] (Fig. 4).
Fig. 4. Battery wiring diagram
4 Infrared Remote Control

Because infrared remote control devices are small, low in power consumption, functionally capable and cheap, an infrared remote control switch is adopted in this design. A typical infrared remote control system consists of a transmitting part and a receiving part, with dedicated encoder/decoder integrated circuits performing the control operations. The transmitting part comprises a keyboard matrix, a coding modulator and an LED infrared emitter; the receiving part comprises a photoelectric conversion amplifier and demodulation and decoding circuits [10].
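As an illustration of the encoder/decoder scheme mentioned above, the widely used NEC format (a common choice for such dedicated chips, though the paper does not name the specific IC) encodes each key as an address byte and a command byte, each followed by its bitwise inverse for error checking:

```python
# Sketch of NEC-style infrared encoding/decoding (illustrative only; the
# paper does not specify which chip is used). Frames are (mark, space)
# pairs of IR carrier burst and gap durations in microseconds.
def nec_encode(address: int, command: int):
    frame = [(9000, 4500)]                      # leading burst + space
    for byte in (address, address ^ 0xFF, command, command ^ 0xFF):
        for i in range(8):                      # LSB first
            bit = (byte >> i) & 1
            frame.append((562, 1687 if bit else 562))
    frame.append((562, 0))                      # final stop burst
    return frame

def nec_decode(frame):
    # A long space (~1687 us) encodes 1, a short space (~562 us) encodes 0.
    bits = [1 if space > 1000 else 0 for _, space in frame[1:-1]]
    out = [sum(b << i for i, b in enumerate(bits[k:k + 8]))
           for k in range(0, 32, 8)]
    addr, naddr, cmd, ncmd = out
    assert naddr == addr ^ 0xFF and ncmd == cmd ^ 0xFF  # error check
    return addr, cmd

assert nec_decode(nec_encode(0x20, 0x15)) == (0x20, 0x15)
```

The inverted-byte redundancy is what lets the receiving decoder reject corrupted frames before acting on a button press.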
5 Conclusion

This automatic page turner was designed with portability and low cost as its goals, and its simple mechanism and control mode achieve exactly those characteristics. Besides fulfilling the basic page-turning function, it has the following features:

(1) Automatic control circuit. The user operates the device with only three buttons: turn forward, turn backward, and emergency power-off.
(2) The main working mechanism is almost entirely hidden in the frame, covering only a small area of the pages. The overall design is foldable: the device is only slightly larger than A4 paper and becomes a small box after folding.
(3) It can turn pages quickly.
(4) It adapts well to page size and can turn pages with widths between 8 and 15 cm. It can run either on mains power or on batteries.
(5) A wireless remote control mode allows users to operate it from a distance.
References
1. Mingyue, C.: Research on motion control of uncertain wheeled mobile robot in complex environment. Chongqing University (2012). (in Chinese)
2. Han, L.: Research on open robot motion controller based on DSP+FPGA. South China University of Technology (2013). (in Chinese)
3. Yongqian, Z.: Design of embedded motion controller based on ARM and motion control chip. Harbin Engineering University (2013). (in Chinese)
4. Zhou, Z.: Design and experimental research of robot multi-axis motion controller based on FPGA. South China University of Technology (2013). (in Chinese)
5. Chunhui, W.: Design of embedded motion control system for four-axis robot. Zhejiang University of Technology (2014). (in Chinese)
6. Kan, W.: Design of automatic book reader. Mech. Des. 25, 317–319 (2010). (in Chinese)
7. Zongze, W., Tianjue, L.: Practical Handbook of Mechanical Design, pp. 88–92, 166–167, 878–879. Chemical Industry Press, Beijing (2010). (in Chinese)
8. Xianze, X., Shuhua, D.: Basics of Precision Machinery Design, 3rd edn. Electronic Industry Press, Beijing (2015). (in Chinese)
9. Pan, F., Bai, Y.: The measurement for rotary axes of 5-axis machine tools. Shanghai Second Polytechnic University, Industrial Engineering (2016). (in Chinese)
10. Jywe, W.-Y., Liu, C.-H.: Development of a measurement system for five-axis CNC machine tools. National Formosa University (2010). (in Chinese)
Development Trend and Content Status of Physical Education Undergraduate in Normal Universities Based on Big Data

Hui Du, Ji Zhu, and Ke Sun(&)

School of Physical Education, Yunnan Normal University, Kunming 650500, Yunnan, China
[email protected]
Abstract. With the development of the times, guided by the mobile cloud Internet, computer networks and other high technologies, education is gradually shedding the shackles of the traditional teaching mode, breaking free of its limitations in space and time, and developing intelligent thinking. In this paper, the author uses statistical methods, based on data from compulsory physical education (PE) courses, to introduce the current situation of PE in colleges and universities (CAU), and puts forward suggestions for the improvement of PE in Chinese CAU. The experimental results show that the development trend of undergraduate PE in Chinese normal universities has greatly improved: students' enthusiasm for PE has increased by 13%, and PE is developing in an ever better direction.

Keywords: Big data · Physical education in normal universities · Cognition of teaching skills
1 Introduction

How PE teaching should be evaluated, by whom, and with what content has a direct impact on the quality of college PE and on the progress and development of PE teachers and students [1]. The current evaluation of PE teaching in CAU is based mainly on students' online teaching evaluations at the end of the semester: students score the PE teacher's teaching against the evaluation items formulated by the school, and these scores become the teacher's PE teaching score for the semester. This model is seriously biased, which is one of the factors behind the lagging development of PE reform in China's CAU [2]. With the rise of big data (BD) applications, automated PE evaluation in CAU has become possible. BD can provide a large amount of data to support PE evaluation, making it more scientific and fair; at the same time, driven by BD, PE evaluation can feed its results back more promptly [3]. BD challenges the current PE evaluation framework in terms of technology, institutions and systems. It is therefore explored to build an evaluation system that meets

© The Editor(s) (if applicable) and The Author(s), under exclusive license to Springer Nature Singapore Pte Ltd. 2021
M. Atiquzzaman et al. (Eds.): BDCPS 2020, AISC 1303, pp. 488–493, 2021. https://doi.org/10.1007/978-981-33-4572-0_71
the requirements of the BD era, is reliable and operable, and truly promotes the development of students and PE teachers; such a PE evaluation system is very necessary [4, 5]. The innovation of this article is to explore, against the background of BD applications, the construction of a college PE teaching evaluation system for Shanxi Province, comprehensively using the relevant knowledge of BD and PE evaluation to work out how such a system can meet the requirements of the BD era [6].
2 BD and PE in Normal Universities

2.1 BD and Cloud Computing Technology

BD can effectively record and retrieve data for enterprise operations, and can distill important information from large amounts of complex data. Cloud computing is a computing model that provides shared computing services on demand. A cloud platform can conveniently provide applicants with the corresponding network services and store the corresponding functional data on its servers [7]. Combining BD with cloud computing not only builds a platform for energy-saving and efficient information systems, but also allows the operational information infrastructure to develop flexibly [8]. Cloud computing technology has its own advantages [9]: it provides more ways to obtain data and can identify much valuable information, and compared with traditional data processing methods it is more reliable and safer, effectively avoiding problems such as loss of user data. Cloud computing services are now general purpose, customers have low equipment requirements, and use is convenient; the technology is Internet based and can easily share data between different devices [10].

2.2 PE in Normal Universities
(1) Development trend. Modern science, technology and production are characterized by systematization and intensification, which is reflected in curriculum integration in higher education. In the future, therefore, the discipline system and curriculum of the sports major will stand at the height of sports science as a whole, forming its own distinctive discipline curriculum system based on the mutual penetration and coordinated development of theoretical and technical disciplines, and breaking away from traditional professional concepts (such as the model of organizing disciplines by sports event). The curriculum will develop from a vertical, deep type to a horizontally broad type, using the theories, methods and advanced technology of the discipline to organize implementation and effectively control the whole process of technical and theoretical discipline construction. (2) Status of content. The level of popularization of PE in China is low, and the methods of
education and scientific research are backward, while China's investment in education is low. Teachers in elementary education are trained by higher normal schools, whose educational theory is outdated and whose content is monotonous. In terms of research methods, we have long been accustomed to observing and describing educational phenomena, summarizing experience, and qualitatively analyzing written materials. Long-term, systematic educational scientific experimentation is lacking, so qualitative and quantitative analysis are not organically combined, and the exploration of certain aspects of educational science lacks an important scientific basis of universal significance.
3 The Experimental Framework of College PE Professional Courses

Curriculum classification: the courses are divided into three types: basic compulsory courses that students must master according to their professional training goals, restricted elective courses determined by different educational requirements, and free elective courses that cover students' interests, in a ratio of 6:3:1. This classification basically divides the curriculum into required and elective courses: public and professional compulsory courses are mainly required, while special courses are mainly restricted and free electives. This classification suits teaching management and is adopted by most universities. Some CAU instead classify courses by department according to their actual situation, dividing the curriculum into political education, theoretical education, human biology, sports practice, and methods and tools; others divide the courses into general courses, basic courses, vocational education courses, and compulsory courses.

Talent training: the curriculum system still reflects the characteristic of "one specialization, multiple abilities". Having mastered the basic knowledge, students can take elective courses according to their intended employment direction and hobbies after graduation, meeting the needs of individual development. In particular, popular group-based courses keep appearing, injecting new content into the employment prospects of college PE majors. A few CAU have taken the lead in offering cross-professional elective courses within the planned academic hours, further expanding students' knowledge and ways of thinking.
4 Analysis of the Development of College PE Based on BD

4.1 Analysis of the College PE Curriculum

The ratio of compulsory to elective courses in China's PE curriculum is 72% to 28%, and discipline and technical courses account for 58% and 42% of class time respectively. In the survey of the top 7 CAU, public compulsory courses accounted for 32% and educational courses for 6%, 38% combined; vocational courses accounted for 62%; and discipline and technical courses accounted for 63% and
37% respectively, accounting for 45% and 27% of total class hours, basically in line with the guidance plan. The survey of 43 universities across the country, however, shows large differences: compulsory courses in ordinary classes reach as much as 67% in some universities and as little as 33% in others, a nearly twofold difference, while another category reaches 29% at the highest and 2% at the lowest, a 14-fold gap. The purpose of the national guidance plan is to build a basic framework for training sports professionals, which universities can adapt flexibly to social needs and regional economic and cultural characteristics. The data show, however, that the reform and development of China's PE curriculum system is extremely unbalanced, and some CAU clearly lag behind in reform.

4.2 Cognitive Analysis of College PE Skills
It can be seen from Table 1 that, in the cognition of the importance of general teaching skills, Mandarin (93%) ranks first and language expression ability (88%) second, indicating that most students clearly recognize that teachers should possess these general teaching skills in class. The teacher's teaching attitude refers to the teacher's bearing in the teaching process; it is a comprehensive manifestation of the teacher's expression system and the basic teaching state that all normal-university students need to present during teaching, and most students regard it as an important teaching skill. 17% and 5% of students respectively think that chalk-writing ability and instructional design ability are unimportant, which may be related to the special nature of PE.

Table 1. Cognition of PE teaching skills in CAU (%)

Skill                     Important  General  Unimportant
Mandarin                  93         6        1
Chalk-writing ability     43         39       17
Language expression       88         10       2
Instructional design      83         12       5
Educational organization  81         16       3
Visualizing the table yields the line chart shown in Fig. 1. The analysis shows that students still recognize the most basic qualities of PE teaching in CAU: only 1 student out of 100 thinks Mandarin teaching is unimportant, while 93% think it is very important. In addition, although teaching organization ability matters particularly in PE, only 81% of students think teaching organization and management ability is important, and 16% think it is only generally important.
Fig. 1. Cognitive analysis of college PE teaching
4.3 Analysis of College PE Teaching Evaluation Based on BD

This article summarizes 100 students' evaluations of college PE in the era of BD. The analysis shows that 41 of the 100 students are very satisfied with college PE in the era of BD, accounting for 41% of the total, and 45 are generally satisfied, accounting for 45%. A further 8% are unsure about college PE in the era of BD, only 4% expressed dissatisfaction, and only 2% expressed strong dissatisfaction. The specific results are shown in Fig. 2:
Fig. 2. Evaluation of college PE in the era of BD
5 Conclusions

Constrained by time, energy, actual research capacity and research funding, this survey selected PE undergraduates in Chinese normal universities as its subjects. The results describe the use of the national compulsory PE textbooks for undergraduate PE programs in normal universities, as well as the development trend and content status of college PE teaching, which still require further, deeper and wider investigation. The talent training program reflects the concept and standard of talent training. PE majors in CAU have been committed to cultivating "one specialization, multiple abilities" talents, with clear and accurately positioned training objectives, but problems remain, such as insufficient innovation, insufficiently distinctive features, and inadequate reflection of PE teachers' professional development, which need further improvement.
References
1. Wang, Y., Kung, L.A., Byrd, T.A.: Big data analytics: understanding its capabilities and potential benefits for healthcare organizations. Technol. Forecast. Soc. Change 126, 3–13 (2018)
2. Athey, S.: Beyond prediction: using big data for policy problems. Science 355(6324), 483–485 (2017)
3. Janssen, M., van der Voort, H., Wahyudi, A.: Factors influencing big data decision-making quality. J. Bus. Res. 70, 338–345 (2017)
4. Rathore, M.M.U., Paul, A., Ahmad, A., et al.: Real-time big data analytical architecture for remote sensing application. IEEE J. Sel. Top. Appl. Earth Obs. Remote Sens. 8(10), 4610–4621 (2017)
5. Xu, L., Jiang, C., Wang, J., et al.: Information security in big data: privacy and data mining. IEEE Access 2, 1149–1176 (2017)
6. Mei, J., Moura, J.M.F.: Signal processing on graphs: causal modeling of big data. IEEE Trans. Signal Process. 65(8), 2077–2092 (2017)
7. Zhang, Y., Qiu, M., Tsai, C.W., et al.: Health-CPS: healthcare cyber-physical system assisted by cloud and big data. IEEE Syst. J. 11(1), 88–95 (2017)
8. Zhou, L., Pan, S., Wang, J., et al.: Machine learning on big data: opportunities and challenges. Neurocomputing 237, 350–361 (2017)
9. He, Y., Yu, F.R., Zhao, N., et al.: Software-defined networks with mobile edge computing and caching for smart cities: a big data deep reinforcement learning approach. IEEE Commun. Mag. 55(12), 31–37 (2017)
10. Cai, H., Xu, B., Jiang, L., et al.: IoT-based big data storage systems in cloud computing: perspectives and challenges. IEEE Internet Things J. 4(1), 75–87 (2017)
Image Enhancement of Face Recognition Based on GAN

Zhiliang Zhang(&) and Tianfang Dong

Experiments and Training Center, Dalian Neusoft University of Information, Dalian 116023, Liaoning Province, China
[email protected]
Abstract. Face recognition technology has attracted attention as people pay more and more attention to facial information, and has become a hot research topic. In this experiment the training learning rate is set to 0.0004; 64 images are loaded at random in each cycle, noise is added, and training begins. Each cycle proceeds as follows: the generator G first produces an output, the discriminator D then judges it, and the generation loss and discrimination loss are computed from the outputs of G and D. A face database is formed through feature extraction and training; images are then randomly drawn for detection and recognition, and finally matched against the feature library. The experimental results show that with the original data and 1000 iterations the face recognition accuracy is 88.11%, while training on data augmented with GAN-generated samples raises the accuracy to 93.76%, a significant increase. At present, face verification and recognition remain difficult applications in computer science, and generative adversarial networks have made certain breakthroughs in image generation.

Keywords: GAN technology · Face recognition · Dual-path image confrontation generation network
1 Introduction

Facial image recognition is a biometric technology that uses facial features for identity recognition. In the digital information era, as identity verification becomes increasingly strict and digital image recognition technology develops, facial recognition has gained more and more attention [1, 2]. At present, face recognition methods fall mainly into two types: those based on feature analysis and those based on holistic analysis. Face recognition involves many subjects, such as mathematics, biology, computer vision and brain science, and its research methods are complex and changeable. This paper applies GAN to face recognition and conducts exploratory research [3, 4].

Goodfellow et al. proposed the GAN model, which uses simple backpropagation and dropout in training and does not require approximate inference processes such as complex Markov chains; it uses a distribution

© The Editor(s) (if applicable) and The Author(s), under exclusive license to Springer Nature Singapore Pte Ltd. 2021
M. Atiquzzaman et al. (Eds.): BDCPS 2020, AISC 1303, pp. 494–500, 2021. https://doi.org/10.1007/978-981-33-4572-0_72
for direct sampling without pre-modeling, which simplifies the training process to a certain extent [5]. The drawback of this kind of modeling, however, is that it is too free: for larger pictures with more pixels, the simple GAN-based method is not controllable, and the generated data differ considerably from the original data. Mirza et al. proposed the CGAN model, a GAN with conditional constraints. A conditional variable y is introduced into the modeling of generator G and discriminator D, and this additional information y conditions the model to guide data generation. With this additional information, CGAN turns the purely unsupervised GAN into a supervised model and improves the experimental results, though problems such as instability, blur, and spots remain [6].

In this article, the adversarial network is used for image generation to achieve face recognition image enhancement. Traditional methods apply image processing, such as adding noise or changing the image angle, to generate new images [7, 8]. A generative adversarial network, in contrast, does not merely apply a small linear transformation, a little noise or a rotation to the original image; it generates images that did not exist before. For example, after processing images without expressions, the expressions in those images become sad while the other parts of the images remain unchanged. Generally speaking, the larger the amount of data, the better the model can learn the patterns of the data, and the better the effect [9, 10].
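The per-cycle loss computation summarized in the abstract (generator output, discriminator judgment, then generation and discrimination losses) can be sketched with the standard binary cross-entropy formulation. This is an illustrative sketch, not the paper's implementation; the toy discriminator outputs are made up:

```python
import math

def bce(p, label):
    """Binary cross-entropy for one prediction p in (0, 1)."""
    eps = 1e-12
    return -(label * math.log(p + eps) + (1 - label) * math.log(1 - p + eps))

def gan_losses(d_real, d_fake):
    """Standard GAN losses given discriminator outputs D(x) and D(G(z)).

    The discriminator wants D(x) -> 1 and D(G(z)) -> 0; the generator
    wants D(G(z)) -> 1 (the non-saturating generator loss)."""
    d_loss = sum(bce(p, 1) for p in d_real) / len(d_real) \
           + sum(bce(p, 0) for p in d_fake) / len(d_fake)
    g_loss = sum(bce(p, 1) for p in d_fake) / len(d_fake)
    return d_loss, g_loss

# A near-perfect discriminator yields a tiny d_loss and a large g_loss:
d_loss, g_loss = gan_losses(d_real=[0.99, 0.98], d_fake=[0.01, 0.02])
assert d_loss < 0.1 and g_loss > 3.0
# At equilibrium D outputs 0.5 everywhere: d_loss = 2*ln 2, g_loss = ln 2.
d_eq, g_eq = gan_losses([0.5, 0.5], [0.5, 0.5])
assert abs(d_eq - 2 * math.log(2)) < 1e-9 and abs(g_eq - math.log(2)) < 1e-9
```

In a real training loop these two losses are minimized alternately, updating D with d_loss and G with g_loss for each batch of 64 images.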
2 Research on Image Enhancement of Face Recognition Based on GAN

2.1 The Meaning of Generative Adversarial Networks

A Generative Adversarial Network (GAN for short) comprises two models: a generative model (also called the generator) and a discriminative model (also called the discriminator). Data produced by the generator is passed to the discriminator for judgment; the result is fed back to the generator, which adjusts itself and produces new data for the discriminator, and the cycle repeats until the discriminator can no longer distinguish the generator's data, at which point equilibrium is reached.

2.2 Technical Background of the Dual-Path Adversarial Generative Network
Nowadays, computer vision algorithms outperform humans on face recognition across multiple benchmark data sets, but in actual application scenarios the recognition problems caused by pose have not yet been solved well. In human visual processing, the overall facial structure is usually inferred from the observed facial contours, with emphasis on facial details such as the shape, structure and location of the facial features, and these details are depicted in a facial structure map.
The generative model occupies an important position in unsupervised deep learning: by learning the essential characteristics of real data, it can describe the distribution of the sample data and generate new data similar to the training samples. A Generative Adversarial Network (GAN) is one kind of generative model, comprising a generative model and a discriminative model. During training, one party is fixed while the parameters of the other are updated, alternating iteratively, each trying to maximize the other's error, until the sample distribution is finally estimated. Sample image repair can then perform semantic-level repair based on the area surrounding the missing region; compared with the traditional nearest-neighbor method, this scene-completion approach provides better features.

2.3 Research on Dual-Path Adversarial Generation and Face Recognition
Based on the human visual cognition process and the frontier deep learning theory of generative adversarial networks (GANs), a dual-path generative adversarial network based on global and local perception is proposed. One path reasons about the global topology and the other about local texture; each path generates its own feature maps, one set responsible for global structure generation and the other for local detail generation. The discriminator then fuses the two into the final face image. In addition, introducing the distribution information of frontal facial features into the adversarial framework imposes a useful constraint during recovery, ensuring that the generated facial images are realistic and natural. Exploiting the natural mirror symmetry of the face, a symmetric principal component analysis algorithm is introduced into the model so that feature selection is performed under interference conditions such as differing viewing angles and lighting, which enhances face recognition and significantly improves the recognition rate. To integrate the relationship between the overall and local features of the face, a pre-trained face feature extraction network is used, a subspace face recognition algorithm combining global and local features operates in the compressed feature space, and principal component analysis extracts the global features of the face. For the local features, an automatic weighting algorithm based on the degree of feature deviation of each sub-block is proposed. Finally, based on the principle of fuzzy synthesis, the global and local data are fused, combining the complementary global and local information of the face image to give the final recognition result. While producing realistic, high-definition frontal face synthesis, the method significantly improves the face recognition rate.
2.4 Model of the Generative Adversarial Network
According to the concept of GAN, D is essentially a binary classifier, so G can be fixed first while D is optimized. As in most binary classification models based on the sigmoid function, optimizing the discriminator D actually amounts to continuously minimizing the cross entropy, so its objective function can be written as:

L_D(D, G) = -\frac{1}{2}\,\mathbb{E}_{x \sim p_{data}(x)}[\log D(x)] - \frac{1}{2}\,\mathbb{E}_{z \sim p_z(z)}[\log(1 - D(G(z)))]  (1)

where G(z) denotes the data produced by the generator and D(\cdot) denotes the probability assigned by the discriminator. The actual optimization of D differs from an ordinary two-class model, however, because the data presented to D comes not only from the real sample distribution p_{data}(x) but also from the data generated by G. To minimize formula (1) and obtain the optimal solution, we rewrite it in integral form over the sample space:

L_D(D, G) = -\frac{1}{2}\int_x p_{data}(x) \log D(x)\,dx - \frac{1}{2}\int_z p_z(z) \log(1 - D(G(z)))\,dz  (2)
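As a small numerical check on formula (1), the loss can be evaluated directly; this is only a sketch, and the helper name and sample probabilities are illustrative, not from the paper. At the equilibrium described earlier, where the discriminator cannot tell real from generated data and outputs 0.5 for every input, L_D equals log 2:

```python
import math

def discriminator_loss(d_real, d_fake):
    """Cross-entropy objective of formula (1):
    L_D = -1/2 E[log D(x)] - 1/2 E[log(1 - D(G(z)))]."""
    term_real = sum(math.log(p) for p in d_real) / len(d_real)
    term_fake = sum(math.log(1.0 - p) for p in d_fake) / len(d_fake)
    return -0.5 * term_real - 0.5 * term_fake

# At equilibrium D(.) = 0.5 everywhere, so L_D = log 2 (about 0.693).
loss_at_equilibrium = discriminator_loss([0.5] * 4, [0.5] * 4)
```

A more confident discriminator (high D on real data, low D on fakes) drives the loss below this equilibrium value, which is what the alternating optimization exploits.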
3 GAN-Based Face Recognition Image Enhancement Experiment
3.1 Experimental Steps
Using a deep convolutional generative adversarial network for data enhancement requires three steps. The first step is to collect raw data samples, the second is to build the training environment, and the third is to train on the raw data and generate data with the GAN.
3.2 Model Training
The learning rate is set to 0.0004 and 64 images are randomly loaded per cycle. The original image size is 224 × 224; during the run, the center of each image is cropped and scaled to 48 × 48 pixels, and noise is then added before training starts. One cycle proceeds as follows: first the generator G produces its output, then the discriminator D performs the discrimination; the generation loss and the discrimination loss are computed from the outputs of G and D; finally the weight parameters are optimized through the back-propagation algorithm and the next cycle begins. A test image is output every 25 cycles.
3.3 Face Feature Extraction
First, the image is filtered in preprocessing, for example with simple histogram filtering, and facial feature information is obtained in a simple way. For instance, the gray value of the eyebrows is generally darker than that of the surrounding skin; the eyes, nose, and mouth can be located in the same way. At the same time, the gray-level threshold can be adjusted to make the detected features stand out from the gray level of the surrounding area. Facial localization is the basis and premise of face recognition: before recognition, the location of the face must be found in the image. The feature vector obtained after feature extraction is shown in Table 1; the unit of the feature vector data is the pixel.

Table 1. Feature vector data of a human face
Characteristic data   Image
Pupillary distance    58
Inner eye width       54
Outer eye width       32
Nose width            46
Mouth width           39
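A minimal sketch of matching against such a feature library, assuming Euclidean distance in the Table 1 feature space (pupillary distance, inner and outer eye width, nose width, mouth width, in pixels); the subject IDs and the second vector are invented for illustration:

```python
import math

# Illustrative feature library keyed by subject ID; each value follows the
# Table 1 layout. Only the first vector comes from the table.
feature_library = {
    "subject_01": [58, 54, 32, 46, 39],   # vector from Table 1
    "subject_02": [61, 50, 35, 44, 42],   # made-up second entry
}

def match_face(query, library):
    """Return the library ID whose vector has the smallest Euclidean
    distance to the query vector."""
    def dist(a, b):
        return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))
    return min(library, key=lambda k: dist(query, library[k]))
```

A query vector close to a stored one is assigned that identity, which is the matching step referred to in Sect. 4.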
4 Discussion on GAN-Based Face Recognition Image Enhancement
(1) After the facial features are classified, the results are input for training. This experiment uses the Yale face database because it contains multiple face images of the same person with different expressions. Through feature extraction and training on this database, a face feature library is formed. Images are then extracted at random for detection and recognition, and finally matched against the feature library. To verify the effectiveness of the GAN model in this paper, a comparative experiment was run against a traditional matching method. The results of the traditional method and of the GAN matching method are shown in Table 2 and Fig. 1.

Table 2. Comparison of results between the traditional method and the GAN method
Number of images   Match rate (%)                    Time-consuming (s)
                   Traditional method   GAN method   Traditional method   GAN method
24                 87.6                 93.2         34                   28
48                 88.1                 93.5         38                   30
64                 89.4                 94.1         44                   32
(2) After the neural network model is built, it is trained both with the original data and with data augmented by GAN-generated samples, and the results are compared. At 1000 iterations, training with the raw data gives a face recognition accuracy of 88.11%; training with the GAN-augmented data gives 93.76%, a significant increase. The accuracy comparison between the original data and the GAN data is shown in Fig. 2.

Fig. 1. Comparison of results between the traditional method and the GAN method

Fig. 2. Comparison of accuracy between original data and GAN data
5 Conclusion
The generative adversarial network can synthesize a frontal view from a side-face image, and the synthesized view retains the identity features while remaining lifelike. To deal with the inherent information loss in image dimension conversion, the data distribution obtained from adversarial training is combined with face domain knowledge to restore the missing information more accurately. In this experiment, the generator G produces its output first, the discriminator D then discriminates, the generation loss and the discrimination loss are computed from the outputs of G and D, the weight parameters are optimized through back-propagation, and the next cycle begins, with a test image output every 25 cycles. The experimental results show that face recognition accuracy is 88.11% when the original data is used for 1000 iterations, while training with data augmented by GAN-generated samples raises the accuracy to 93.76%, a significant increase. As a generative model, GAN does not directly estimate the distribution of the data samples; instead it learns to estimate the underlying distribution and to generate new samples from the same distribution, which has great application value in the fields of image processing and visual computing.
Acknowledgements. Natural Science Foundation of Liaoning Province (2019-MS-013).
Design of Western Yugur Language Speech Database Shiliang Lyu(&), Fan Bai, and Lin Na Key Laboratory of China’s Ethnic Languages and Information Technology of Ministry of Education, Northwest Minzu University, Lanzhou, Gansu, China [email protected]
Abstract. The Yugur nationality is one of the ethnic groups with a small population in China, and it has its own language. In the course of its development, Yugur split into two varieties, Eastern Yugur and Western Yugur, and its written form has been lost. Yugur currently has very few speakers and is an endangered language, and the special properties shown in its evolution make Yugur research of great value. Starting from the design of a speech database, and with the aim of protecting and preserving the existing Yugur language, this article takes the Western Yugur speech database as an example, introduces its design, and proposes standards for Western Yugur speech signals and parameter sets. Keywords: Western Yugur language · Phonetic acoustic analysis · Experimental phonetics · Speech database
1 Introduction
The Yugur ethnic group is one of the 56 ethnic groups of China and has a small population, mainly distributed in the northern foothills of the Qilian Mountains in the middle of the Hexi Corridor in Gansu Province. Language is an important cultural carrier, recording and reflecting the characteristics of a national culture. Especially for Yugur, which has lost its written form, multi-modal recording and preservation of its existing pronunciation is of great significance and value for the preservation of the national culture. In addition to Yugur, the Yugur people also use Chinese as a communication tool [1]. Due to historical and geographical reasons, Yugur is an endangered language: the language situation is complicated, the speaker population is very small, and there is a difference between the eastern and western varieties. Eastern Yugur belongs to the Mongolian branch of the Altaic language family, while Western Yugur belongs to the Turkic branch [2]. The establishment of a speech database can play a very important role in language protection. Especially for endangered Western Yugur, with its small speaker population, it is urgent to establish a phonetic database to preserve the language. The speech database contains two parts: signals and parameters. In addition to acoustic signal parameters, it also includes speech physiological signals and parameters.
© The Editor(s) (if applicable) and The Author(s), under exclusive license to Springer Nature Singapore Pte Ltd. 2021 M. Atiquzzaman et al. (Eds.): BDCPS 2020, AISC 1303, pp. 501–508, 2021. https://doi.org/10.1007/978-981-33-4572-0_73
502
S. Lyu et al.
The establishment of the Western Yugur phonetic database can provide parameter references and data support for theoretical research on Western Yugur and objectively describe its pronunciation phenomena. Using the acoustic and physiological parameters provided by the database and the research methods of experimental phonetics, acoustic analysis of Western Yugur speech has been carried out. The research results can enrich and amend several interpretations and theories of traditional Western Yugur research. For example, fricative vowels are a special phonetic phenomenon in Western Yugur, and scholars at home and abroad are still divided on them [3]: foreign scholars generally attribute them to consonant change, while domestic scholars believe their production is related to a change in pronunciation [4]. The acoustic parameters extracted through analysis can provide detailed parameters for speech synthesis, thereby improving its effect and quality [5]. Combining the needs of experimental phonetics and computer science, this article introduces the design method and parameter standards of the Western Yugur speech database, starting from speech signal collection, parameter setting, parameter extraction, and storage. Through the acoustic and physiological signals of Yugur in the database, relevant speech parameters can be extracted for language research, providing a new method for the inheritance and protection of Yugur.
2 Speech Signal Collection
To guarantee signal quality and meet the needs of phonetic analysis, the speech signal collection process must be strictly controlled. The collected signals fall into two parts: acoustic signals and physiological signals. Adobe Audition is used for collection, usually in dual-channel mode: the first channel carries the speech (acoustic) signal and the second channel carries the voice (glottal) signal. The speech signal is mainly used for signal analysis and parameter extraction. Acquisition follows two modes. The first takes the word as its unit, and each word is usually collected twice. The second takes the sentence as its unit and is used for long-sentence and discourse analysis; it can collect signals such as texts and folk songs. The voice signal is collected in the same mode as the speech signal and is used to analyze voice type. The acoustic signals capture the physical acoustics of speech, while the physiological signals capture voice, breathing, and tongue-position signals. Speech acoustic analysis mainly uses the speech and voice signals, which can be collected simultaneously on the two channels [6]. Signal acquisition is therefore divided into two parts: speech signal acquisition and voice signal acquisition. The speech acoustic signal is collected with a high-quality microphone and an external sound card, the sampling rate is set to 22,050 Hz, dual-channel mode is selected, and the file is saved in PCM .wav format. The voice signal is collected with an electroglottograph, a non-invasive device with high signal quality that is convenient for later analysis and processing. Its collection parameter settings are the same as for the speech signal.
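The collection settings above (22,050 Hz sampling rate, 16-bit PCM, dual channel) can be reproduced with Python's standard `wave` module; the tone content below is a placeholder, not real speech or electroglottographic data:

```python
import math
import struct
import wave

# Write 0.1 s of a dual-channel 16-bit PCM .wav at 22,050 Hz:
# channel 1 stands in for the speech signal, channel 2 for the voice signal.
RATE = 22050
with wave.open("sample_dual_channel.wav", "wb") as f:
    f.setnchannels(2)       # dual-channel mode
    f.setsampwidth(2)       # 16-bit PCM samples
    f.setframerate(RATE)    # 22,050 Hz sampling rate
    frames = bytearray()
    for n in range(RATE // 10):  # 0.1 s of audio
        speech = int(12000 * math.sin(2 * math.pi * 220 * n / RATE))
        voice = int(12000 * math.sin(2 * math.pi * 110 * n / RATE))
        frames += struct.pack("<hh", speech, voice)
    f.writeframes(bytes(frames))
```

Reading the file back confirms that the stored format matches the acquisition settings described in the text.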
Fig. 1. Speech acoustic signal
The speech signal collection environment is a professional recording studio with good sound insulation and absorption. During collection, noise interference is strictly avoided so that signal quality meets the requirements of parameter extraction [7]. Adobe Audition is recommended as the acquisition software: mono mode when only the speech signal is collected, and dual-channel mode when speech and voice are collected at the same time. The speech signal must not exceed the amplitude limit of the acquisition software, as shown in Fig. 1, and the duration of each syllable must be guaranteed. Usually four speakers are selected; considering gender, two adult males and two adult females are recommended. Each speaker is required to be a native speaker of Western Yugur who has lived in the area for a long time, with accurate, regular, clear, and normal pronunciation. The pronunciation vocabulary must be established before collection. The selected words should cover all vowel and consonant phonemes of Western Yugur and include all syllable combinations. Since Western Yugur has no written form, the International Phonetic Alphabet is used to design the vocabulary, and the corresponding Chinese characters and the serial number of each example word are marked. During signal acquisition, the procedure is explained to the speaker, and simple phonetic confirmation is performed against the IPA transcription of the collected text, so that the speaker is familiar with all the material before collection begins.
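The phoneme-coverage requirement for the pronunciation vocabulary can be checked mechanically; the inventory and word forms below are invented placeholders, not actual Western Yugur data:

```python
# Hypothetical phoneme inventory (the real Western Yugur inventory is
# larger and is not reproduced here).
inventory = {"a", "e", "i", "o", "u", "p", "t", "k", "s", "z"}

# Word list entries: (serial number, phoneme sequence of the IPA form).
word_list = [
    ("001", ["p", "a", "s"]),
    ("002", ["t", "e", "k", "i"]),
    ("003", ["z", "o", "t", "u"]),
]

def missing_phonemes(inventory, word_list):
    """Return the sorted inventory phonemes not covered by any word,
    so the vocabulary can be extended before recording starts."""
    covered = {ph for _, phones in word_list for ph in phones}
    return sorted(inventory - covered)
```

An empty result means the word list satisfies the coverage requirement stated above; otherwise the returned phonemes show which example words still need to be added.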
3 Voice Signal Processing and Parameter Setting
The voice database contains two parts, signals and parameters, which are set according to a unified standard to facilitate later retrieval and use.
3.1 Speech Signal Processing
When the speech signal is collected, signals are recorded in the order of the example words in the pronunciation vocabulary, and the collected signals must then be strictly screened [8]. Adobe Audition is used to segment the signal, and signals with excessive noise or energy beyond the normal range are deleted. If both recordings of a word fail to meet the standard, the signal is re-collected. Segmented signals are stored in units of words, named with the Latin transliteration of the IPA form of the Western Yugur word and the sequence number from the pronunciation word list.
3.2 Vowel Parameter Setting
When a vowel is pronounced, the source signal generated by vocal-cord vibration acquires different resonance characteristics as the shape of the vocal tract changes: the spectrum changes greatly, some parts being strengthened and others weakened, thus forming different vowel sounds. The difference in vowel timbre is mainly due to the different frequencies of the formants, which appear in the spectrum as formant positions. Vowel pitch is mainly related to the frequency of vocal-cord vibration, reflected in the fundamental frequency of the pronunciation. Lightness and stress of vowels are mainly manifested in sound intensity. The acoustic parameters of Western Yugur vowels are shown in Table 1; the parameters are listed with their full names, abbreviations, and units.

Table 1. Vowel acoustic parameters
Vowel parameters       Abbreviation   Unit
Vowel intensity        VA             dB
Vowel duration         VD             ms
Vowel pitch            VP             Hz
Vowel first formant    VF1            Hz
Vowel second formant   VF2            Hz
Vowel third formant    VF3            Hz

3.3 Consonant Parameter Setting
Consonants in Western Yugur are divided into two categories, unvoiced and voiced, according to whether the vocal cords vibrate, and the consonant parameter settings are classified accordingly. The parameters of unvoiced consonants include sound intensity, duration, and VOT; for voiced consonants the formants are also extracted. The specific parameter settings are shown in Table 2.
Consonants and vowels are collected together during signal acquisition, so the consonants must be segmented before parameter extraction. Because consonants cannot be pronounced alone, care is taken to select words that are not too complex; syllables with a single consonant, and monosyllables, are usually chosen. Before extraction, the consonants to be analyzed are cut out, the parameters are extracted, and the parameters are classified by consonant position for later analysis.

Table 2. Consonant acoustic parameters
Consonant parameters              Abbreviation   Unit
Consonant intensity               CA             dB
Consonant duration                CD             ms
Voice onset time                  VOT            ms
Voiced consonant first formant    CVF1           Hz
Voiced consonant second formant   CVF2           Hz
Voiced consonant third formant    CVF3           Hz

3.4 Voice Parameter Settings
The voice signal is collected with an electroglottograph, which mainly records glottal impedance and thus reflects the opening and closing of the vocal cords. Voice parameters are extracted mainly for vowels and voiced consonants. The commonly used voice parameters are the voice fundamental frequency, the open quotient, the speed quotient, and the fundamental frequency range; the open quotient and speed quotient are dimensionless and expressed as percentages. The specific parameters are shown in Table 3.
Table 3. Voice acoustic parameters
Voice parameters              Abbreviation   Unit
Voice fundamental frequency   VP             Hz
Open quotient                 OQ             %
Speed quotient                SQ             %
Fundamental frequency range   PR             Hz
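Under the usual electroglottographic definitions, which are assumed here since the paper does not give formulas, the Table 3 parameters can be computed from glottal cycle durations:

```python
def open_quotient(open_ms, period_ms):
    """OQ: open-phase duration as a percentage of the glottal cycle."""
    return 100.0 * open_ms / period_ms

def speed_quotient(opening_ms, closing_ms):
    """SQ: opening-phase duration relative to closing-phase duration,
    expressed as a percentage to match the unit given in Table 3."""
    return 100.0 * opening_ms / closing_ms

def fundamental_frequency(period_ms):
    """Fundamental frequency in Hz from the cycle duration in ms."""
    return 1000.0 / period_ms
```

For example, a glottal cycle of 8 ms corresponds to a fundamental frequency of 125 Hz, and an open phase of 4 ms within that cycle gives an open quotient of 50%.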
The voice signal is collected with the glottal acquisition device held close to the larynx and kept still during pronunciation. Voice acquisition is carried out at the same time as speech acquisition, and the acquisition mode and syllable types are consistent with those of the speech signal. In parameter extraction, only voiced segments are processed, mainly vowels and voiced consonants, since the opening and closing of the vocal cords is what the voice signal reflects.
Besides Praat, the kay5138 voice analysis software can be used to extract voice parameters, and a self-developed program can also extract them according to the parameter types.
4 Speech Parameter Extraction
Speech parameter extraction follows the same parameter settings and is divided into two parts: speech acoustic parameters and voice parameters. The speech parameters comprise vowel and consonant parameters; the voice parameters comprise vowel voice parameters and voiced-consonant voice parameters. Parameters are extracted with Praat: drawing on phonetic knowledge and research experience, the vowels and consonants are segmented and the portion of the speech to be measured is determined. For vowels, the intensity, duration, and formant parameters are extracted. The formant is the main feature of timbre in vowel acoustics; usually five formants are extracted per vowel. The first and second formants play the major role in timbre, and the third formant is related to lip rounding. Although there is no clear account of the fourth and fifth formants, synthesis experiments have shown that they contribute to changes in timbre. For consonants, the VOT, duration, intensity, and voiced-consonant formant parameters are extracted. The voice onset time is the interval from the release of the consonant closure to the onset of vocal-cord vibration, and it can be used to judge whether a consonant is unvoiced or voiced. For example, the VOT of a voiced stop is less than 0; a negative VOT indicates that the vocal cords begin to vibrate before the closure is released. Among voiced formants, nasal formants are special, and the collection of their formant parameters must be controlled according to the nasal articulation. Voice parameters are used mainly to study the source signal, which reflects the vibration of the vocal cords.
The voice type has a direct impact on some special pronunciation patterns. For voice parameter extraction, Praat can be used to segment the signal, and an analysis script then extracts parameters such as the fundamental frequency, open quotient, speed quotient, and fundamental frequency range. According to the needs of the research, the maximum, minimum, and average values of each parameter can be computed, the distribution range calculated, and the results saved in the final parameter table.
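The VOT criterion and the summary statistics described above can be sketched as follows; the sample VOT values are illustrative, not measured data:

```python
import statistics

def is_voiced_stop(vot_ms):
    """Per the text, a negative VOT (vocal cords vibrating before the
    closure is released) indicates a voiced stop."""
    return vot_ms < 0

def summarize(values):
    """Maximum, minimum, and mean, as saved into the final parameter table."""
    return {"max": max(values), "min": min(values),
            "mean": statistics.mean(values)}

vot_samples = [-85.0, -60.0, 15.0, 30.0]  # illustrative VOT values in ms
```

Applying `summarize` per parameter type gives the distribution figures mentioned above before they are written to the parameter table.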
5 Save Signals and Parameters
The voice signals are saved in their original form after screening, mainly for later signal analysis, which is chiefly acoustic [9]. The first step is spectrographic analysis, examining the voice bars, vertical striations, noise patterns, and the preceding closure segments. The second is analysis of the closure duration, comparing it with the air pressure at the time of closure to study the relationship between the place of articulation and the closure time. The last is formant analysis, which studies the formant transitions into the following vowel and compares the starting position of the formants with the average vowel formant values, in order to study the speech waveform, formants, energy, and fundamental frequency. After extraction, all parameters are organized and saved by type [10]. The main parameters include the first formant data, the second formant data, the vowel duration, and the consonant duration; the acoustic parameters also include energy and voice onset time (VOT). The parameters of each type are then saved in database tables; an Excel table is recommended, for later statistical analysis and acoustic research.
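A sketch of saving one parameter table by type, with a CSV file standing in for the recommended Excel table; the column names and values are illustrative:

```python
import csv

# One table per parameter type, as recommended in the text. Each row
# holds a word ID and a few of its extracted parameters.
vowel_rows = [
    # (word id, VF1 in Hz, VF2 in Hz, VD in ms), illustrative values
    ("001", 640, 1190, 142),
    ("002", 310, 2280, 128),
]

with open("vowel_parameters.csv", "w", newline="") as f:
    writer = csv.writer(f)
    writer.writerow(["word_id", "VF1_Hz", "VF2_Hz", "VD_ms"])
    writer.writerows(vowel_rows)
```

Consonant and voice parameter tables can be written the same way, one file per type, which keeps later statistical analysis simple.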
6 Conclusions
The Western Yugur speech database is an important reference for speech signal analysis and acoustic parameter analysis. From signal collection to parameter setting and storage, this article takes Western Yugur as an example and provides a new method for the inheritance and protection of the Yugur language. The completed database also has reference value for the inheritance and protection of other endangered languages. The speech acoustic database can be used by language learners: in the form of acoustic signals and parameters, it compensates for the difficulty of learning a language, such as Western Yugur, that has no written form. Researchers can use the methods described here to establish speech databases with a unified standard, realizing the sharing of language resources and providing data and parameters for language research.
Acknowledgements. This research was financially supported by the Northwest Minzu University 2019 annual basic scientific research funds of the central university funding projects (Grant No. 31920190111). It is also a stage result of a National Social Science Fund project: experimental phonetics research on the inheritance and protection of traditional Yugur folk songs.
References 1. Chen, Z.: Research on Yugur Language in Western China. China National Photography Art Publishing House, Beijing (2004) 2. Chen, Z.: Yugur people and their language. J. Xinjiang Univ. (Phil. Soc. Sci. Ed.) (Z1), 72– 82 (1977) 3. Zhong, J.: The historical status and current use of Yugur language in the west of China. J. Northwest Univ. Nationalities (Phil. Soc. Sci. Ed. Chin.) (2000) 4. Zhong, J.: A study of Yugur language in China and Western China in the 20th century. Lang. Translation (03), 6–12 (2000) 5. Shiliang, L., Luxin, Z.: The application of acoustic analysis in the study of Yugur traditional folk songs. In: International Conference on Social Network (2017) 6. Pu, Y., Yang, J., Wei, H., et al.: A voice recognition-oriented mandarin database of Yunnan ethnic accents. Comput. Eng. (17), 87–89 (2003) 7. Feng, Y.: Data model and implementation technology of speech database. Comput. Res. Dev. 030(010), 9–15 (1993) 8. Gao, Y., Gu, M., Sun, P., et al.: Design of multi-purpose Chinese dialect phonetic database. Comput. Eng. Appl. 48(5), 118–120 (2012) 9. Kanda, N., Sumiyoshi, T., Obuchi, Y.: Search system and search method for speech database. US (2009) 10. Haris, B., Pradhan, G., Misra, A., et al.: Multi-variability speech database for robust speaker recognition. In: 2011 National Conference on Communications (NCC). IEEE (2011)
Design of Intelligent Three-Dimensional Bicycle Garage Based on Internet of Things Chunyu Mao(&), Chao Zhang, Yangang Wang, and Yingxin Yu Jilin Engineering Normal University, Changchun, Jilin, China [email protected]
Abstract. To solve the problems of insufficient bicycle parking space, disorderly parking, and low safety, this paper designs an intelligent three-dimensional bicycle garage based on the Internet of Things. The garage adopts a revolving three-dimensional structure in which a motor drives a gear-chain mechanism to park bicycles vertically. The control system uses an STM32F103 single-chip microcomputer, with photoelectric sensors for vehicle detection and frame position location. The garage uses a ZigBee wireless communication module for data transmission between a single garage and the system platform. Test results show that the intelligent three-dimensional bicycle garage is fast, convenient, and reliable, and has great commercial value. Keywords: Internet of Things · Whirlpool · STM32F103 microcontroller · ZigBee
1 Introduction
With traffic congestion, serious pollution, and energy consumption threatening industrialized countries and the developing world alike, reducing vehicle exhaust emissions has become one of the themes of environmental protection in today's society. For this reason, countries vigorously advocate bicycles as a means of transportation. In our country, bicycles are one of the important means of transportation in people's lives. The increase in bicycle usage has brought a series of urban parking problems: parking space is small, bicycles are easily damaged, parking safety is low, and random parking affects the appearance of the city; management of parking spots is not strict, and theft is frequent. At present, many large and medium-sized cities already have a public bicycle rental industry, known as shared bicycles, but insufficient access to bicycles and small station capacity have become common problems. In addition, most bicycles are parked in the open air, which accelerates their rate of damage and scrapping. Therefore, to popularize bicycles, the parking problem must first be solved, which requires not only enough parking space but also safe and convenient parking [1, 2].
© The Editor(s) (if applicable) and The Author(s), under exclusive license to Springer Nature Singapore Pte Ltd. 2021 M. Atiquzzaman et al. (Eds.): BDCPS 2020, AISC 1303, pp. 509–516, 2021. https://doi.org/10.1007/978-981-33-4572-0_74
C. Mao et al.
The revolving three-dimensional bicycle garage designed in this paper can make good use of the space and solve the problem of difficult bicycle parking caused by too many vehicles in many big cities.
2 The Composition of the Intelligent Three-Dimensional Garage
The three-dimensional bicycle garage designed in this paper adopts a revolving structure; a photograph of the model (15:1 scale) is shown in Fig. 1. The garage includes a garage support structure, a bicycle frame structure, a rack-and-pinion combined structure, a connection structure between the motor and gears, a support bearing system, and an electronic control system [3].
Fig. 1. Physical map of three-dimensional bicycle garage model
2.1 Mechanical Structure
2.1.1 Garage Support Frame
The garage support frame mainly supports the whole three-dimensional garage and is directly connected with the gear structure.
2.1.2 Frame
The frame is composed of 4 parts: the trailer board, the support frame, the wheel slot and the support column, as shown in Fig. 2. The trailer board and the support frame are fixed by nuts, the wheel slot is fixed on the trailer board, the support column is connected with the support frame, and the support column is connected with the gear structure through a matching piece (as shown in Fig. 3) to realize the connection between the frame and the garage. After the bicycle enters the garage, the front and rear wheels are placed into the wheel slot to complete parking.
Design of Intelligent Three-Dimensional Bicycle Garage
Fig. 2. Physical image of the frame
Fig. 3. Matching parts between frame and chain
2.1.3 Gear-Rack Combined Rotating Mechanism
The rack-and-pinion combined structure is shown in Fig. 4. The racks are connected by nuts, and multiple racks are joined to form a rack chain. The rack chain engages with the gear to form a rotatable mechanism. The gear and the motor are connected through a matching part; the motor drives the gear matching part, transfers force to the gear, and drives the rack chain and the frames to run simultaneously. The other side of the gear matching piece is connected with the hexagonal gear-drive column, which transmits force to the gear transmission mechanism on the other side to realize overall operation. The gears are fixed on the garage support frame.
Fig. 4. Physical picture of rack and pinion combined structure
2.1.4 Support Bearing System The supporting bearing system includes a bearing 6005R5 supporting the motor shaft and a bearing 628ZZ supporting the frame.
2.2 Electric Control System
2.2.1 Control Chip
The control chip is the core of the entire control system; it mainly completes signal collection, data processing and transmission, and task information output. The ARM single-chip microcomputer adopts a 32-bit ARM core processor, which surpasses the traditional 51-series single-chip microcomputer in instruction set, bus structure, debugging technology, power consumption and cost performance. At the same time, the ARM single-chip microcomputer integrates a large number of on-chip peripherals, so its functionality and reliability are greatly improved. This paper chooses the STM32F103 ARM single-chip microcomputer as the main control chip to complete the monitoring and operation of the bicycle garage [4].
2.2.2 Vehicle Detection and Frame Position Positioning Sensor
Photoelectric sensors are widely used in money counters, limit switches, counters, motor speed measurement, printers, copiers, liquid level switches, financial equipment, entertainment equipment (automatic mahjong machines), stage light control, monitoring pan-tilt control, motion direction discrimination, counting, electric winding machine counting, and electric energy meter revolution measurement. The circuit principle of the vehicle detection and frame position positioning designed in this paper is shown in Fig. 5.
Fig. 5. Schematic diagram of photoelectric sensor circuit
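Reading such a sensor in firmware typically reduces to thresholding and debouncing its digital output. A minimal sketch follows (Python for illustration; the actual firmware on the STM32F103 would be written in C, and the function names and majority-vote debounce here are assumptions, not from the paper):

```python
# Hedged sketch of the slot-occupancy and frame-positioning logic.
# All names are illustrative, not from the paper's firmware.

def debounce(samples, threshold=0.5):
    """A slot counts as occupied only if most recent samples agree,
    filtering single-sample glitches from the photoelectric sensor."""
    return sum(samples) / len(samples) > threshold

def detect_vehicle(samples):
    """samples: recent 0/1 readings from one slot's sensor
    (1 = beam interrupted = wheel present in the wheel slot)."""
    return debounce(samples)

def locate_frame(position_flags):
    """position_flags: one 0/1 reading per frame-position sensor.
    Exactly one active flag identifies the frame at the loading bay;
    anything else is treated as a fault (None)."""
    active = [i for i, f in enumerate(position_flags) if f]
    return active[0] if len(active) == 1 else None
```

The debounce step matters because a passing shadow or vibration can briefly toggle a photoelectric output; averaging a short sample window suppresses such glitches.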
The photoelectric sensor realizes control by converting changes in light intensity into changes in an electrical signal. In general, a photoelectric sensor has three parts: a transmitter, a receiver and a detection circuit. The transmitter aims a light beam at the target; the emitted beam generally comes from a semiconductor light source such as a light-emitting diode (LED), laser diode or infrared-emitting diode. The beam is emitted either continuously or with a varied pulse width. The receiver is composed of a photodiode, phototransistor or photocell, in front of which optical components such as a lens and aperture are installed. Behind it is the detection circuit, which filters out the effective signal and applies it. In addition, there are emission plates and optical fibers among the structural elements of the
photoelectric switch. The triangular reflector is an effective reflection device. It is composed of small triangular-pyramid reflecting material, which makes the light beam return accurately from the reflector, which is of practical significance. It can change the emission angle from 0 to 25° relative to the optical axis, so that the beam, emitted almost from one line, still returns along this line after reflection [5].
2.2.3 Electromechanical Transmission Components
The power component in this design mainly drives the gears to rotate, so a motor is selected as the power component; a DC motor is convenient to control and has good performance, and is therefore chosen as the power source. Its working voltage is 36 V, its speed is 80 r/min and its torque is 5 N·m [6].
2.2.4 Input and Output Components
The input device adopts 4 independent buttons: a bike retrieval button, a bike storage button, and 2 system function buttons that mainly deal with temporary faults, for example vehicle data errors caused by improperly placed bicycles. The output device adopts a color LCD screen with serial communication to display relevant information such as vehicle data, time and operation failures.
2.2.5 Data Transmission System
Data transmission mainly completes the task of uploading the data of the local bicycle garage. Each garage is an intelligent terminal, and each terminal needs to transmit its data to the main control platform on time to realize data intercommunication [7]. Data transmission methods can be simply divided into two types: wired (erecting optical cables or cables, or leasing dedicated telecommunications lines) and wireless (establishing a dedicated wireless data transmission system in the 433 MHz or 2.4 GHz band, or using CDPD, GSM, CDMA and other public network platforms).
After a user has established a communication network, new equipment often needs to be added as the system grows. With a wired method, re-wiring is required, construction is troublesome, and the original communication lines may be damaged; with a dedicated wireless data transmission method, new equipment only needs to be connected to a wireless data transmission station, so the system can be expanded more easily and has better scalability. Therefore, this paper chooses an Internet of Things wireless communication method for data transmission [8]. ZigBee is a low-speed, short-distance, low-power, two-way wireless communication technology and local area network communication protocol based on the IEEE 802.15.4 standard, also known as the ZigBee protocol. Its characteristics are short range, low complexity, self-organization (self-configuration, self-repair, self-management), low power consumption and low data rate. From bottom to top, the ZigBee protocol stack comprises the physical layer (PHY), media access control layer (MAC), transport layer (TL), network layer (NWK), application layer (APL), etc. The physical layer and media access control layer follow the provisions of the IEEE 802.15.4 standard and are
mainly used for sensor and control applications. ZigBee can work on three frequency bands, 2.4 GHz (used worldwide), 868 MHz (used in Europe) and 915 MHz (used in the United States), with maximum transmission rates of 250 kbit/s, 20 kbit/s and 40 kbit/s respectively, and a single-point transmission distance within 10–75 m. A ZigBee network is a wireless data transmission platform consisting of up to 65535 wireless data transmission modules. Within the entire network, the ZigBee data transmission modules can communicate with each other, extending coverage far beyond the standard 75 m single-hop distance. ZigBee nodes are very power-efficient: battery life can be as long as 6 months to 2 years, and up to 10 years in sleep mode [9, 10]. The three-dimensional garage adopts the star networking mode, as shown in Fig. 6.
Fig. 6. Smart garage IoT networking structure
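To make the terminal-to-platform upload concrete, a garage terminal might pack its status into a small frame before handing it to the ZigBee module over a serial link. The frame layout used here (start byte, garage id, occupancy bitmap, checksum) is an assumption for this sketch, not a format specified in the paper; a real module is normally driven with the vendor's own framing:

```python
# Illustrative status frame for one garage terminal: 0xA5 start byte,
# 16-bit garage id, one bit per parking space, 8-bit additive checksum.
import struct

def build_status_frame(garage_id, occupied):
    """Pack the garage id and a 6-slot occupancy list into a frame."""
    bitmap = 0
    for i, occ in enumerate(occupied):
        if occ:
            bitmap |= 1 << i            # bit i = slot i occupied
    payload = struct.pack("<BHB", 0xA5, garage_id, bitmap)
    checksum = sum(payload) & 0xFF      # simple additive checksum
    return payload + bytes([checksum])

frame = build_status_frame(7, [True, False, True, True, False, False])
```

Keeping the frame this small suits ZigBee's low data rate: a 5-byte payload is negligible even at the 20 kbit/s floor of the 868 MHz band.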
3 Program Design and System Debugging
3.1 System Programming
The rotary stereo garage has two control modes: an ordinary parking lot mode and a shared bicycle mode. In the ordinary parking lot mode, each parking space is numbered, and the user selects the corresponding parking space through the APP software to store or retrieve a bicycle. The shared bicycle mode is divided into two control processes, bicycle retrieval and bicycle return, carried out automatically through buttons and the APP software. During bicycle return, the bicycle must be placed in the wheel slot, otherwise an alarm is raised. See the flow chart in Fig. 7 for the specific process.
3.2 System Debugging
Through debugging and operation of the transmission system, sensor system and control system, the 6 parking spaces in the model were each tested 10 times, and the averaged data are shown in Table 1. It can be seen from the table that the reliability of the system is very high: the accuracy rate reaches 100%, and the longest times for parking and retrieval, 11.2 s and 10.3 s respectively, correspond to the parking space farthest from the bottom. Retrieving a bicycle takes less time than parking one.
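The parking branch of the flow in Fig. 7 can be sketched as follows; the guard condition (the bicycle must be seated in the wheel slot, otherwise an alarm) is from the paper, while the function name and return values are illustrative:

```python
# Hedged sketch of one parking request in the control flow.

def park(bike_in_slot, free_frames):
    """Return (action, frame) for one parking request.
    bike_in_slot: wheel-slot sensor says the bike is seated correctly.
    free_frames: indices of currently empty frames."""
    if not bike_in_slot:
        return ("alarm", None)            # improperly placed bicycle
    if not free_frames:
        return ("full", None)             # no empty frame available
    frame = free_frames[0]                # rotate the first empty frame to the bay
    return ("rotate_and_store", frame)
```

Retrieval would mirror this: look up the user's frame number, rotate it to the bay, and update the occupancy data sent to the platform.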
Fig. 7. Program flow chart
Table 1. Smart stereo garage test data

Function        Shortest time (s)  Longest time (s)  Data update correct rate (%)
Parking         3.3                11.2              100
Bike retrieval  2.8                10.3              100
4 Conclusions
The revolving intelligent three-dimensional garage is a parking system that circulates in the vertical direction. The transmission mechanism is driven by a reducer; storage racks are installed on the traction chain at regular intervals, and when the motor starts, the racks make a vertical circulating movement along with the chain. The intelligent system of the revolving three-dimensional bicycle garage uses the STM32F103 single-chip microcomputer as the main control chip and a DC motor as the main power source. The control system realizes vehicle detection and frame position positioning through photoelectric sensors, and the smart garage adopts a ZigBee wireless communication module to realize data transmission between a single garage and the system platform. The revolving intelligent three-dimensional garage is an integrated, non-standard electromechanical product combining machinery and electronics, different from traditional standardized electromechanical products. The size of the garage can be chosen according to the number of bicycles and the style of surrounding buildings, which maximizes the utilization of land and space. The revolving intelligent three-dimensional bicycle garage can store and retrieve bicycles reliably and quickly, ensures the safety of bicycle parking, and has large room for commercial development.
Acknowledgements. This work was supported by the Jilin Province Science and Technology Development Plan (No. 20200401110GX and No. 20190302045GX) and the Program for Innovative Research Team of Jilin Engineering Normal University.
References
1. Guo, Y.: Design of multi-layer mechanical three-dimensional parking garage. Mechatron. Eng. Technol. 49(08), 240–242 (2020)
2. Fei, Y., Yang, W., Wang, L.: Study on the structure and development of the three-dimensional garage. South. Agric. Mach. 51(13), 92–93 (2020)
3. Viktor, M.: Three-dimensional garage door comprises quarter circle shell rotating about horizontal axis close to ground and enclosing quarter circle cylinder space with two closing positions (2002)
4. Wang, L., He, K., Wang, X., Cao, M., Mao, Y.: Research on the innovative design of cantilever double rotary intelligent three-dimensional garage. China Equip. Eng. (08), 20–22 (2020)
5. Zou, J., Xie, Q., Shi, H., Ding, S.: Study on automatic control technology of stereo garage based on STM32. J. Nanjing Inst. Technol. (Nat. Sci. Ed.) 18(02), 55–58 (2020)
6. Park, C.M., Ih, J.G., Nakayama, Y., et al.: Inverse estimation of the acoustic impedance of a porous woven hose from measured transmission coefficients. J. Acoust. Soc. Am. 113(1), 128–138 (2003)
7. Ayob, M.A., Zakaria, M.F.: 3WD omni-wheeled mobile robot using ARM processor for line following application. In: IEEE Symposium on Industrial Electronics and Applications, pp. 410–414 (2011)
8. Atzori, L., Iera, A., Morabito, G., et al.: The Internet of Things: a survey. Comput. Netw. 54(15), 2787–2805 (2010)
9. Bobadilla, J., Ortega, F., Hernando, A., et al.: Recommender systems survey. Knowl. Based Syst. 46, 109–132 (2013)
10. Bonomi, F., Milito, R.A., Natarajan, P., et al.: Fog computing: a platform for Internet of Things and analytics. In: The Internet of Things, pp. 169–186 (2014)
Sealing Detection Technology of Cotton Ball of Edible Fungus Bag
Xiaodong Yang1, Chunyu Mao1(&), Zhuojuan Yang1, and Hao Liu2
1 Jilin Engineering Normal University, Changchun, Jilin, China
[email protected]
2 Jilin City Jilong Technology Development Co., Ltd., Changchun, Jilin, China
Abstract. In the production process of edible fungi, quality inspection at each stage is very important, and the inspection stage determines the reliability and intelligence of intelligent equipment. Precision sealing of the cotton ball is the last step of bacteria bag inoculation, and the quality of the sealing directly affects the bacteria release rate of the bacteria bag. To this end, this paper designs a diagnostic system for the sealing quality of edible fungus bag cotton balls based on image processing technology. In this system, image enhancement, region segmentation and noise filtering are used to preprocess the picture; the combination of convolutional neural network technology and image recognition technology detects the sealing quality of the cotton ball of the edible fungus bag, and the convolutional neural network uses a three-layer convolution structure. Through training on a large number of samples, the accuracy of identifying the sealing quality reaches 99.8%.
Keywords: Edible fungus bag · Image processing technology · Convolutional neural network technology
1 Introduction
With the rapid development of computer technology, how to analyze and process data in the era of big data has become a research focus. As a new subject that combines computer technology and mathematical algorithms, machine learning involves statistics, probability theory, algorithms and many other subjects. Its core is to study computer simulations of human learning behavior, and to use computer hardware and software to continuously improve their own performance. Deep learning is an important research direction in the field of machine learning and the basis for realizing artificial intelligence. Through the summarization and training of samples, deep learning enables computers to learn and analyze like humans, and achieves good results when processing text, images, sound and other data. At present, among deep learning algorithms, the convolutional neural network algorithm is widely used. By simulating the learning process of the human brain, hidden layer functions of multiple networks are constructed and data feature extraction is completed automatically. When processing image data, the convolutional neural network algorithm can extract features from the image layer by layer;
© The Editor(s) (if applicable) and The Author(s), under exclusive license to Springer Nature Singapore Pte Ltd. 2021 M. Atiquzzaman et al. (Eds.): BDCPS 2020, AISC 1303, pp. 517–524, 2021. https://doi.org/10.1007/978-981-33-4572-0_75
this not only saves the time of extracting image features, but also achieves high feature extraction accuracy [1–3]. China has identified the edible fungus industry as a sunrise industry in the 21st century. Because current inoculation equipment has low efficiency and a high failure rate, and the working environment is likely to damage practitioners' respiratory tracts, skin and reproductive systems, it is urgent to overcome this technical bottleneck in the development of the edible fungus industry and develop efficient, intelligent special equipment. In the production process of edible fungi, quality inspection at each stage is very important, and the inspection stage determines the reliability and intelligence of intelligent equipment. Precision sealing of the cotton ball is the last step of bacteria bag inoculation, and the quality of the sealing directly affects the bacteria generation rate of the bacteria bag; therefore, inspecting the bag sealing quality is an important step. This paper studies a detection technology for bacteria bag cotton ball sealing based on image recognition and the convolutional neural network algorithm, focusing on the convolutional neural network image processing scheme and the process of bacteria bag sealing image recognition.
2 Bacterial Bag Seal Detection Technology Based on Convolutional Neural Network and Image Recognition
2.1 Image Recognition Principle of Convolutional Neural Network
The convolutional neural network has a convolutional structure, which reduces the complexity of the network algorithm through weight sharing and makes the network more similar to a biological neural network. The input of a convolutional neural network is usually a two-dimensional image, and the hidden layers of the network repeatedly perform processes such as down-sampling and feature extraction during image processing. The principle of image processing based on a convolutional neural network is shown in Fig. 1. It can be seen that image processing based on convolutional neural networks requires continuous convolution and down-sampling of the image, so that the features of the two-dimensional image are expressed on different feature maps by convolution, and finally feature extraction is completed. The image processing flow based on the convolutional neural network algorithm includes [4, 5]:
(1) Forward propagation of the convolutional layer
The forward propagation of the convolutional layer is mainly the feature map extraction process:

Z_j^{(l)} = \sum_{i}^{N_l} x_i^{(l-1)} w_{ij}^{(l)} + y_j^{(l)}    (1)

(2) Forward propagation of the down-sampling layer
Fig. 1. Image processing principle based on convolutional neural network
The forward propagation of the down-sampling layer produces the result of down-sampling the input feature map:

x_i^{(l+1)} = \mathrm{down}(x_i^{(l)})    (2)

(3) Back propagation of the down-sampling layer

\delta_j^{(l)} = \mathrm{up}(\delta_j^{(l+1)}) \, f'(Z_j^{(l)})    (3)
(4) Back propagation of the convolutional layer

\delta_{i,mn}^{(l-1)} = \frac{\partial J(W, y, a, b)}{\partial Z_{i,mn}^{(l-1)}}    (4)
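A NumPy sketch may help make the forward pass of Eqs. (1) and (2) concrete. This is an illustrative toy, not the paper's implementation: it assumes valid convolution and mean pooling for down(·), neither of which the paper specifies.

```python
# Toy forward pass: Eq. (1) sums each output map over the N_l input maps,
# Eq. (2) down-samples the result.
import numpy as np

def conv_forward(x, w, y):
    """x: (N, H, W) input maps; w: (N, M, kh, kw) kernels for M output
    maps; y: (M,) biases. Valid convolution, summed over input maps."""
    N, H, W = x.shape
    _, M, kh, kw = w.shape
    out = np.zeros((M, H - kh + 1, W - kw + 1))
    for j in range(M):
        for i in range(N):
            for r in range(out.shape[1]):
                for c in range(out.shape[2]):
                    out[j, r, c] += np.sum(x[i, r:r + kh, c:c + kw] * w[i, j])
        out[j] += y[j]                      # bias term y_j in Eq. (1)
    return out

def down_sample(x, s=2):
    """Eq. (2): pool each (M, H, W) feature map by a factor of s (mean
    pooling assumed here)."""
    M, H, W = x.shape
    x = x[:, :H // s * s, :W // s * s]
    return x.reshape(M, H // s, s, W // s, s).mean(axis=(2, 4))
```

Real implementations vectorize the four nested loops, but the loop form mirrors Eq. (1) term by term.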
2.2 The Testing Process of Edible Fungus Bag Sealing
This paper combines the convolutional neural network algorithm to design an automatic detection system for images of qualified bacteria bag seals. The system is mainly divided into two modules: a photo data management module and a qualified-product image automatic detection module. The main function of the photo data management module is to store the seal images of the bacteria bags and to query image recognition results. The main functions of the automatic detection module include image preprocessing and feature extraction. Figure 2 shows the composition of the automatic detection system based on the convolutional neural network. In the automatic detection of edible fungus bag seals, image enhancement, region segmentation, noise filtering and feature extraction at the cotton ball seal are the four key steps.
(1) Image enhancement
The purpose of image enhancement is to improve the quality of the seal images and make subsequent image processing and feature extraction more
Fig. 2. Flow chart of edible fungus bag sealing detection
efficient. Commonly used image enhancement methods include the gray-scale transformation method, fuzzy algorithms and so on. After image enhancement, the gap between the cotton ball and the bacteria bag differs more from the background, which is conducive to rapid identification of the cotton ball sealing quality.
(2) Region segmentation
The main purpose of region segmentation is to separate the image region containing the sealed cotton ball from the original image, thereby reducing the workload of subsequent image processing and improving efficiency. The method is to keep the original gray values of pixels near the sealed cotton ball and set the gray value to zero at image positions relatively far from it.
(3) Noise filtering
There may be noisy pixels in the bacteria bag image due to light, weather and shooting angle. Because the border of the cotton ball in the image is high in brightness and similar to noise pixels, different noise filtering algorithms are needed to remove noise from the image. This paper uses Gaussian filtering to remove Gaussian noise in the image.
(4) Feature extraction
Defect detection in the image of the cotton ball seal is mainly based on the length, width, equivalent area, gray level and other characteristics of the gap at the seal, extracting information from the gap image. This paper combines the convolutional neural network algorithm to realize fast feature detection of gaps at the cotton ball seal. According to the characteristics of the convolutional neural network, different network layers are designed for the gap recognition model. The number of model layers and parameters affects the overall model training process and final performance.
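The Gaussian filtering step can be illustrated with a separable Gaussian kernel in plain NumPy. A production system would more likely call a library routine such as OpenCV's GaussianBlur, so this is only a sketch of the operation itself:

```python
# Separable Gaussian smoothing: filter rows with a 1-D kernel, then columns.
import numpy as np

def gaussian_kernel(size=5, sigma=1.0):
    """Normalized 1-D Gaussian kernel."""
    ax = np.arange(size) - size // 2
    k = np.exp(-(ax ** 2) / (2 * sigma ** 2))
    return k / k.sum()

def gaussian_smooth(img, size=5, sigma=1.0):
    """Smooth a 2-D grayscale image; edge padding keeps the output the
    same size as the input."""
    k = gaussian_kernel(size, sigma)
    pad = size // 2
    padded = np.pad(np.asarray(img, dtype=float), pad, mode="edge")
    rows = np.apply_along_axis(lambda r: np.convolve(r, k, mode="valid"), 1, padded)
    return np.apply_along_axis(lambda c: np.convolve(c, k, mode="valid"), 0, rows)
```

Separability is the key design point: two 1-D passes cost O(size) per pixel instead of O(size²) for a full 2-D kernel, which matters when filtering every captured seal image.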
On the basis of practical applicability, the model uses three convolutional layer modules followed by several fully connected layers. Different modules are added to each convolutional layer to deepen the network inside the convolutional layer, so that more global information can be extracted. The schematic diagram of the sealing quality diagnosis model based on the convolutional neural network is shown in Fig. 1. The model contains 3 convolutional layers and 2 fully connected layers; the features of the local-information convolutional layers are mapped to the fully connected layers.
With the feature map structure unchanged, the middle part of the network can process inputs of any size for the overall network [6, 7]. The schematic diagram of the first convolutional layer is shown in Fig. 3. In the first convolutional layer, the extracted image size is 224 * 224 (pixels) with 3 channels, the convolution kernel size is 3 * 3 (pixels), and the number of convolution kernels is 32. Controlling the parameters of the first convolutional layer and using the ReLU activation function, after the first layer a feature map of size 112 * 112 (pixels) * 32 is obtained for the sealed image.
Fig. 3. Schematic diagram of model network C1 layer structure
After the first convolutional layer, the second convolutional layer is processed; its structure is shown in Fig. 4 [8, 9].
Fig. 4. Schematic diagram of model network C2 layer structure
The stride of the second convolutional layer is 2, and no down-sampling layer is used. The convolution kernel size is 3 * 3, the number of convolution kernels is 64, and the feature map size is reduced to 56 * 56 (pixels). Using the ReLU activation function, the final processed feature images are of size 56 * 56 (pixels) * 64 detailed feature images. The operation of the third convolutional layer is the same as that of the second, but the number of convolution kernels is increased to 128. After processing by each convolutional layer, different levels of feature
extraction are formed, yielding the convolutional-neural-network-based model of gaps at the cotton ball seal [10].
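The feature-map sizes quoted above (224 to 112 to 56) are consistent with 3 * 3 kernels applied with stride 2 and padding 1 at each layer; a quick check, noting that the stride and padding values are inferred from the stated sizes rather than given explicitly in the paper:

```python
# Standard convolution output-size rule: (size + 2*pad - kernel) // stride + 1

def conv_out(size, kernel=3, stride=2, pad=1):
    return (size + 2 * pad - kernel) // stride + 1

c1 = conv_out(224)  # first layer:  224 -> 112, 32 kernels
c2 = conv_out(c1)   # second layer: 112 -> 56,  64 kernels
c3 = conv_out(c2)   # third layer:  56 -> 28,   128 kernels
                    # (the third-layer size is not stated in the paper;
                    #  28 follows from applying the same rule)
```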
3 Experiment
3.1 Experiment Preparation
A simulation experiment was designed to evaluate the accuracy of the qualified-product identification technology based on the convolutional neural network under the same experimental environment. In the experiment, 30 bacteria baskets were selected, and 12 sealed bacteria bags were placed in each basket in a 3 * 4 arrangement, with basically the same distance between bags, as shown in Fig. 5. To distinguish the influence of bag position on detection accuracy, each basket was divided into 3 position areas, as shown in Fig. 5. Sealed qualified and unqualified bacteria bags were placed at different positions in the basket for testing. Tests were run in a darker environment (10–50 lx) and a brighter one (400–500 lx). Multiple tests were performed under different positions and lighting conditions, the results were averaged, and the correct rate of the test results was calculated.
Fig. 5. Location map of bacteria bag
3.2 Analysis of Experimental Results
It can be seen from Tables 1 and 2 that the detection accuracy of the sealing quality detection technology based on image processing differs under different lighting conditions, and the accuracy also differs across bag positions. When the light intensity is in the range of 400–500 lx, the detection correct rate is higher than when the light intensity is in the range of 10–50 lx, which illustrates the importance of light intensity. At the same time, the accuracy for bags at position 3 of the basket is the highest, mainly because the basket area is large and the camera focuses at the center position.
Table 1. The correct rate of image recognition when the light intensity is 10–50 lx

Light intensity (10–50 lx)  Position 1  Position 2  Position 3
Correct rate (%)            95.6        97.3        99.2
Table 2. The correct rate of image recognition when the light intensity is 400–500 lx

Light intensity (400–500 lx)  Position 1  Position 2  Position 3
Correct rate (%)              98.6        99.3        99.8
4 Conclusions
A diagnostic system for the sealing quality of edible fungus bag cotton balls based on a convolutional neural network and image processing technology is designed. The system mainly considers image acquisition technology and image processing technology. During image acquisition, preprocessing operations such as image enhancement, region segmentation and noise filtering are performed on the image, laying a good foundation for further image feature vector extraction. The design uses the image recognition technology of the convolutional neural network to detect the sealing quality of the edible fungus bag cotton ball; the network uses a three-layer convolution structure. Through training on a large number of samples, the correct rate of identifying the bag sealing quality reaches 99.8%.
Acknowledgements. This work was supported by Jilin Province Science and Technology Development Plan Item (No. 20200401110GX).
References
1. Krizhevsky, A., Sutskever, I., Hinton, G.E., et al.: ImageNet classification with deep convolutional neural networks. In: Neural Information Processing Systems, pp. 1097–1105 (2012)
2. He, K., Zhang, X., Ren, S., et al.: Deep residual learning for image recognition. In: Computer Vision and Pattern Recognition, pp. 770–778 (2016)
3. Krizhevsky, A., Sutskever, I., Hinton, G.E., et al.: ImageNet classification with deep convolutional neural networks. Commun. ACM 60(6), 84–90 (2017)
4. Yamada, A.: Image processing apparatus. Ind. Robot Int. J. 31(2) (2004)
5. Satapathy, S.C., Raja, N.S., Rajinikanth, V., et al.: Multi-level image thresholding using Otsu and chaotic bat algorithm. Neural Comput. Appl. 29(12), 1285–1307 (2018)
6. Karpathy, A., Toderici, G., Shetty, S., et al.: Large-scale video classification with convolutional neural networks. In: Computer Vision and Pattern Recognition, pp. 1725–1732 (2014)
7. Ye, Y.: A vehicle detection method based on convolutional neural network. Agric. Equip. Veh. Eng. 57(2), 44–48 (2019)
8. Wu, J., Qian, X.: Application of compact deep convolutional neural network in image recognition. Comput. Sci. Explor. 13(2), 275–284 (2019)
9. Lin, L., Wang, S., Tang, Z.: Infrared oversampling scanning based on deep convolutional neural network image point target detection method. J. Infrared Millim. Waves 37(2), 219–226 (2018)
10. Esteva, A., Kuprel, B., Novoa, R.A., et al.: Dermatologist-level classification of skin cancer with deep neural networks. Nature 542(7639), 115–118 (2017)
Reconstruction and Reproduction: The Construction of Historical Literature Model Under Data Intelligence
Wenping Li(&)
College of History and Culture, Northwest Minzu University, Lanzhou, China
[email protected]
Abstract. Through analysis of literature and modes of literature communication, the construction and use of databases in the context of big data, the meaning of data intelligence and the construction of historical literature models, this paper expounds that big data, and the data intelligence developed on its basis, greatly promote historical research. Using the literature research method, the concept analysis method, the comparative research method and others, this paper probes the way to construct historical document models from the perspective of data intelligence. The emergence of data intelligence enables a historical document to break away from its own single individuality and from the constraints of time and space, to connect with similar documents, and then to integrate multiple, dispersed pieces of textual information. In particular, through its capacity for data acquisition and computation, data intelligence can actively screen and compare abstract information in heterogeneous data according to internal logic, reconstruct the original appearance of history, turn it into widely usable knowledge, and then provide research hypotheses and prediction models for research work. This artificial intelligence technology, which highlights the subjectivity of literature, is marked by deep learning and further deepens the study of big data.
Keywords: Historical documents · Big data · Data intelligence · Role
1 Introduction

Literature is the product of the historical process. Its preservation records the imprint of the times in different ways and constructs a complete temporal and spatial sequence for the historical development of human society. The traditional paradigm of historical research is based on extracting information from historical documents. As an important research object, therefore, documents provide numerous objective and subjective materials for historical research. These materials involve not only a renewed understanding of the historical documents themselves but also the extraction and construction of the content of their information carriers: research on "the history of the documents", and research on "the history of document dissemination", which covers both the texts and the specific modes and channels of dissemination in their era. Together they constitute the basis of historical document studies, an indispensable source of information for restoring the original appearance of history, and a key to reconstructing the
© The Editor(s) (if applicable) and The Author(s), under exclusive license to Springer Nature Singapore Pte Ltd. 2021
M. Atiquzzaman et al. (Eds.): BDCPS 2020, AISC 1303, pp. 525–531, 2021. https://doi.org/10.1007/978-981-33-4572-0_76
526
W. Li
historical fragments, and a means of excavating historical truth and the laws by which history operates. Since modern times, with the excavation and publication of a large number of documents, how to better obtain the core information we need from the massive body of historical documents has gradually become a focus of scholarly research. Historical researchers habitually bring a "problem-driven" approach to their work, trying to find in the existing literature the answers to the questions they want to solve. Facing an enormous body of documents, however, a human researcher can rarely perform a complete and accurate search; moreover, this conclusion-oriented way of hunting for supporting examples in the literature unconsciously ignores the connections and logic hidden inside the documents, inevitably turning historical research into a Sisyphean labor. In this regard, with the continuous development of information technology, many scholars have made beneficial attempts to bring big data into historical document research, seeking to change the traditional research paradigm with the help of new modes of dissemination, to accelerate the transmission and integration of documents, and to improve retrieval efficiency. Through the establishment of bibliographic databases and quantitative databases, large-scale integrated storage, extraction, and data analysis of documents have been realized. The former approach is represented by Liang Renzhi's "Big Data: As a Basic Method of Historical Research" [1] and Zhang Nan's "Application of Big Data Technology in the Study of the History of the Silk Road" [2]; the latter by Xiong Jinwu's "Quantitative History: A New Paradigm of Economic History" [3] and Chen Zhengping's "Big Data Era and Econometric Research of Economic History" [4]. However, both approaches have shortcomings. The bibliographic database, for example, only converts traditional materials into electronic storage, with no new breakthrough in how they are used.
Although the quantitative database achieves a certain regularity in data presentation, there is still a large gap between it and the comprehensive integration and intelligent use of data. The emergence of data intelligence, especially of big-data retrieval and analysis models with certain intelligent characteristics, has greatly changed the way historical research is done. The similarities, differences, and connections between different bodies of document data become clear. Researchers are no longer limited to text retrieval; instead they consider how, through deep algorithms, a document can break away from its own single individuality, connect with similar or related documents, and become part of a research whole. In this way the correlations and logic of the historical development behind the documents can be computed and mined, giving the humanities and the natural sciences a new point of intersection; the information presented by the data itself becomes the starting point of research, so that one can "discover knowledge by relying on the database" and escape the constraints of existing research ideas.
2 Literature and Literature Dissemination

Since the birth of documents, they have had an inseparable relationship with document dissemination. The emergence of a document is bound to give rise to dissemination activities, and the change of the mode of document dissemination
Reconstruction and Reproduction: The Construction of Historical Literature Model
527
promotes reform in the mode of document research. The emergence of documents and the development of their modes of dissemination have profoundly changed the course of human social history, enabling information to break the restrictions of time and space and circulate freely, thus promoting the overall development of society. Viewed through their methods of recording, the modes of document dissemination in different periods can be roughly divided as follows. First came engraved transmission in the initial period: information was carved or painted on oracle bones, bronzes, and stone steles using stone, metal, and other hard tools. Such documents were severely limited in their spatial circulation. Second came transcribed transmission and printed transmission. Transcribed transmission took bamboo, wood, silk, and paper as its recording carriers; their appearance greatly improved the speed of information circulation and the efficiency of document use over a wide range. The appearance of printing made the wide circulation of documents possible: it not only reduced the cost of dissemination but also broke the limitations of time and space, providing rich and diverse samples for historical research [5]. The development of information technology has since transformed the mode of dissemination again, bringing it into the stage of electronic transmission. Compared with the three traditional modes, electronic transmission stores not only text but also images, sound, animation, and other forms of documentary information as digital code on magnetic, optical, electronic, and other media; through newly built databases and models, the information can be read and extracted by computer equipment, and the rapid development of Internet technology makes its dissemination ever faster and more convenient.
Traditional documents, which record information mainly as analog text and pictures, are relatively independent individuals. It is difficult for them to take the initiative to connect with other information, so each presents an "atomized" state: a document has no external association with other documents, and even internally its information shows a certain fragmentation. The emergence of electronic transmission makes the establishment of complete databases and models possible. In this process, the collection, sorting, and digitization of the documents themselves is an important link in historical research. Documents scattered everywhere are collected and stored as digital code, shifting them in space; the numerous documents gathered together are then classified and counted, databases are constructed, and the first step of information transmission and processing is completed. At this stage the academic community has been very active and has produced a number of excellent achievements, giving wide circulation to rare documents that for various reasons could not previously circulate. These achievements have pushed historical document research into a new stage, and the development of big data and artificial intelligence technology has made the trend still more evident.
3 Construction and Application of Historical Literature Model in the Context of Big Data

The significance of historical document research lies in text mining, an effective way to obtain valuable information from large volumes of text data. Text mining, however, is essentially data mining performed on text, aimed at finding the links between events hidden behind the data. At present the use of historical big data is already fairly common and has been widely applied in historical geography, historical economics, historical demography, and many other areas. The establishment and use of such databases greatly enrich the paradigms of historical research and lay a foundation for the construction and use of new historical literature models [6]. The big-data resources currently used by historians fall generally into two categories: bibliographic databases and quantitative databases. A bibliographic database, as a preliminary way of organizing documents, simply scans or converts collected document materials for storage. It is a compilation of documents that were previously "atomized" single items, and its scope of use lies mainly in retrieval, remote reference, knowledge sharing, and the like. It does not form an effective analytical system, let alone any computerized analysis function. At this stage people still play the leading role in research, while document dissemination merely shifts from traditional media to electronic media with greater storage capacity. The Digital Dunhuang project, for example, makes full use of modern science and technology: laser scanning, 3D reconstruction, and virtual reality are used to digitize the murals, sculptures, and manuscripts left at Dunhuang, turning all Dunhuang-related artifacts and documents into high-quality digital images and linking the original documents with related research in a unified dataset [2].
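The text-mining step described above can be illustrated with a minimal sketch. All corpus snippets and terms below are invented for the example (they are not drawn from any actual database), and tf-idf is used here only as one common, simple way of surfacing a document's characteristic terms:

```python
import math
from collections import Counter

# Toy corpus: invented snippets standing in for digitized historical documents.
docs = {
    "d1": "silk road trade camel caravan trade route",
    "d2": "silk road buddhist monastery manuscript",
    "d3": "tax record grain harvest village register",
}

def tfidf(doc_id):
    """Score each term of one document by tf-idf over the toy corpus."""
    tokens = docs[doc_id].split()
    tf = Counter(tokens)
    n_docs = len(docs)
    scores = {}
    for term, count in tf.items():
        # df: number of documents that contain the term at least once
        df = sum(1 for text in docs.values() if term in text.split())
        scores[term] = (count / len(tokens)) * math.log(n_docs / df)
    return scores

# The highest-scoring term is a candidate keyword for retrieval and linking.
scores_d1 = tfidf("d1")
top_term = max(scores_d1, key=scores_d1.get)
```

Here the term "trade" outranks "silk", because tf-idf rewards terms that are frequent in one document but rare across the corpus.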
Research aimed at digital data gives rise to the quantitative database, whose collections of historical document material can be analyzed and counted by computer. Unlike the bibliographic database, it does not seek the storage, reproduction, and dissemination of documents; instead it relies on statistical analysis of large bodies of data to reach findings hidden in the materials that traditional means cannot obtain. Because a quantitative database consists of first-hand information from "various complete individual-level or other micro-level information systems covering a certain area and a certain time span" [7], it has strong extensibility and reconstructability: many scattered, fragmented pieces of information can be quickly linked and integrated to yield conclusions. This targeted, cross-category, cross-temporal document system not only supports rapid data collection but also facilitates large-scale research and pattern discovery. Most of its research paths follow the sequence of retrieval, data acquisition, visual statistical analysis, and pattern discovery, striving to obtain objective answers on top of the data as a whole. Thus the appearance of big data is of great significance for historical research. It has not only changed the way documents are disseminated; the new mode of transmission has also changed the paradigm of historical research, enriched and perfected it, made effective use of many scattered documents, and realized
the linkage and integration of different individual materials, laying a solid foundation for the emergence of data intelligence.
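The retrieval, acquisition, statistical analysis, and pattern-discovery path described above can be sketched in miniature. All records below are invented placeholders for quantitative-database entries, not real figures:

```python
from collections import defaultdict

# Invented quantitative records: (year, region, number of documents recovered).
records = [
    (1900, "Dunhuang", 120), (1907, "Dunhuang", 300),
    (1900, "Turpan", 40),    (1915, "Turpan", 80),
]

# Retrieval and data acquisition are simulated by the list above.
# Statistical analysis: aggregate document counts per region.
totals = defaultdict(int)
for year, region, n in records:
    totals[region] += n

# Pattern discovery: rank regions by total recovered documents.
ranking = sorted(totals.items(), key=lambda kv: kv[1], reverse=True)
```

The point of the sketch is structural: once scattered records are linked into one table, aggregate patterns emerge that no single record shows.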
4 The Role of Data Intelligence in the Construction of Historical Literature Model

The emergence of big data technology has made information-based historiography a reality and has formed a relatively mature research system: information and keyword retrieval based on bibliographic databases, quantitative analysis of large-scale historical data based on quantitative databases, and the introduction and mapping of GIS technology based on the combination of big data and computing. Document research can thus be seen to have developed from the study of document data itself to a stage of relying on document data to demonstrate objective existence. Such progress undoubtedly improves the efficiency of use and degree of dissemination of documents, but it still stops at the objective use of data and fails to exploit computer databases and deep algorithms to provide decision guidance of reference significance and value. Big data breaks the time and space limitations of historical documents by exploiting changes in communication technologies and modes, provides a platform for their internal connections, integrates multiple and dispersed textual information, and conducts historical research with the researcher in the leading role and the documents as research objects. Data intelligence, by contrast, highlights the subjectivity of the documents themselves and takes artificial intelligence marked by deep learning as its technical support, further deepening big-data research and entering an era centered on big-data analysis methodology, making "knowledge discovery by database" a reality within reach. This change can be seen as a shift from a research concept centered on people and oriented toward documents to one based on big data with artificial intelligence as the core driving force of development.
We can clearly see that the difference between data intelligence and big data is that data intelligence is a predictive data-analysis technology oriented toward big data. It is not satisfied with the retrieval of document data or with feature analysis, because it possesses a capacity for autonomous analysis [8]. This analytical capacity is sufficient to provide certain decisions or hypotheses for research, a step beyond the simple use of the literature. Especially with the proposal of various deep neural network architectures, data intelligence has gone a long way toward solving the problem of constructing historical literature models, because such architectures can adapt to the modeling requirements of different domains and improve their own precision through continuous training and optimization: attention networks, recurrent neural networks, and convolutional neural networks bring unstructured data such as text, time series, and images into their analytical scope. From traditional document transmission to the era of big data, the mode of transmission has changed greatly, yet there has been no real breakthrough or innovation in document dissemination itself. Simple retrieval and statistics still cannot solve the problem of reading
information and delivering fragmented information, which has become a bottleneck restricting researchers' understanding of the literature. Data intelligence, by using a historical literature analysis model, has the capacity to collect information, store information, and derive hypotheses. Through its capacity for data acquisition and computation, it can actively screen and compare the voluminous literature, reconstruct abstract information from heterogeneous data according to its internal logic, and reproduce the original appearance of history so that it can be widely used. In Silk Road history research, for example, historical documents can be stored in a database classified into different categories by era, time, geographical distribution, and historical figure; the model, through its data-intelligence characteristics, then compares and interprets them against historical data, and after deep computation and text reading comprehension it presents the influence of the Silk Road in different historical periods. The core lies in effectively connecting documents with one another, so as to reproduce the original as far as possible [9].
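The idea of connecting a document with similar or related documents can be illustrated with a toy similarity computation. The document descriptions below are invented, and cosine similarity over bag-of-words vectors stands in, very crudely, for the far richer deep models the text mentions:

```python
import math
from collections import Counter

# Invented metadata descriptions of three documents.
docs = {
    "A": "tang dynasty silk road trade ledger",
    "B": "tang dynasty silk road travel diary",
    "C": "song dynasty porcelain kiln inventory",
}

def cosine(a, b):
    """Cosine similarity of two bag-of-words vectors."""
    va, vb = Counter(a.split()), Counter(b.split())
    dot = sum(va[t] * vb[t] for t in va)
    na = math.sqrt(sum(c * c for c in va.values()))
    nb = math.sqrt(sum(c * c for c in vb.values()))
    return dot / (na * nb)

def nearest(doc_id):
    """Link a document to its most similar neighbour in the collection."""
    others = [d for d in docs if d != doc_id]
    return max(others, key=lambda d: cosine(docs[doc_id], docs[d]))
```

Under this measure the two Tang-era Silk Road documents link to each other, while the Song inventory stands apart, which is the "research whole" effect the paper describes in miniature.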
5 Conclusion

Although historical documents are the products of history, changes in the mode of dissemination are also profoundly affecting and changing the paradigm of historical document research itself. In this process, document research has moved from traditional document study, to document study in the era of big data, and on to document study under data intelligence. This transformation is not only one of technical mode but also one of research thinking and research orientation. A research paradigm with such a holistic view is conducive to integrating scattered historical documents, mining the hidden historical truths behind the data, and providing a relatively mature research model for historical documents. Through deep algorithms built on a variety of deep neural network architectures, the combination of big data and artificial intelligence can provide decision suggestions and prediction models for research topics. Moreover, such a large, comprehensive, intelligent database is strongly open and data-sharing, which favors the intervention of knowledge from different disciplines and remedies solidified research paradigms from an interdisciplinary perspective.
References
1. Liang, R.: Big data: as a basic method of historical research. Nanjing Soc. Sci. (06), 151–156 (2019)
2. Zhang, N.: Application of big data technology in the study of the history of the silk road. J. Inner Mongolia TV Univ. (06), 28–31 (2019)
3. Xiong, J.: Quantitative history: a new paradigm of economic history. Quest (03), 47–54 (2019)
4. Chen, Z.: Big data era and econometric research of economic history. China Econ. Hist. Res. (06), 53–58 (2016)
5. Zhu, C.: A discussion on "literature communication and historical research". Anhui Hist. (01), 17–21 (2019)
6. Liang, C.: New methods for historical research in the era of big data. Chinese Soc. Sci. J. (005), 89–95 (2016)
7. Liang, C.: Quantitative database: the key to promoting historical research through "digital humanities". Jiang Hai J. (02), 162–164+239 (2017)
8. Wu, J., Liu, G., Wang, J., Zuo, Y., Bu, H., Lin, H.: Data intelligence: trends and challenges. Syst. Eng. Theory Pract. 40(08), 2116–2149 (2020)
9. Yu, H.: J. Jilin Inst. Educ. 35(08), 161–164 (2019). (in Chinese)
Digital Turning of Logic and Practical Paradigm: The Establishment of Big Data Model in Anthropological Field

Zhuoma Sangjin1 and Liang Yan2(&)
1 The Second Primary School of Deqin County, Diqing, Yunnan, China
2 College of Arts, Tibet University, Lhasa, China
[email protected]
Abstract. The anthropological field driven by big data not only brings a sense of scientific research community but also transforms the logic of theory and practice through innovative research theories and methods. As a paradigm shift, digital humanities has brought new theories and methods to traditional anthropology. On one hand, it realizes a "fusion of horizons" between disciplines horizontally at the theoretical level. This interactive integration not only brings an omniscient, wide-angle view of people, objects, and images in the space-time field of anthropological research objects, but also constructs a vertical, in-depth, repeated mining of the rich context of research resources by means of research media technology. On the other hand, the movement from the open login of big data, through related login, to core login in the humanities field is one of multidimensional coexistence. In the anthropological field, therefore, this research theory and practical method can realize horizontal and vertical data collection, in-depth research and analysis, and three-dimensional interactive display.

Keywords: Big data · Field process · Paradigm shift · Model construction
1 Research Background

By its nature, fieldwork is the "trademark" of anthropology; "participant observation" [1] has become the exclusive concept and method of ethnography and the concrete practice of "dissecting the sparrow". Its research objective is to analyze, explain, and express the participatory nature of time and position (host/guest) in the field space (field site). For field objects, big data is a research methodology that collects data and materials through in-depth, intensive study of field situations. Through continuous comparison of data, it carries out abstract and conceptual thinking and analysis, condenses concepts and categories from the data, and on this basis constructs theory, structural analysis, and in-depth interpretation. Big data in the anthropological field "is the general methodology of developing theory on the basis of systematically collected and analyzed data". This research asks one to get "into your data": in the process of data collection and analysis, to "live in" or "hang around" the data for a while, and then to develop an understanding of the phenomenon on the basis of that information [2]. Another aspect of this understanding is, in essence, that the research subject (as medium)
© The Editor(s) (if applicable) and The Author(s), under exclusive license to Springer Nature Singapore Pte Ltd. 2021
M. Atiquzzaman et al. (Eds.): BDCPS 2020, AISC 1303, pp. 532–537, 2021. https://doi.org/10.1007/978-981-33-4572-0_77
Digital Turning of Logic and Practical Paradigm: The Establishment of Big Data Model
533
combines with the big-data field: the application of plural, modern technological means brings, into a panoramic view of the field, all the information about people and nature hidden in time and space within the media ecology, and through the three steps of individual acquisition, coding, and output completes the cultural turn toward big data in the anthropological field. "We should have information as rich and detailed as possible, and not let go of any observed detail. The sources of data can be direct experiential observation, interviews, documents, historical information, personal experience, and so on", while big data further takes shape as a "method of manufacturing, processing, transmitting and storing information" [3]. In the field process, occupying as much data as possible, together with the supporting technology and functional integration of that data, is likewise at issue. "The way people deal with things is, first of all, one of the elements of the system by which human beings make an objective order of specific meaning in the process of living in reality, rather than just a flat, subjective symbolic relationship." The "construction of order" [4] in the anthropological field thus actually originates from the existence of "things" in the field space, and this kind of existence shows that human beings, through concrete practice, historically constitute a specific vector order of human social existence within the hierarchy of material existence; it also implies a philosophy of logical order across the three dimensions of human/material/technology in the dynamic process of field research. The concept of construction comes from the study of construction grammar. In short, a construction is a language structure with a specific, unique meaning and function; in other words, a construction is a "structure" bound to a specific meaning.
After the humanistic creation of anthropological big data is complete, "construction" becomes particularly important in the time dimension, practical operation, and human-machine cooperation of the linear process of field investigation. The field "construction" of big data takes the specific linked fields in a differentiated time sequence as its main line, exhibiting in specific time/space the "four inheritance links of construction: polysemy links, subpart links, instance links, and metaphorical links". First, polysemy links: the relevance between the specific meaning of the "type object" [5] construction in the field and "big data" technology based on the meaning of the "object", which can to a certain extent expand the nature of the relationship between the meanings of acquisition and analysis. Second, metaphorical links: when the constructions of people/objects in two fields are connected by the metaphorical mapping of "objects", big data technology can not only discover and mine this "metaphor" but, thanks to the diversity of big data technology and the horizontal expansion and connection among human/object/digital technology, can also reveal, collect, and integrate for analysis the rich "tuber" of information hidden underground in these complex relations. Third, subpart links: an independent construction of a field big-data acquisition object is an "inherent part" [6] of another digital acquisition object's construction; there is a horizontal/vertical structural association between them, while each acquisition object remains an independent content entity. Fourth, instance links: when the construction of a specific field big-data acquisition object is a special instance of another such construction, the connection is called a big-data object instance link.
534
Z. Sangjin and L. Yan
2 Logic and Theory of Big Data in Anthropological Fieldwork

The creation, construction, and modeling of big data in the field emphasize the heterogeneity and isomorphism of the human (body) and practical technology within the natural ecosystem, the social-cultural system, and the media ecosystem of the comprehensive field environment. The so-called human/machine environmental system in the field is the system under the environment of human living space. The dynamic cultural system it contains is actually a living organism and an ecosystem, as well as a part of the whole social ecosystem. This system interacts, competes, and exchanges with other social-ecological, cultural, and media subsystems, and is influenced by many external forces such as politics, economy, culture, and local knowledge. It therefore promotes coordination and connection among the micro-system and meso-system of the environment and the macro-system of the field, achieving a certain balance and harmony through the exchange of information, energy, and resources. This balance and harmony, through host/guest switching and human/machine intervention and overlap, continuously expands the scope of the field's big-data sequences, construction styles, and models. In this situation, humanistic study in any discipline will move from its narrow field toward disciplines such as culturology, philosophy, anthropology, and mythology, and finally complete the big-data-oriented "humanistic turn" [7] in mutual infiltration with the fields of literature and art.
This turn realizes new values and a new cognitive theory under the dynamic order of creation, construction, and model: "it is not only a form of knowledge; it may replace the original physics and become the worldview and cosmology of a new era, a new concept of existence, an aesthetic concept at once old and fresh". 2.1
Open Login of Big Data in Anthropological Field
From the perspective of big data, "open coding" [8] is the process of decomposing, crunching, comparing, reviewing, and reorganizing interview data, exploring the concepts in it, and further condensing those concepts into categories. In this process, to ensure the accuracy of the refined concepts and categories, the nature of each category and the attributes of its various properties must also be defined. To realize big data in the field process, the open login of the database is the technical collection and neutral, objective qualitative coding of all the heterogeneous and isomorphic data scattered around the core of the original data, that is, data pointing toward the final research object.
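The open-coding process just described can be expressed as a small data-handling sketch. The excerpts, codes, and category mapping below are invented for illustration; in practice the codes and categories come from the analyst, not from the program:

```python
from collections import Counter

# Invented interview excerpts with hand-assigned open codes.
coded_excerpts = [
    ("we gather at the temple before harvest", ["ritual", "harvest"]),
    ("elders decide when planting begins",     ["authority", "harvest"]),
    ("offerings are made to the mountain god", ["ritual", "belief"]),
]

# Condense codes into categories (an analyst-chosen mapping).
category_of = {"ritual": "ceremony", "belief": "ceremony",
               "harvest": "subsistence", "authority": "social order"}

# Count how often each category is grounded in the data.
category_counts = Counter(
    category_of[code] for _, codes in coded_excerpts for code in codes
)
```

Frequency counts of this kind show which categories are most strongly grounded in the collected data, which is the neutral, objective coding step the text describes.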
2.2
Related Login of Big Data in Anthropological Field
In the process of anthropological fieldwork, the main task of "secondary coding" (also known as relational or axial coding) [9] is to discover and establish the various connections between conceptual categories, so as to show the organic relations among the parts of the data. These relationships can be causal, temporal, semantic, or situational, or relations of similarity, difference, equivalence, type, structure, function, process, or strategy. Taking "relationship" as the axial object, this paper tries to find the interrelations between internal and external events such as villages and ethnic groups and the "core culture", and to discover and establish the logical connections between the categories of core cultural concepts, forming a more general and comprehensive abstract cultural data code. 2.3
Core Login of Anthropological Field Big Data
Compared with the first- and second-level coding categories of the field process, third-level coding is aimed at the "cultural data" of field investigation and is more targeted and more important. First, this code occupies a central position among all the categories, correlates significantly with the other categories in a more concentrated field environment, and becomes the core of the big-data information. Second, the data core of field knowledge, including phenomena, texts, oral transmission, objects, events, and so on, constructs the content of the core meaning with maximum frequency and density. Third, in the process of qualitative/quantitative research, the big-data information that sits at the core and dominates is richly correlated with the other categories, so it develops more easily into a general formal theory. Within the differences allowed among the theory's internal categories, "because researchers are constantly coding, logging in, and adjusting its dimensions, attributes, conditions, consequences and strategies" [6], the classification of the theory has rich and complex attributes.
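The axial and core coding steps described above can be sketched together: if axial relations are treated as labeled edges between categories, a candidate core category is the one most connected to the others. All categories and relations below are invented for illustration:

```python
from collections import Counter

# Invented axial-coding relations: (category, relation type, category).
relations = [
    ("pilgrimage", "causal",     "market day"),
    ("pilgrimage", "temporal",   "harvest rite"),
    ("kinship",    "structural", "pilgrimage"),
    ("kinship",    "functional", "land tenure"),
]

# Core coding: count how many relations each category participates in.
degree = Counter()
for a, _, b in relations:
    degree[a] += 1
    degree[b] += 1

# The most connected category is a candidate core category.
core_category = degree.most_common(1)[0][0]
```

Connectivity is of course only one heuristic; in grounded theory the core category is ultimately an analytic judgment, which this sketch can support but not replace.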
3 Big Data Practice Path in the Process of Anthropological Fieldwork

The application of big-data humanistic technology in the anthropological field requires "in-depth, situated collection of data, continuous comparison of data, abstract and conceptual thinking and analysis of the data, refining concepts and categories from the data, and constructing theory". Through the main steps of data collection, open coding, continuous comparison, selective coding, theoretical memo-writing, theoretical coding, and delayed literature review, a complete and independent "practical method" is formed. 3.1
Geographic Space
Locating and collecting geospatial measurement data through digital technology is the basis and premise of field big data. Modern 3S digital technology (GIS, GPS, RS) can be used to obtain timelier and more accurate data on geographical location, topography
and geology, geographic structure, water resources, and so on. It can not only store, manage, and transmit the spatial, cultural-attribute, and measurable data of the whole culture, but also provide the necessary dynamic forms of expression and analysis according to the nature and needs of the data. 3.2
Customs
It mainly uses big data humanistic technology (photographs, video, audio and oral recordings) to capture the behavior pointing to the core of the anthropological field culture, so as to conduct quantitative/qualitative analysis of its source through behavior. This is because "national customs and habits generally take the form of behavioral psychology and behavioral modes, and the behavioral psychology is expressed through certain modes of behavior". A certain mode of behavior reflects the specific behavioral psychology of the nation. The preferences, hobbies and taboos shown in national customs reflect the behavioral psychology of the nation, but they are also expressed through specific rituals and activities.

3.3 Local Knowledge
In the methodological sense, local knowledge emphasizes a characteristic distinct from totality, and in specific research it uses comparison and thick description to "consciously follow the internal vision of cultural holders" in interpreting culture. It shows the possible complex relationship between the "signifier" and the "signified" of cultural symbols, and traces the real evolution of knowledge through the localization of these complex relations. The combination of digital technology and local knowledge is mainly reflected in two aspects. On the one hand, big data humanistic technology is used to collect local people's cognitive video and audio data for quantitative/qualitative analysis in comparison with the field process (ceremony); on the other hand, based on the signifier-signified structure, digital technology is applied to the implements and types regularly used in the field process in order to trace their source.
4 Conclusion

As a paradigm shift, big data in the anthropological field has brought new theories and methods to traditional anthropology. On the one hand, it realizes a "fusion of horizons" across disciplines at the theoretical level. This interactive integration not only brings an omniscient, wide-angle vision of the human object image in the space-time field of anthropological research objects, but also constructs a vertical, in-depth, repeatable mining of the rich context of research resources by using big data technology. On the other hand, the research method rests on the multi-dimensional architecture of the "fit principle". This is, in effect, the evolution of anthropological grounded theory, pointing at the object of study with a logic and method that combines multiple digital technologies and makes the quantitative/qualitative method possible [10]: "the researchers can not only take the real world
Digital Turning of Logic and Practical Paradigm: The Establishment of Big Data Model
as the research object, but can also rely on tools to obtain or simulate scientific data, and finally use data tools for statistics, calculation and analysis of the content".

Acknowledgments. This work was financially supported by the Wuhan University-Tibet University special project of the "Tibet economic and social development and plateau scientific research co-construction and innovation fund": Zhouyi and Fengshui: A Study on the Philosophy, Spatial Model and Historical Experience of Tibetan Ancient Construction Culture, project number lzj2020014.
References 1. Glaser, B.G.: The Cry for Help: Preserving Autonomy Doing GT Research, p. 2. Sociology Press, Mill Valley (2016) 2. Clarke, A.E., Frises, C.E., Washburn, R.S.: Situational Analysis: Grounded Theory After the Interpretive Turn, 2nd ed., pp. 2–7. Sage Publications Ltd, Thousand Oaks (2017) 3. Xu, J.J., Zheng, K., Chi, M.M., Zhu, Y.Y., Yu, X.H., Zhou, X.F.: Trajectory big data: data, applications and techniques. J. Commun. 36(12), 97–105 (2015). (in Chinese with English abstract) 4. Zeng, M.L.: Smart data for digital humanities. J. Data Inf. Sci. 2(1), 1–12 (2017) 5. McCarty, W., Short, H.: Mapping the field[C/OL]. Paper given at an ALLC meeting in Pisa, 20 March 2017 (2002). https://www.allc.org/node/188 6. Unsworth, J.: Scholarly Primitives: what methods do humanities researchers have in common, and how might our tools reflect this? [EB/OL], 20 March 2017. https://people. virginia.edu/jmu2m/Kings.5-00/primitives.html 7. Stoehr, E.L.: Interdisciplining digital humanities: boundary work in an emerging field by Julie Thompson Klein(review). Rocky Mountain Rev. Lang. Lit. (1), 23–24 (2016) 8. Alison, A.: The ‘time machine’ reconstructing ancient Venice’s social networks. Nature 546 (7658), 341–344 (2017) 9. Berry, D.M., Fagerjrod, A.: Digital Humanities: Knowledge and Critique in a Digital Age, pp. 114–115. Polity Press, Cambridge (2017) 10. Kolay, S.: Cultural heritage preservation of traditional Indian art through virtual new-media. Procedia – Soc. Behav. Sci. 225(14), 309–320 (2016). (in Chinese)
Simulation-Testing and Verification of the Economical Operation of Power Distribution Network Coordinated with Flexible Switch

Zhenning Fan1, Qiang Su1, Xinmin Zhang1, Changwei Zhao1, and Ke Xu2

1 State Grid Tianjin Chengdong Electric Power Company, Tianjin, China
[email protected]
2 State Grid Tianjin Electric Power Company, Tianjin, China
Abstract. At present, in optimizing distribution network systems, economy is taken as the starting point and modern technical means are used comprehensively to make the operation mode reliable and safe. Operating the distribution network with flexible switches is a product of technological development in the new era. Against the background of modern scientific and technological development, this paper analyzes the application of coordinated flexible switching to the economical operation of the distribution network. It first reviews the research status of distribution networks coordinated with flexible switches, then optimizes the operation mode of the traditional distribution network, next explores the flexible representation of the distribution network during operation and optimizes the flexible synthesis of the operation mode, and finally completes the verification analysis.

Keywords: Flexible switch · Power distribution network · Optimization model
1 Introduction

Economic development drives the improvement of social productivity, and electric power is an important supporting force of social development. With the advance of the smart grid, society has put forward higher requirements for the reliability, safety and economy of distribution network operation. During operation, the distribution network can be in one of three states: fault, maintenance or normal operation. For each state there is, in theory, an optimal operation mode, under which the voltage level of every node and the network loss of the distribution system are superior to those of other schemes. Optimizing the operation mode of the distribution network mainly means adjusting all the power equipment in the network, thereby changing its operation mode and achieving the optimal operation state. © The Editor(s) (if applicable) and The Author(s), under exclusive license to Springer Nature Singapore Pte Ltd. 2021 M. Atiquzzaman et al. (Eds.): BDCPS 2020, AISC 1303, pp. 538–546, 2021. https://doi.org/10.1007/978-981-33-4572-0_78
Multi-terminal AC-DC interconnection will be an important form of the future distribution network. In a multi-terminal AC-DC distribution system, if the scheduled power of an AC-DC converter exceeds its limit, the DC network may become unstable. To overcome these limitations and avoid instability during coordinated control, Deng, Pei and Li [1] proposed a universal active stabilization method for low-voltage multi-terminal hybrid AC/DC distribution systems; their results show that injecting only a small stabilizing power, with little impact on the dynamics of the DC network, can improve system stability and keep the system voltage stable. AC-DC distribution systems have recently gained great popularity owing to advances in power converters, the high penetration of renewable energy sources and the widespread use of DC loads. However, solving the power flow in such a system is challenging because of the non-linear characteristics of the power converters. Lu, Yi and Zhang [2] used graph theory and matrix algebra to propose a power flow algorithm for AC-DC distribution systems. Four matrices (the load-beyond-branch matrix, the path impedance matrix, the path descent matrix and the slack-bus-to-other-bus descent matrix) are developed, and the power flow solution is obtained through simple matrix calculations. These matrices reveal the network topology and the behavior of the AC/DC distribution network during the power flow study. Compared with traditional power flow methods for flexible-switch-based AC/DC distribution systems, the proposed technique requires no LU decomposition, matrix inversion or forward-backward substitution of the Jacobian matrix, so it is computationally efficient.
The proposed approach is tested on multiple case studies of the AC-DC distribution network covering different operating modes of the power converters. The results show the feasibility and validity of the method.
2 Measures to Optimize the Operation of Traditional Distribution Networks

2.1 Design and Analysis of the Objective Function
In building the objective function, the network loss index should be selected so that the economy and feasibility of distribution network operation are directly reflected. In general, the network loss of a distribution network refers to the iron loss and copper loss of all transformers on the distribution line, plus the loss in the conductors [3]. During operation mode optimization, generally only the loss located in the distribution network lines can be changed. The objective function is therefore

\min f = \sum_{i=1}^{N} \frac{R_i (P_i^2 + Q_i^2)}{V_i^2}    (1)
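As an illustrative sketch (not code from the paper), the loss objective in Formula (1) can be evaluated directly from per-branch data; all branch values below are hypothetical placeholders.

```python
def network_loss(branches):
    """Objective of Formula (1): sum of R_i * (P_i^2 + Q_i^2) / V_i^2
    over all branches. Each branch is a dict with resistance R, active
    power P, reactive power Q and voltage magnitude V; consistent units
    are assumed, since the paper does not fix a unit system here."""
    return sum(b["R"] * (b["P"] ** 2 + b["Q"] ** 2) / b["V"] ** 2
               for b in branches)

# Hypothetical two-branch example (placeholder values):
branches = [
    {"R": 0.5, "P": 100.0, "Q": 40.0, "V": 10.0},
    {"R": 0.3, "P": 80.0, "Q": 30.0, "V": 10.0},
]
print(network_loss(branches))  # approximately 79.9
```

The optimizer described below varies the switch configuration, which changes the per-branch P, Q and V values and hence this sum.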
In Formula (1), N is the total number of branches in the distribution network, R_i is the resistance of branch i, P_i is the active power flowing through branch i, and Q_i is the reactive power flowing through branch i.

2.2 Analysis of Constraint Conditions
In distribution network operation, the constraints are mainly reliability constraints and safety constraints. In general, they include the nodal power balance equation constraint, the node voltage constraint and the branch power flow constraint [4].

(1) Nodal Power Balance Equation Constraint
When the distribution network runs, the reactive power balance equation and the active power balance equation must be satisfied:

P_{Gi} - P_{Li} - V_i \sum_{j \in i} V_j (G_{ij} \cos\theta_{ij} + B_{ij} \sin\theta_{ij}) = 0
Q_{Gi} - Q_{Li} - V_i \sum_{j \in i} V_j (G_{ij} \sin\theta_{ij} - B_{ij} \cos\theta_{ij}) = 0    (2)
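As a sketch under the notation of Formula (2) (not code from the paper), the nodal balance residuals can be written as:

```python
import math

def power_mismatch(i, V, theta, G, B, PG, PL, QG, QL):
    """Residuals of the active/reactive balance equations of Formula (2)
    at node i. V and theta are node voltage magnitudes and angles
    (radians); G and B are the conductance/susceptance matrices as
    nested lists; PG, PL, QG, QL are generated and load powers per node.
    Both residuals are zero when the power flow balances."""
    n = len(V)
    p = sum(V[j] * (G[i][j] * math.cos(theta[i] - theta[j])
                    + B[i][j] * math.sin(theta[i] - theta[j]))
            for j in range(n))
    q = sum(V[j] * (G[i][j] * math.sin(theta[i] - theta[j])
                    - B[i][j] * math.cos(theta[i] - theta[j]))
            for j in range(n))
    return PG[i] - PL[i] - V[i] * p, QG[i] - QL[i] - V[i] * q

# Trivial one-node check: with no network admittance the residual is
# simply generation minus load (hypothetical values).
dP, dQ = power_mismatch(0, [1.0], [0.0], [[0.0]], [[0.0]],
                        [1.0], [0.4], [0.5], [0.2])
print(dP, dQ)
```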
In Formula (2), P_{Gi} is the active power injected at node i, Q_{Gi} is the reactive power injected at node i, P_{Li} is the active load power at node i, Q_{Li} is the reactive load power at node i, V_i is the voltage amplitude at node i, G_{ij} and B_{ij} are the conductance and susceptance between nodes i and j, and \theta_{ij} is the phase angle difference between nodes i and j.

(2) Node Voltage Constraint
To fully guarantee the power supply quality and safety of the distribution system during operation, it is necessary to take the actual situation of the distribution network into account [5] and keep the voltage amplitude deviation of every node within the range stipulated by the state. The voltage amplitude of every node should therefore always satisfy

V_{i,\min} \le V_i \le V_{i,\max}, \quad i = 1, 2, \ldots, N    (3)
In Formula (3), V_i is the voltage amplitude of node i, V_{i,\min} is its lower limit and V_{i,\max} its upper limit.

(3) Branch Power Flow Constraint
Owing to the limited carrying capacity of the conductors, the power flowing through every branch must stay within the specified range:

S_i \le S_{i,\max}, \quad i = 1, 2, \ldots, N    (4)
In Formula (4), S_i is the power on branch i and S_{i,\max} is the upper power limit of branch i.
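The rigid constraints (3) and (4) amount to simple bound checks, which can be sketched as follows (all numeric values are hypothetical):

```python
def within_limits(V, V_min, V_max, S, S_max):
    """Check constraints (3) and (4): every node voltage must lie in
    its [V_min, V_max] band and every branch power below its limit."""
    volt_ok = all(V_min[i] <= V[i] <= V_max[i] for i in range(len(V)))
    flow_ok = all(S[i] <= S_max[i] for i in range(len(S)))
    return volt_ok and flow_ok

# Hypothetical 3-node, 2-branch snapshot (voltages in per-unit):
ok = within_limits(V=[0.98, 1.02, 1.01],
                   V_min=[0.95] * 3, V_max=[1.05] * 3,
                   S=[4.2, 3.9], S_max=[5.0, 5.0])
print(ok)  # True
```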
3 Flexible Representation of Distribution Network Operation Constraints

During distribution network operation, the boundary should no longer be treated as a sharp line but as a boundary region, and any point in that region may become the boundary that keeps the power network running smoothly in a specific environment [6]. The variation range of the boundary is not arbitrary, however: it has a limiting value, which can serve as the most intuitive datum when measuring the flexibility of the distribution system. Because the constraints differ, the characteristics of the distribution network during operation also differ, so the constraints of the traditional operation mode optimization method are expressed in a flexible form [7].

3.1 Flexible Constraint of the Nodal Power Balance Equation
In the traditional distribution network model, the nodal load is basically expressed as a constant, which makes the nodal power balance rigid to some extent [8]. For the smart distribution network, in order to implement peak load shaving, the nodal load power must be reasonably controlled, so the nodal load can be set as a controllable variable. The flexible constraint of the nodal power balance equation can be expressed as

P_{Gi} - V_i \sum_{j \in i} V_j (G_{ij} \cos\theta_{ij} + B_{ij} \sin\theta_{ij}) = P_{Li} + d_{Li} \Delta P_{Li}
Q_{Gi} - V_i \sum_{j \in i} V_j (G_{ij} \sin\theta_{ij} - B_{ij} \cos\theta_{ij}) = Q_{Li} + d_{Li} \Delta Q_{Li}    (5)
In Formula (5), \Delta P_{Li} is the active load power deviation at node i, \Delta Q_{Li} is the reactive load power deviation at node i, and d_{Li} is the load flexibility index at node i, whose value ranges from 0 to 1. The point of the load flexibility index is that the network loss of the distribution system depends strongly on the d_{Li} values of node i and the other nodes; a large dependence shows that the distribution system supplies the load of that node relatively uneconomically. By reasonably controlling the load at such a node and keeping it within a certain range, the reliability and economy of the distribution network as a whole can be maximized [9, 10].

3.2 Flexible Constraint of Node Voltage
A flexibility parameter is introduced, and the flexible representation of Formula (3) is

V_{i,\min} - d_{Vi} \Delta V_{i,\min} \le V_i \le V_{i,\max} + d_{Vi} \Delta V_{i,\max}    (6)
In Formula (6), \Delta V_{i,\max} is the maximum allowable overstep of V_{i,\max}, \Delta V_{i,\min} is the allowable overstep of V_{i,\min}, and d_{Vi} is the voltage flexibility index at node i, whose value ranges from 0 to 1.

3.3 Flexible Constraint of Branch Power Flow

Introducing the flexibility parameter into Formula (4) gives

S_i \le S_{i,\max} + d_{Fi} \Delta S_{i,\max}    (7)
In Formula (7), \Delta S_{i,\max} is the maximum allowable overstep of the power flow limit on branch i, and d_{Fi} is the power flow flexibility index of branch i, whose value ranges from 0 to 1. From Formulas (5)-(7) it can be seen that, when optimizing the operation mode of the distribution network, the actual situation of the distribution system should be taken as the basis for rationally shrinking or expanding the constraint boundary, so as to comprehensively improve the economy, security and reliability of the distribution network during operation.
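A minimal sketch (not from the paper) of how the flexibility indices relax the rigid bounds of Formulas (6) and (7); the numeric values are hypothetical:

```python
def relaxed_voltage_band(V_min, V_max, d_V, dV_min, dV_max):
    """Flexible voltage window of Formula (6):
    [V_min - d_V * dV_min, V_max + d_V * dV_max], with the voltage
    flexibility index d_V in [0, 1]; d_V = 0 recovers Formula (3)."""
    assert 0.0 <= d_V <= 1.0
    return V_min - d_V * dV_min, V_max + d_V * dV_max

def relaxed_flow_limit(S_max, d_F, dS_max):
    """Flexible branch limit of Formula (7): S_max + d_F * dS_max;
    d_F = 0 recovers the rigid Formula (4)."""
    assert 0.0 <= d_F <= 1.0
    return S_max + d_F * dS_max

lo, hi = relaxed_voltage_band(0.95, 1.05, d_V=0.5,
                              dV_min=0.02, dV_max=0.02)
limit = relaxed_flow_limit(5.0, d_F=1.0, dS_max=0.5)
print(lo, hi, limit)
```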
4 Flexible Evaluation Method of the Power Distribution Network in Operation

Based on the operation constraints of the distribution network, flexibility can be expressed by selecting various flexibility indexes to evaluate the flexibility of the distribution system during operation; meanwhile, numerical indexes should be fully used when measuring flexibility. If the aim is solely to give the distribution network the largest possible flexibility during operation, the following model can be established:

\max (d_g + d_h)
\text{s.t.} \quad g(x, u) = d_g \Delta g
\underline{h} - d_h \Delta h \le h(x, u) \le \overline{h} + d_h \Delta h
0 \le d_g \le 1, \quad 0 \le d_h \le 1    (8)

In Formula (8), \Delta g is the deviation of the boundary value of the equality constraint, \overline{h} is the upper limit of the inequality constraint, \underline{h} is its lower limit, and \Delta h is the deviation of the upper and lower limits of the inequality constraint from the constraint boundary value.
In this model, d_g is the flexibility evaluation index. When the index increases, the system has a larger feasible region during operation and becomes more and more flexible, strengthening its ability to cope with a variety of uncertain factors.
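For a single scalar inequality constraint, the role of d_h in model (8) can be sketched as the smallest relaxation that admits an operating point (a toy illustration, not the paper's full vector model):

```python
def min_relaxation_index(h, h_lo, h_hi, delta_h):
    """Smallest d in [0, 1] such that
    h_lo - d*delta_h <= h <= h_hi + d*delta_h (cf. model (8)),
    or None if even d = 1 cannot admit the operating point h."""
    if h_lo <= h <= h_hi:
        return 0.0  # already inside the rigid band, no relaxation needed
    overshoot = max(h_lo - h, h - h_hi)  # distance outside the band
    d = overshoot / delta_h
    return d if d <= 1.0 else None

# Hypothetical voltage 0.02 above a 1.05 upper bound with a 0.04 margin:
print(min_relaxation_index(h=1.07, h_lo=0.95, h_hi=1.05, delta_h=0.04))
```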
5 Analysis of Model Verification

The optimized IEEE 69-node system is used to analyze the operating effectiveness and reliability of the distribution network method proposed in this paper. The system voltage level is 10 kV, with 2 power source nodes, 67 lines, 6 tie switches and 67 sectionalizing switches. Under normal operation, the operation mode is optimized both by the flexible comprehensive optimization method and by the traditional optimization method. Table 1 shows the result of the traditional model optimization.

Table 1. Optimization results under the traditional optimization mode

Status | Record of on-off switches | Network loss/kW
Before optimization | 7-8 39-48 26-54 16-69 14-21 12-66 | 234.15
After optimization | 18-19 14-21 39-48 10-42 11-12 12-66 | 201.32
To optimize the operation mode of the distribution system with the flexible comprehensive optimization model, simplification is carried out according to the corresponding principles. The initial network loss is 234.15 kW; the maximum line loss rate allowed in the distribution network is 10%, and the minimum expected network loss is 146.05 kW. Under normal operating conditions the load flexibility of all nodes is ignored, the voltage flexibility index values of all nodes are assumed equal, and all branches are assumed to have the same power flow flexibility index. The results obtained with the flexible comprehensive optimization model are shown in Table 2.

Table 2. Optimization results under the comprehensive optimization mode

Status | Record of on-off switches | Network loss/kW
Before optimization | 7-8 39-48 26-54 16-69 14-21 12-66 | 234.15
After optimization | 7-8 14-21 39-48 20-21 14-15 12-66 | 202.81
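The relative gap between the two optimized losses in Tables 1 and 2 can be checked with a quick arithmetic sketch:

```python
loss_traditional = 201.32  # kW, Table 1, after optimization
loss_flexible = 202.81     # kW, Table 2, after optimization

# Relative difference, using the traditional result as the base:
gap_percent = (loss_flexible - loss_traditional) / loss_traditional * 100
print(gap_percent)  # well under 1%
```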
As can be seen from Tables 1 and 2, the network loss obtained by the traditional optimization model is smaller than that obtained by the flexible comprehensive optimization model, but only by a small difference of 0.741%; thus, in terms of network loss, the flexible analysis slightly reduces the economy. On the other hand, the reliability and safety of the traditional optimization model and the flexible comprehensive optimization model in the power supply system must also be analyzed. The node voltage levels under the two optimal operation modes are evaluated with the following statistical indicators:

(1) Mean value:

\bar{V} = \frac{1}{N} \sum_{k=1}^{N} V_k    (9)

(2) Range:

R_V = V_{k,\max} - V_{k,\min}    (10)

(3) Standard deviation:

\sigma_V = \sqrt{\frac{\sum_{k=1}^{N} (V_k - \bar{V})^2}{N - 1}}    (11)

(4) Maximum deviation ratio:

D_V^{\max} = \max_k \frac{|V_k - V_k^N|}{V_k^N}    (12)

(5) Mean deviation ratio:

D_V^{ave} = \frac{1}{N} \sum_{k=1}^{N} \frac{|V_k - V_k^N|}{V_k^N}    (13)

where V_k^N is the rated voltage of node k.
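The statistical indicators (9)-(13) can be sketched as follows (the node values are hypothetical, not the paper's data):

```python
import math

def voltage_stats(V, V_rated):
    """Indicators (9)-(13) for node voltages V against rated values."""
    n = len(V)
    mean = sum(V) / n                                           # (9)
    rng = max(V) - min(V)                                       # (10)
    std = math.sqrt(sum((v - mean) ** 2 for v in V) / (n - 1))  # (11)
    dev = [abs(V[k] - V_rated[k]) / V_rated[k] for k in range(n)]
    return mean, rng, std, max(dev), sum(dev) / n               # (12), (13)

# Hypothetical 4-node snapshot in per-unit:
mean, rng, std, d_max, d_ave = voltage_stats(
    V=[1.02, 0.98, 1.01, 0.99], V_rated=[1.0] * 4)
print(mean, rng, d_max, d_ave)
```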
Table 3. Obtained results of voltage level of all nodes under two optimization models
0.016
Maximum deviation ratio 5.3%
Mean deviation ratio 2.7%
0.014
3.6%
1.8%
Statistical index
Mean value
Range
Standard deviation
Voltage level of all nodes of traditional optimization result Voltage level of all nodes of flexible comprehensive optimization result
1.034
0.08
1.017
0.06
Table 3 shows the distribution of the voltage levels of all nodes under the operation modes of the traditional optimization model and the flexible comprehensive optimization model. According to Table 3, compared with the traditional optimization model, the flexible comprehensive optimization model brings the voltage amplitude of each node closer to the rated value; moreover, its deviation ratios are significantly better than those of the traditional optimization model. The results show that the reliability and safety of the optimization results are greatly improved, and the conclusion for the branch power flows is similar. To sum up, compared with the traditional optimization method, the flexible comprehensive optimization method is more feasible: it realizes an optimal operation mode with flexible characteristics and is suitable for application in the uncertain, changing environment of the distribution network. Although the flexible synthesis method slightly affects the economy of the distribution network during operation, it greatly improves the reliability and security of the power supply and realizes a comprehensively optimal operation mode.
6 Conclusion

To solve the problems existing in the power distribution system, this paper analyzes a comprehensive coordination method in which the smart grid flexible analysis method is used to make the boundary of the distribution network operation state flexible. If the parameters or structure of the distribution network system change during operation, the operating boundary also changes, which improves the flexibility of the distribution network while affecting its economy during operation.

Acknowledgment. This work was supported by the research project of State Grid Tianjin Electric Power Company under Grant KJ20-1-09: research of AC/DC distribution system based on flexible switch.
References 1. Wei, D., Wei, P., Luyang, L.: Active stabilization control of multi-terminal AC/DC hybrid system based on flexible low-voltage dc power distribution. Energies 03, 502 (2018) 2. Lu, Y., Yi, S., Zhang, K., et al.: Operation model of 10 kV distribution network based on flexible looped-network control device. Electr. Power Autom. Equip. 01, 137–142 (2018) 3. Cong, C., Tang, W., Lou, C., et al.: Coordinated and optimized control of flexible soft switch and contact switch of active distribution network with high-permeability renewable energy in two stages. J. Trans. China Electrotech. Soc. 06, 149–158 (2019) 4. Wang, X.: Research on the integrated adjustment methods for the closed-loop voltage of power distribution network based on distributed power generation and flexible load. Electrotech. Appl. 13, 88–89 (2019)
5. Li, Z., Deng Li, L., Yu, L., et al.: Research on the active power flow control of distribution network based on flexible switch. Electrotech. Appl. 04, 20–24 (2018) 6. Kaixin, L., Weihong, Y., Dan, W., et al.: Research on renewable energy scheduling and node aggregated demand response strategy based on distribution network power flow tracking. Proc. CSEE 32, 90–91 (2018) 7. Zhu, X., Tong, N., Lin, X., et al.: The online emergency load transfer strategy for the active distribution network based on flexible multimode switches. J. Autom. Electr. Syst. 24, 87–95 (2019) 8. Tan, Z., Xu, Y., Chen, Z., et al.: Architecture analysis and simulation study of flexible power-distribution network access to distributed power supply. J. Electr. Power Sci. Eng. 06, 1–7 (2019) 9. Gu, K.: Research on the cooperation-competition mechanism of microgrid and active power distribution network based on game theory. IOP Conf. Ser. Earth Environ. Sci. 12, 91–92 (2019) 10. Nainar, K., Pillai, J.R., Bak-Jensen, B., et al.: Predictive control of flexible resources for demand response in active distribution networks. IEEE Trans. Power Syst. 10, 1 (2019)
Research on the Development Strategy of Intra-city Clothing Distribution Based on O2O Mode

Tong Zhang, Na Wei, and Aizhen Li

College of Textiles and Clothing, Qingdao University, Qingdao, Shandong, China
[email protected]
Abstract. This article presents a study of the development of intra-city clothing distribution under the O2O model. It first analyzes the O2O model and the meaning, status quo and characteristics of intra-city delivery under this model. It then focuses on the current situation, characteristics and existing problems of intra-city clothing distribution under the O2O model, analyzing them comprehensively through existing typical cases, and on this basis puts forward proposals for intra-city clothing distribution under the O2O model. Finally, the research develops a relatively new shopping mode and distribution environment from five aspects: the supply side, the sales side, the distribution side, the technology side and the service side; while conforming to the O2O trend, this will also support healthy economic development.

Keywords: O2O model · Intra-city distribution · Intra-city clothing distribution
1 Introduction

With the rapid development of the Internet, online shopping has become an important consumption channel deeply loved by the public. The scope of online shopping is all-encompassing, from daily necessities to furniture and electronics. Clothing, ranked first among clothing, food, housing and transportation, carries huge commercial value. According to data from China Industry Information Network, the scale of China's online clothing market grew from 52.595 billion yuan to 559.07 billion yuan in the five years from 2011; the number of monthly active users on the Taobao apparel platform reached 380 million in 2018, with Jingdong and Vipshop ranking second and third [1]. At the same time, relevant national support policies vigorously encourage the development of new e-commerce formats and motivate the research, development, promotion and application of e-commerce technology. Overall, China has in recent years adopted various measures and policies to promote the development of e-commerce in the apparel industry. Among them, compared with other © The Editor(s) (if applicable) and The Author(s), under exclusive license to Springer Nature Singapore Pte Ltd. 2021 M. Atiquzzaman et al. (Eds.): BDCPS 2020, AISC 1303, pp. 547–554, 2021. https://doi.org/10.1007/978-981-33-4572-0_79
e-commerce models, the biggest feature of O2O is its emphasis on user experience; its current development prospects are promising, and relevant professionals have called it "the new blue ocean of apparel business marketing" [2]. With the rapid development of the O2O model, the scale of the intra-city delivery it brings is also huge; and as a localized e-commerce model, the final leg of O2O distribution is intra-city distribution. Coupled with the continuous improvement of mass consumption levels, offline distribution services face many higher requirements such as on-time delivery, high-quality clothing and intact packaging. Therefore, when the existing model can no longer meet the needs of consumers, new exploration of the intra-city clothing distribution model under the O2O model becomes a market need. In related practical studies, content on intra-city clothing distribution under the O2O model is relatively scarce, and active study is needed to propose practical solutions and development strategies.
2 Intra-city Clothing Delivery Under the O2O Model

2.1 The Current Situation of Intra-city Distribution Under the O2O Model
Intra-city distribution is the result of the continuous subdivision and development of logistics distribution. It can provide customers with integrated logistics services such as warehousing, sorting, transportation and information tracking, realizing the transfer of suppliers' goods to retailers or customers [3]. Intra-city distribution is generally called the "last mile" of logistics, as it solves the most complex "last mile" problem in logistics [4]. Intra-city clothing distribution under the O2O model has begun to shine, and the mainstream is gradually shifting from the B2C model of clothing e-commerce distribution to intra-city distribution under the O2O model. The difference between B2C and O2O intra-city apparel distribution lies in the offline mode: intra-city clothing distribution under the O2O model mainly serves the "last thousand meters", and the biggest difference from B2C clothing e-commerce is that O2O has more offline experience stores. Based on the differences among offline experience stores, intra-city clothing distribution under the O2O model falls into two categories. The first is the offline franchise model, most representatively the "Cat House" in Shenzhen, which aims to create a life service circle within half a kilometer that merchants such as clothing stores and convenience stores can join [5]. For example, residents can go to a clothing store to try on and buy clothing, scan the QR code in the store to place an order and pay, and the chosen clothing will then be delivered to their door within the agreed time; in addition to providing sales channels, the stores also undertake the corresponding after-sales service. The second is the offline self-built model, exemplified by SF Express "Heike". SF "Heike" is a store with no inventory: customers scan the QR code in the store for online shopping, covering clothing, fresh food and other products. All
products can be paid for on delivery, which also guarantees product quality and after-sales service. Although "Heike" has no physical display function and still has major problems as an experience store under the O2O model, it reflects the idea of this new O2O model to a certain extent.

2.2 The Characteristics of Intra-city Distribution Under the O2O Model
On the whole, intra-city clothing distribution under the O2O model is characterized by interoperability, flexibility and innovation. Interoperability means that any clothing e-commerce business, whether online or offline, can transform to O2O and provide intra-city clothing delivery services. For example, physical clothing stores can use online promotion or event discounts to attract customers offline and carry out experience activities such as trying on clothing; aiming to improve service quality and the shopping experience, clothing e-commerce businesses can transform to the O2O model through offline franchising or self-built stores, delivering clothing to the customer's designated address within a specified time after the customer places an order and pays. Flexibility mainly refers to the flexibility of delivery and after-sales service after the customer places an order and pays. Consumers can choose the delivery method according to their own situation, that is, self-pickup or home delivery, and the delivery service can adjust the delivery time according to customer requirements or allow pickup at a designated site. For example, Uniqlo and SF Express have partnered so that SF Express delivers within one hour of an order being placed through the Uniqlo applet; in the same way, an appropriate after-sales service method can be chosen according to one's own situation. Innovation is mainly manifested in the fact that intra-city clothing distribution under the O2O model is not only the establishment of the current O2O model, but also needs to seek new breakthroughs [6]. In terms of distribution, unlike other logistics models, intra-city apparel distribution mainly serves the "last thousand meters".
Since the urban community transportation network is complex and apparel is shipped as scattered small items, it is necessary to constantly update information and improve the distribution system; in terms of management, taking Suning's O2O model as an example, Suning has realized systematic supervision of chain operations and the supply chain, avoiding the risk of numerous stores and poor management under the O2O model.
2.3 The Shortage of Intra-city Distribution Under the O2O Model
At the same time, there are also certain problems in intra-city clothing distribution under the O2O model. The distribution problems mainly concern delivery time and delivery cost. If clothing cannot be delivered within the agreed time, long delivery times will lead to a loss of customers in the long run; and since intra-city distribution is in its initial stage, distribution centers within a city are limited and daily order volumes are small, so longer delivery distances raise delivery costs and, in turn, the cost of the clothing itself. Clothing problems refer to clothing quality and the number of customer orders. Although consumers place an order after a fitting in a clothing store, there are still
T. Zhang et al.
problems such as the delivered clothing not matching what was tried on in the store, or defects appearing after wearing or washing, which are caused by insufficient advance regulation of the model and chaotic store management; at the same time, shopping online has become a habit for consumers, and coupled with the fact that there are fewer types of clothing in the early stage, this results in fewer orders and higher costs for acquiring customers.
3 Proposals for Intra-city Clothing Distribution Under the O2O Model
3.1 Supply Side
First, contract with garment factories. The analysis of intra-city clothing distribution under the O2O model shows that its supply side is similar to that of the B2C model: in both, clothing comes mainly from garment factories and designer studios. Unlike fresh products, clothing can be stored for a considerable period, so merchants can sign agreements with garment factories or studios to obtain lower unit purchase prices by purchasing goods in large quantities for storage and later sale. Second, transform online stores. Under the O2O model, an important part of the supply side of intra-city clothing distribution is the physical clothing store, which is used for trying on and experiencing clothing while orders are placed online. In addition, online clothing stores have become saturated in recent years, facing higher competitive pressure and harsher operating conditions. In response, online clothing stores can transform to an O2O model to realize online-offline integration, expand sales channels, conduct offline intra-city distribution, and enhance their competitiveness; this reflects the interoperability of intra-city apparel distribution under the O2O model.
3.2 Sales Side
First, increase the number of physical clothing stores. Online shopping has become a common way for people to shop, but it also shows its limitations and highlights the unique experience function of physical stores. With consumers' increasing demand for "experience consumption", the experiential shopping model has been embraced by the market. The function of physical clothing stores is not limited to providing customers with their favorite products; it is also embodied in providing customers with a high-quality experience. To provide consumers with a comfortable shopping environment and enable offline physical clothing stores to achieve their core sales functions, the number of physical stores should be continuously increased. As the number of stores grows, physical stores can also carry out promotions for new product launches and major holidays, which not only meets different customers' needs for clothing but also enriches the consumption experience of current customers. Second, expand the sales channels of online shopping platforms. Online shopping platforms can be divided into self-built sales platforms and third-party shopping websites. The first model is the self-built sales platform, whose main function is
to reflect the unique characteristics of intra-city clothing distribution under the O2O model. The content of the web pages should be comprehensive and detailed, covering user needs as far as possible, and the overall style of the self-built platform should be consistent with that of the physical store. Second, cooperation with third-party platforms is attractive mainly because such platforms provide professional services, reduce website construction costs, relieve the burden of system maintenance and upgrades, and provide a safe trading environment. At the same time, e-commerce platforms are now mature and have established users and traffic. Through an e-commerce platform, network marketing can be realized, with strong targeting, large traffic, comprehensive information, and strong marketing effects [7]. In addition, it is necessary to expand mobile Internet sales channels. The Internet sales channels for intra-city clothing distribution under the O2O model mainly serve mobile Internet customers using mobile phones and tablets, mainly through third-party payment. Third-party payment is already a relatively mature online payment method; after continuous improvement, its security and convenience keep improving, and it has gradually become the mainstream payment method.
3.3 Distribution Side
The store distribution model is an important way to realize intra-city clothing distribution under the O2O model, and its distribution method best reflects the flexibility of this model. When a customer shops online, the system automatically selects the clothing store closest to the customer; after the customer places an order and pays online, time-limited intra-city door-to-door delivery can be realized. At the same time, since this form of intra-city delivery has a certain degree of flexibility, consumers can choose self-pickup or home delivery according to their own conditions, and can likewise choose the appropriate after-sales service method. Secondly, the e-commerce platform distribution model is also one of the important models on the distribution side. Because the logistics distribution centers in each city are limited and clothing products are highly varied, merchants can cooperate with e-commerce platforms such as Cainiao Warehousing and JD Mall, so that some clothing stored far from the physical store or distribution center can be added to the storage system and time-limited intra-city clothing distribution can still be realized; in periods of large order volumes, they can also cooperate with professional takeaway platforms such as Ele.me and Meituan to fulfil orders and reduce customer loss. In addition, there are distribution models of logistics companies that mainly target the supply side.
Clothing factories and designer studios are scattered across the country, with clothing factories concentrated mainly in Jiangsu, Zhejiang, Shanghai and the Pearl River Delta; since China has a vast territory, many clothing stores, and a large demand for clothing, the premise for regional stores to meet customer needs is mature logistics distribution on the supply side. The supply side and the sales side should exchange information in order to understand the needs of consumers across the country and provide targeted delivery services.
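The nearest-store routing described above can be sketched as follows. The store names and coordinates are hypothetical, and a production system would query a geospatial index rather than scan a list:

```python
import math

def haversine_km(lat1, lon1, lat2, lon2):
    """Great-circle distance between two (lat, lon) points, in kilometres."""
    r = 6371.0  # mean Earth radius in km
    p1, p2 = math.radians(lat1), math.radians(lat2)
    dp = math.radians(lat2 - lat1)
    dl = math.radians(lon2 - lon1)
    a = math.sin(dp / 2) ** 2 + math.cos(p1) * math.cos(p2) * math.sin(dl / 2) ** 2
    return 2 * r * math.asin(math.sqrt(a))

def nearest_store(customer, stores):
    """Return the store closest to the customer's (lat, lon) position."""
    return min(
        stores,
        key=lambda s: haversine_km(customer[0], customer[1], s["lat"], s["lon"]),
    )

# Hypothetical store coordinates, illustrative only.
stores = [
    {"name": "Store A", "lat": 31.23, "lon": 121.47},
    {"name": "Store B", "lat": 31.30, "lon": 121.50},
]
print(nearest_store((31.24, 121.48), stores)["name"])  # Store A
```

After the nearest store is chosen, the order is dispatched to it for time-limited delivery or held for self-pickup, which is exactly the flexibility the text attributes to the store distribution model.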
3.4 Technical Side
The establishment of a data system is the basic project of the O2O business model [8]. With the help of the data system, the effective integration of online and offline information and data such as products, inventory, capital flow, supply chain, logistics, customer information and market demand can be realized [9]. Offline clothing stores can realize electronic marketing and electronic payment, which also makes it convenient for consumers to receive order feedback and keeps online and offline information synchronized. Secondly, mobile Internet technology can be used to push relevant promotional activities on mobile platforms, such as issuing offline store vouchers through Weibo, WeChat official accounts, and mini programs, to drive online-to-offline traffic; because the mobile platform is not restricted by external conditions, clothing stores and online sales staff can process orders in time and improve work efficiency. The advantage of the O2O model lies in the customer experience, so the establishment of a membership management system is particularly important. Member rights must be shared online and offline, and member information should likewise be shared to guarantee members' basic rights. After customers join as ordinary members through a "zero threshold" sign-up, membership levels can be set based on customer consumption, evaluations, sharing, etc., with corresponding membership rights granted at each level. A membership system can effectively improve consumer loyalty, enhance the customer experience, and facilitate the understanding of customer information (such as shopping preferences and consumption status), which is conducive to precise sales and the long-term development of the store.
3.5 Service Side
Intra-city clothing distribution under the O2O model must combine online and offline services to create a comprehensive, high-quality experience space for consumers with a brand-new service concept [10]. In intra-city clothing distribution under the O2O model, offline physical stores are the "main battlefield"; therefore, while satisfying basic service needs, it is necessary to enhance the customer experience. As the model matures, substantial investment is needed to train store sales personnel in skills such as familiarity with customer information, effective product management, and back-end activity pushes. One of the advantages of intra-city clothing distribution under the O2O model is the interoperability between online and offline, so collaboration should also be realized at the service level. For example, the online experience is mostly based on video, pictures, text and other forms; although it is not as direct as an offline physical store, it has wide coverage and convenient information acquisition, is not restricted by external conditions, and can often achieve low cost and high efficiency. At the same time, through online event promotion together with offline clothing display and experience, a two-in-one service platform can be created to enhance consumers' shopping experience and customer stickiness.
4 Conclusion
The development of e-commerce is maturing, and the O2O model is the general trend. Considering that the relatively mature research results on the O2O model are mostly at the theoretical level, and that China's attempts under the O2O model are mostly still exploratory, there are as yet no typical cases of or studies on intra-city clothing distribution. Therefore, intra-city clothing distribution under the O2O model still offers opportunities accompanied by challenges. This paper takes as its theoretical foundation the research progress on the O2O model at home and abroad, the O2O model itself, and the related concepts and analysis of intra-city delivery under the O2O model. By giving examples of application scenarios of different categories under the existing O2O model, this article summarizes the advantages and characteristics of intra-city delivery under the O2O model; through the analysis of cases such as Shenzhen "Cat House", SF "Heike", and Uniqlo, the current situation, characteristics and shortcomings of intra-city clothing distribution under the O2O model are obtained. Finally, focusing on the five aspects of supply, sales, distribution, technology and service, we systematically put forward proposals for intra-city clothing distribution under the O2O model, and conclude that this model can organically integrate physical clothing stores with online stores and channels, so as to further develop clothing stores, enhance their competitiveness, and realize the sustainable growth of clothing store sales. The 21st century is a period of rapid Internet development, and the O2O model has emerged with it. Intra-city clothing distribution under the O2O model meets the development of the times and the needs of the market. We should continue to leverage the advantages of online and offline platforms to achieve benefits where one plus one is greater than two.
At the same time, we must constantly adjust and optimize the business model according to market changes and our own development to achieve long-term development.
References
1. 2017–2022 China Apparel E-commerce Industry Development Prospects and Investment Strategy Research Report. http://www.chyxx.com/industry/201708/551769.html
2. Yongyi, W.: O2O mode: the new blue ocean of traditional apparel enterprise marketing. China Ind. News 11, 1 (2012)
3. Zhang, J.: Principles and Methods of Logistics Planning. Southwest Jiaotong University Press, Chengdu (2011)
4. Zhai, H.: Research on the optimization of cold chain logistics distribution in the same city for fresh O2O e-commerce. Master's thesis, Dalian Maritime University (2015)
5. Yu, L.: Ali promotes Cat House, JD.com cooperates, and SF Express also opens stores - the "last mile" e-commerce new battlefield. Shenzhen Commercial Daily (2014)
6. Si, F.: Research on the O2O marketing model of SY baby clothing company. Master's thesis, Shandong University of Technology (2019)
7. Wang, Q.: Research on the O2O business model of traditional retailers - taking YZ group as an example. Master's thesis, Shandong University of Finance and Economics (2016)
8. Li, C.: Four O2O models. Manager 6, 28 (2014)
9. Chi, L.: Talking about the opportunities and challenges faced by the O2O model of e-commerce. Bus. Times, 63 (2014)
10. Wei, L.: Business model innovation of the apparel industry from the perspective of O2O. J. Hubei Univ. Sci. Technol. 10, 2 (2014)
Construction of Budget Evaluation Index System for Application-Oriented Undergraduate Universities Based on Artificial Intelligence

Dahua Wang and Guohua Song(&)

College of Business Administration, Jilin Engineering Normal University, Changchun, Jilin, China
[email protected]
Abstract. With the deepening of the reform of the higher education management system, more and more attention has been paid to the benefits of running a school, and economic benefits have become increasingly prominent. However, there are also many problems in resource management and utilization in colleges and universities, such as the mismatch between capital input and output, the inflation of student training costs, high debt ratios, and low efficiency of capital utilization. In this situation, effective measures must be taken to improve the efficiency with which existing funds are used, in order to ensure the smooth progress of work and the achievement of set targets. Therefore, the budget evaluation system emphasizing the effectiveness of funds has begun to attract the attention of colleges and universities. Based on artificial intelligence, this paper constructs a budget evaluation index system for application-oriented undergraduate universities. Firstly, the paper gives a theoretical overview of the performance budget evaluation system and a brief introduction to artificial intelligence technology; secondly, it constructs the budget evaluation system of application-oriented undergraduate universities; finally, it puts forward safeguard measures for applying the budget evaluation system. Budget management is an important part of financial management in colleges and universities and is of definite significance to the improvement of management level and financial status.
Keywords: Application-oriented undergraduate colleges · Budget · Performance evaluation · Indicator system
1 Introduction
In recent years, China's institutions of higher learning have continued to expand in overall scale, their financial income and expenditure have grown significantly, and they have ushered in unprecedentedly rapid development. The country attaches great importance to the development of colleges and universities and the corresponding investment is expanding, but resources are limited after all, so expenditure efficiency is still not very satisfactory. Therefore, how to promote the optimal allocation of resources in universities has become a hot topic in academia.
© The Editor(s) (if applicable) and The Author(s), under exclusive license to Springer Nature Singapore Pte Ltd. 2021. M. Atiquzzaman et al. (Eds.): BDCPS 2020, AISC 1303, pp. 555–562, 2021. https://doi.org/10.1007/978-981-33-4572-0_80
In order to
D. Wang and G. Song
alleviate the problem of unreasonable allocation, a performance-evaluation-oriented allocation mechanism was first introduced into China and a preliminary system of university budget allocation was established; after the introduction of the national medium- and long-term program for education reform and development, institutions of higher learning were required to carry out performance evaluation of budget management and implement dynamic management. As the government shifts its focus from capital investment to the performance of capital use, implementing performance evaluation of budget execution and forming a budget management system that is budget-based, goal-oriented and performance-centered are important forces promoting the rapid and comprehensive development of colleges and universities. As a key link of budget management, it is necessary to evaluate the budget implementation of application-oriented universities in China [1]. Within this complete system, the establishment of a performance evaluation index system for budget implementation is conducive to enriching the consideration of the economic results of budget management [2]. The budget performance evaluation index system emphasizes the important position of performance evaluation of budget implementation within budget management; it helps to clarify the responsibilities of the relevant staff of application-oriented undergraduate colleges, improve the allocation of resources, enhance the efficiency and results of budget use, strengthen budget management, and ultimately raise the level of financial management [3]. Based on artificial intelligence, this paper constructs a budget evaluation index system for application-oriented undergraduate universities. Firstly, this paper gives a theoretical overview of the performance budget evaluation system of colleges and universities.
Secondly, it constructs the budget evaluation system of application-oriented undergraduate universities, including the determination of evaluation principles and standards, the selection of evaluation indicators, and the selection of evaluation methods. Finally, the guarantee measures for the application of the budget evaluation system are summarized.
2 Method
2.1 Artificial Intelligence
Artificial intelligence (AI) is a technology that has an important influence on both society and the economy, and it is also a comprehensive subject in a period of rapid growth [4]. At present, people do not have an accurate understanding of the nature or mechanism of intelligence (such as hearing, vision, knowledge representation, etc.). The dictionary of artificial intelligence defines it as follows: making a computer system simulate the intelligent activities of human beings and complete tasks that could previously only be accomplished by human intelligence is called artificial intelligence. However, this definition does not fully capture the connotation of artificial intelligence, and so far there is no proper way to test the intelligence of machines. Nevertheless, many scholars are focusing on the development of artificial intelligence, and the achievements made in this field have been popularized and applied in many domains [5, 6].
2.2 Principles of Constructing the Budget Evaluation Index System
The basic principles for constructing the budget evaluation index system of application-oriented undergraduate universities mainly include the following. First, the principle of overall optimization [7]: when designing the evaluation index system, it is necessary to comprehensively analyze the actual development of colleges and universities. Second, the scientific principle: when setting evaluation indexes, the characteristics and rules of budget performance evaluation should be combined to achieve the unity of objective reality and subjective goals. Third, the principle of comparability: evaluation indicators must be designed with a consistent collection scope and collection method to ensure that the data can be compared and analyzed [8]. Fourth, the principle of operability. Fifth, the principle of dynamics: the overall pattern of change should be fully considered when designing the evaluation indexes so as not to affect the validity of the evaluation results [9].
2.3 The Construction Method of the Budget Evaluation Index System
(1) Balanced Scorecard System. The balanced scorecard method works toward a strategic goal through a process that includes performance appraisal, performance improvement, strategy implementation and strategy revision [10]. The causal relationships among interacting factors are used to reproduce the strategic trajectory of an organization, and the management of non-financial indicators is emphasized instead of the traditional financial evaluation system. Its approach is to break the overall strategic objective down into a coherent network of measures. The balanced scorecard has the advantages of being comprehensive, promoting innovation in the budget evaluation system, and being more in line with the reality of colleges and universities. Its shortcomings lie in its demanding implementation requirements, such as the collection of non-financial indicators and the scientific quantification of important indicators, which lead to high implementation costs. (2) Key Performance Indicator System. As its name suggests, the key performance indicator system focuses on the influence of key variables on the development of the organization and is based on identifying those key variables. Key variables, or KPIs, should be set in accordance with five principles: clarity, measurability, feasibility, timeliness, and reality. KPI is simply a method that reduces the overall assessment of performance to the assessment of a few key indicators, then compares performance with the selected key indicators to draw an evaluation conclusion. It helps organizations align organizational and individual interests and ultimately achieve their goals. The downside is that it can easily turn performance appraisal into a very mechanical system.
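A minimal sketch of the KPI idea described above: each key indicator is reduced to an attainment ratio against a target and combined with a weight. The indicator names, targets, and weights here are hypothetical illustrations, not values from the paper:

```python
# Minimal KPI scoring sketch: each KPI becomes an attainment ratio
# (actual / target, capped at 1.0) and is combined with a weight.
# Names, targets, and weights are hypothetical, not taken from the paper.
KPIS = [
    # (name, target, weight)
    ("budget expenditure completion rate",  1.00, 0.4),
    ("graduate employment rate",            0.95, 0.3),
    ("per capita research funding (k CNY)", 50.0, 0.3),
]

def kpi_score(actuals):
    """Weighted attainment score in [0, 1] for a dict of actual KPI values."""
    score = 0.0
    for name, target, weight in KPIS:
        attainment = min(actuals[name] / target, 1.0)  # cap over-achievement
        score += weight * attainment
    return score

actuals = {
    "budget expenditure completion rate": 1.04,
    "graduate employment rate": 0.95,
    "per capita research funding (k CNY)": 40.0,
}
print(round(kpi_score(actuals), 3))  # 0.94
```

Capping the attainment ratio at 1.0 is one design choice; a scheme that rewards over-achievement would omit the cap, which is part of why KPI systems can become mechanical if the formulas are applied without judgment.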
3 Experiment
3.1 Selection of Budget Evaluation Indicators
The university budget performance index system can be constructed using both the BSC and KPI. Starting from the actual conditions of colleges and universities, following the principles of both performance management tools, and referring to related research results in recent years, the strategic perspective of application-oriented undergraduate colleges in China can be divided into four dimensions: the financial dimension, the customer dimension, the internal process dimension, and the learning and growth dimension. First, the financial dimension. The main goal of the operation and management of colleges and universities is not to make profits but to develop education and cultivate the talents needed by society. From the perspective of financial management, three representative key indicators should be included: the completion rate of budgetary expenditure, the completion rate of budgetary income, and the income-to-expenditure ratio. Second, the customer dimension. The strategic goal of universities is to meet customer expectations and create more value. From the customer perspective, the indicators involved include: the student-teacher ratio, books per student, the employment rate of graduates, and the compliance rate with the national standard for students' physical health. Third, the internal process dimension. The internal operation and management processes of colleges and universities mainly involve scientific research, administration and other work, so the indicators include: per capita research funding of teachers, per capita expenditure on faculty and staff, per student business expenditure, and the number of provincial and national key disciplines. Fourth, the learning and growth dimension. In the era of the knowledge economy, learning and development are basic requirements for the long-term, rapid development of organizations, and universities are no exception. The relevant indicators include: the ratio of teacher training costs, the annual growth rate of fixed assets, and the growth rate of income.
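The four-dimension indicator system above can be represented as a simple data structure; the grouping follows the text of Sect. 3.1, while the variable names themselves are incidental:

```python
# Budget evaluation indicator system organized by BSC dimension.
INDICATOR_SYSTEM = {
    "financial": [
        "completion rate of budgetary expenditure",
        "completion rate of budgetary income",
        "income-to-expenditure ratio",
    ],
    "customer": [
        "student-teacher ratio",
        "books per student",
        "employment rate of graduates",
        "compliance rate with national physical health standard",
    ],
    "internal process": [
        "per capita research funding of teachers",
        "per capita expenditure on faculty and staff",
        "per student business expenditure",
        "number of provincial and national key disciplines",
    ],
    "learning and growth": [
        "ratio of teacher training costs",
        "annual growth rate of fixed assets",
        "growth rate of income",
    ],
}

# A flat list is convenient later, when assigning a weight to each indicator.
ALL_INDICATORS = [ind for dims in INDICATOR_SYSTEM.values() for ind in dims]
print(len(ALL_INDICATORS))  # 14
```

Keeping the dimension structure explicit mirrors the two-level hierarchy (dimension, then indicator) that the AHP weighting in the next section operates on.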
3.2 The Method of Determining the Weight of Budget Evaluation Indicators
The analytic hierarchy process (AHP) decomposes a complex, interlinked problem into distinct levels in order to simplify it: a scientific model is built from structured objective reasoning, the factors affecting the evaluation result are layered, graded by importance, and ordered according to fixed rules, and the corresponding results are then studied and analyzed. In practical work, the process of budget evaluation is easily affected by subjective factors and its decision-making criteria have a certain structural complexity. Therefore, this paper adopts AHP; the specific process of weight assignment is shown in Fig. 1. First, a hierarchical model is established. According to AHP, in constructing the budget performance evaluation system of colleges and universities, the hierarchy can be divided into a target layer, a middle (criteria) layer and a scheme layer. Secondly, the evaluation index judgment matrix is constructed. Each evaluation index has a different importance in the overall evaluation system, so it is
Fig. 1. The determination of budget evaluation index weight (build hierarchical model → construct evaluation index judgment matrix → calculate the weight of each evaluation index)
necessary to judge the relative importance of the different indicators. In this paper, a judgment matrix is constructed to quantify the importance of each indicator, and finally the weight coefficient of each indicator is obtained. Finally, the evaluation index weights are calculated. Based on the judgments of a university financial expert, the corresponding model is established, the weights are ranked according to the content of the different levels, and the specific values are calculated from the maximum eigenvalue of the judgment matrix and its corresponding eigenvector.
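As a sketch of this step, the following uses the row geometric mean (a standard approximation to the principal eigenvector) to derive weights from a hypothetical 3x3 judgment matrix and checks Saaty's consistency ratio; the pairwise values are illustrative, not the expert judgments used in the paper:

```python
import math

def ahp_weights(matrix):
    """Approximate AHP priority weights via the row geometric mean method."""
    n = len(matrix)
    gmeans = [math.prod(row) ** (1.0 / n) for row in matrix]
    total = sum(gmeans)
    return [g / total for g in gmeans]

def consistency_ratio(matrix, weights):
    """Saaty's CR = CI / RI; a matrix is usually accepted when CR < 0.1."""
    n = len(matrix)
    # lambda_max approximated by averaging (A w)_i / w_i over the rows
    lam = sum(
        sum(matrix[i][j] * weights[j] for j in range(n)) / weights[i]
        for i in range(n)
    ) / n
    ci = (lam - n) / (n - 1)
    ri = {3: 0.58, 4: 0.90, 5: 1.12}[n]  # Saaty's random index
    return ci / ri

# Hypothetical pairwise comparisons of three criteria (Saaty's 1-9 scale).
A = [
    [1.0,   3.0,   5.0],
    [1 / 3, 1.0,   3.0],
    [1 / 5, 1 / 3, 1.0],
]
w = ahp_weights(A)
print([round(x, 3) for x in w])          # [0.637, 0.258, 0.105]
print(round(consistency_ratio(A, w), 3))  # 0.033 < 0.1, acceptably consistent
```

The exact eigenvector method the text describes would instead take the eigenvector of the maximum eigenvalue; for well-conditioned judgment matrices the geometric mean gives very close weights and avoids a numerical linear algebra dependency.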
4 Discussion
Taking 2013 as the base period (100 points), the budget evaluation scores of the university in 2014 and 2015 were 104.97 and 113.82 respectively. Compared with 2013, the budget performance level of the university improved continuously in 2014 and 2015, although some problems remained. The specific data are shown in Table 1 below, and a detailed analysis by level and category is shown in Fig. 2.

Table 1. Application of budget evaluation

Level                       Year 2013  Year 2014  Year 2015
Financial Level             116.00%    108.37%    110.36%
Customer Level              94.42%     94.27%     93.22%
Internal Operation Level    32.18%     48.17%     49.84%
Learning And Growth Level   4.42%      6.13%      11.62%
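The comparison against the 2013 base period can be sketched as a weighted index. The level weights below are hypothetical placeholders, since the paper does not restate the AHP weights it used, so the output will not reproduce the 104.97 and 113.82 scores exactly:

```python
# Index a year's level scores against a base year scaled to 100 points.
# The weights are hypothetical placeholders, not the paper's AHP weights.
WEIGHTS = {"financial": 0.3, "customer": 0.3, "internal": 0.2, "growth": 0.2}

SCORES = {  # level scores per year, from Table 1, expressed as fractions
    2013: {"financial": 1.1600, "customer": 0.9442, "internal": 0.3218, "growth": 0.0442},
    2015: {"financial": 1.1036, "customer": 0.9322, "internal": 0.4984, "growth": 0.1162},
}

def composite(year):
    """Weighted sum of a year's level scores."""
    return sum(WEIGHTS[k] * SCORES[year][k] for k in WEIGHTS)

def indexed(year, base=2013):
    """Composite score expressed relative to the base year (base = 100)."""
    return 100.0 * composite(year) / composite(base)

print(round(indexed(2015), 2))
```

With any weight vector, the base year scores exactly 100 by construction, so the indexed value directly shows improvement or decline against 2013.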
4.1 Financial Analysis
The completion rates of budget revenue in 2014 and 2015 were both greater than 100%, indicating that the university has a good ability to realize budgeted revenue. The budget expenditure completion rate was controlled at around 104%, indicating that the university can realize and control expenditure well, and that the income and expenditure budget is reasonable and feasible. The income-to-expenditure ratios of 2014 and 2015 were 107.67% and 106.11% respectively, indicating that the income of the school can meet the needs of
Fig. 2. Stratification of budget evaluation (pie chart of the four levels, with segment shares of 33.33%, 26.67%, 20.00% and 20.00%)
expenditure, and that the school has abundant funds, providing a solid financial guarantee for teaching, scientific research, infrastructure construction and special construction.
4.2 Customer Level Analysis
From 2013 to 2015, the student-teacher ratio of the applied university remained within the range set by the Ministry of Education, but the index rose year by year and moved ever closer to the standard value, showing that the school needs to introduce more teaching staff, pay more attention to interaction between teachers and students, and attend to the cultivation of both students and teachers in order to satisfy the demands of its rapid expansion. The employment rate of graduates in 2014 and 2015 remained above 95%, and the compliance rate with the national standard for students' physical health was around 94%. In the overall employment environment, the university's graduates still have their own advantages.
4.3 Internal Operation Level Analysis
In 2014 and 2015, per-student operating expenditure decreased compared with 2013, indicating that the per-student training cost of this school is decreasing. However, in order to pursue a higher level of education, it is suggested to increase expenditure on student training. In 2014 and 2015, teachers' per capita scientific research funds increased by a large margin compared with the base year, with total scores of 13.15 and 14.42 respectively. This is closely related to the further deepening of the reform of the school's scientific research management system and operation mechanism, and to the vigorous improvement of its scientific research level and innovation ability. In 2014 and 2015, per capita expenditure on faculty and staff was higher than in 2013, indicating that the university has steadily improved the career development ability of faculty and staff as well as its management level and operational efficiency.
Construction of Budget Evaluation Index System for Application-Oriented
561
4.4 Learning and Growth Level Analysis
The annual revenue growth rates in 2014 and 2015 were 9.36% and 17.75%, a substantial increase over the 6.75% of 2013, indicating that the steady growth of the school's income is the driving force for its development in all respects. Besides actively striving for financial allocations, it is suggested to diversify the sources of funding. The growth rate of fixed assets and the teacher training rate declined; given the limited total funding, it is suggested to resolutely increase investment in fixed assets, teacher training and experimental training, pay more attention to strengthening the teaching staff, and make full use of improvements in teachers' professional ability to guarantee higher-quality teaching and research.
5 Conclusion
In the budget performance evaluation of application-oriented undergraduate universities, there are still loopholes and shortcomings, such as insufficient emphasis on the work, an imperfect legal guarantee system, non-standard and unscientific system construction, and a lack of corresponding supervision and management mechanisms. Using the balanced scorecard and key performance indicators, this paper establishes a budget performance indicator evaluation system suitable for colleges and universities. Applying the constructed index system to the actual situation of a particular university, a case analysis finds that the university's budget performance level has improved over the recent three years, but some problems remain. We should further strengthen the publicity of budget performance management, establish a feedback mechanism for budget performance evaluation and a monitoring mechanism for budget performance, and strengthen the training of budget performance management personnel. Acknowledgements. This work was supported by Jilin Association for Higher Education (Grant No. JGJX2019D211).
Smart City Evaluation Index System: Based on AHP Method
Fang Du1,2, Linghua Zhang1,2(✉), and Fei Du3
1 School of International Studies, Sichuan University, Chengdu, China
[email protected]
2 School of Public Administration, Sichuan University, Chengdu, China
3 Department of Law and Political Science, North China Electric Power University, Hebei, China
Abstract. In recent years, with the rapid expansion of urbanization, urban diseases have been getting worse. The smart city, as a sustainable development solution, has played an increasingly important role. Based on existing smart city evaluation systems, this study constructs an evaluation index system with 5 dimensions and 14 indicators. The AHP method is introduced to calculate the weights of the indicators and their ranking. The results show that among the five dimensions, smart infrastructure is the most important, followed by smart public services, smart public management, and the smart industry economy, and finally the smart security system. This smart city evaluation index system provides a theoretical reference and practical basis for measuring the development level of smart cities. Keywords: Evaluation index system · Smart city · Analytic hierarchy process
1 Introduction
As the urbanization process accelerates, urban diseases such as population expansion, traffic congestion, housing shortages, environmental degradation, disorder, and public safety problems have become more and more serious, leading to chaotic urban management. To solve the "urban disease", the "smart city" solution was introduced. Smart cities originated from the "Smart Earth" strategy proposed by IBM in 2009, which proposes to make the earth smarter by establishing an "instrumented, interconnected, and intelligent" world [1]. In 2010, the concept of smart cities was introduced into China. In 2012, pilot work on smart cities began. In 2014, the State Council explicitly promoted the construction of smart cities. By 2017, more than 500 smart cities had been built [2]. Subsequently, smart cities entered a phase of rapid development. At present, although smart cities are developing quickly and urbanization is driving the development of an industrialized society, they still face many challenges. An evaluation index system can guide the sustainable development of smart cities and truly promote their "smart" development. With reference to existing indicator systems, this study analyzes and constructs a smart city evaluation index system, then uses the AHP method to assign indicator weights and prioritize them, providing a theoretical basis and practical reference for domestic smart city evaluation. © The Editor(s) (if applicable) and The Author(s), under exclusive license to Springer Nature Singapore Pte Ltd. 2021 M. Atiquzzaman et al. (Eds.): BDCPS 2020, AISC 1303, pp. 563–569, 2021. https://doi.org/10.1007/978-981-33-4572-0_81
564
F. Du et al.
2 Overview of the Existing Evaluation Systems for Smart Cities
The assessment of smart cities is subject to multiple factors; different regions, countries, institutions, and scholars choose different indicators. Existing evaluation index systems include the "American Smart City Forum (ICF) evaluation system, EU smart city evaluation index system, National Development and Reform Commission evaluation index system, Shanghai Pudong New Area smart city evaluation system, and Guomai Interconnection evaluation system", etc. These evaluation index systems have their own characteristics and each plays an important role [3].
3 Construction of Smart City Index System This article takes Wang Zhengyuan and Chen Jinn’s evaluation index system on smart cities as a reference. [4, 5] taking “smart city evaluation” as the target layer, and focuses on “smart infrastructure (A1), smart public management (A2), smart public services (A3), Smart Security System (A4) and Smart Industry Economy (A5)” are Criterion dimension indicators and 14 Scheme layer indicators, as shown in the table below (Table 1). Table 1. Evaluation index system Target layer Criterion dimension Smart city (A) Smart infrastructure (A1) Smart
Smart Smart
Smart
Scheme indicators Broadband network level (B1) Perceive network level (B2) public management (A2) E-government construction (B3) City safety construction (B4) Smart environmental construction (B5) Smart transportation construction (B6) public services (A3) Smart medical construction (B7) Smart education construction (B8) security system (A4) Rule support (B9) Talent support (B10) Financial support (B11) industry economy (A5) Smart application technology (B12) Smart equipment manufacturing (B13) Cloud computing industry (B14)
Smart City Evaluation Index System: Based on AHP Method
565
4 AHP Method: Index Weight Assignment and Prioritization
The AHP method is introduced to calculate the weights of the evaluation index system [6]: a hierarchical analysis structure is constructed, expert scores generate a judgment matrix, consistency is checked, the weights are calculated, and single-level and total-level orderings are performed.
4.1 Constructing the Hierarchical Analysis Structure
Constructing a good hierarchical analysis structure is extremely important for problem solving [7]; it determines the effectiveness of the analysis results. The hierarchical structure is shown in the figure below (Fig. 1).
Fig. 1. Hierarchical analysis structure
4.2 Constructing the Judgment Matrix
In this study, matrix A is taken as an example to construct the judgment matrix. Firstly, experienced experts are invited to score the relative importance of the five dimensions (A1–A5). The relative importance is based on Saaty's 1–9 ratio scale: two factors are compared with each other, and their relative importance is assigned a specific value. The comparison values are shown below (Table 2). According to that table, the judgment matrix is derived as shown below, and can be represented by A (Table 3).
Table 2. Judgment matrix 1–9 ratio scale and its meaning
Table 3. A matrix
4.3 Consistency Test of the Judgment Matrix
Taking matrix A as an example, we describe the weight assignment and prioritization of A1 (smart infrastructure), A2 (smart public management), A3 (smart public services), A4 (smart security system) and A5 (smart industry economy).
a) Calculate the n-th root of the product of the elements in each row of the judgment matrix:
w̄_i = (∏_{j=1}^{n} a_ij)^{1/n}, i = 1, 2, …, n  (1)
Normalize the vector w̄ = (w̄_1, w̄_2, …, w̄_n)^T to obtain the weights:
w_i = w̄_i / Σ_{j=1}^{n} w̄_j, i = 1, 2, …, n  (2)
The resulting w = (w_1, w_2, …, w_n)^T is the approximate value of the eigenvector.
b) Compute λ_max (the maximum eigenvalue):
λ_max = Σ_{i=1}^{n} (Aw)_i / (n w_i)  (3)
c) Check consistency, which judges whether the factors satisfy the consistency requirement:
C.R. = C.I. / R.I.  (4)
This formula is used to check the consistency of the decision-makers' judgments. The larger the C.I. value, the greater the deviation of the judgment matrix from complete consistency; the smaller the C.I. value (close to 0), the better the consistency of the judgment matrix [8]. For judgment matrices of different orders, the acceptable consistency error differs, and so does the C.I. requirement. To measure whether matrices of different orders have satisfactory consistency, the average random consistency index R.I. must also be introduced; the detailed values are shown in the table below. In this study, there are 5 dimensions, so the R.I. value is 1.12 (Table 4).
Table 4. Order-R.I. value
Order n | 1 | 2 | 3    | 4    | 5    | 6    | 7    | 8    | 9
R.I.    | 0 | 0 | 0.58 | 0.90 | 1.12 | 1.24 | 1.32 | 1.41 | 1.45
4.4 Hierarchical Single Ordering and Hierarchical Total Ordering
According to the above calculation process, we obtain the relative weights of A1–A5 in matrix A and rank them by weight. Where C.R. < 0.1, the matrix can be judged to have satisfactory consistency. The weights and their ordering are as follows (Table 5).
Table 5. Weight assignment and ranking of smart city evaluation index system
Dimension                   | Weight | Rank
Smart infrastructure (A1)   | 0.417  | 1
Smart public services (A3)  | 0.263  | 2
Smart public management (A2)| 0.160  | 3
Smart industry economy (A5) | 0.098  | 4
Smart security system (A4)  | 0.062  | 5
(Note: λ_max = 5.068; C.I. = 0.017; R.I. = 1.12; C.R. = 0.015)
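The weight calculation in Eqs. (1)–(4) can be sketched in a few lines of Python. The judgment matrix below is hypothetical (the paper's actual Table 3 is not reproduced in the text); it is chosen so that the resulting weights come out close to those reported in Table 5:

```python
import numpy as np

# Hypothetical 5x5 judgment matrix over A1..A5 on Saaty's 1-9 scale
# (reciprocal: a_ji = 1/a_ij); the paper's actual Table 3 is not reproduced.
A = np.array([
    [1,   3,   2,   5,   4],
    [1/3, 1,   1/2, 3,   2],
    [1/2, 2,   1,   4,   3],
    [1/5, 1/3, 1/4, 1,   1/2],
    [1/4, 1/2, 1/3, 2,   1],
])
n = A.shape[0]

# Eq. (1): n-th root of each row product; Eq. (2): normalize to weights.
w_bar = np.prod(A, axis=1) ** (1.0 / n)
w = w_bar / w_bar.sum()

# Eq. (3): lambda_max = sum_i (A w)_i / (n w_i).
lam_max = np.sum((A @ w) / (n * w))

# Eq. (4): C.R. = C.I. / R.I., with C.I. = (lambda_max - n)/(n - 1)
# and R.I. = 1.12 for a 5th-order matrix.
CI = (lam_max - n) / (n - 1)
CR = CI / 1.12

print("weights (A1..A5):", np.round(w, 3))
print("lambda_max: %.3f  C.R.: %.3f" % (lam_max, CR))
```

With this example matrix the computed weights land near (0.417, 0.160, 0.263, 0.062, 0.098) with C.R. well below the 0.1 acceptance threshold, reproducing the ordering A1 > A3 > A2 > A5 > A4 reported in the text.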
According to the above table, with 5 dimensions, R.I. = 1.12, λ_max = 5.068, and C.R. = 0.015 < 0.1, which demonstrates that the matrix has satisfactory consistency. In summary, smart infrastructure (A1) acquires the highest weight, 0.417, ranking first. It can be deduced that in smart city evaluation, smart infrastructure is particularly important, followed by the other dimensions [9]. This also proves that in the construction of smart cities, infrastructure construction is crucial; smart infrastructure will be a key area of smart city construction in the coming years. Smart public services (A3) get a weight of 0.263, ranking second. In recent years, smart public services have used smart platforms to promote the development of informatization and effectively promote the sharing of urban public resources throughout the city. Smart public services promote the coordinated and efficient operation of urban flows of people, logistics, information, and capital [10]. Smart public management (A2) ranks third, with a weight of 0.160. Its purpose is to improve the level of public management, focusing mainly on the management of municipal infrastructure construction, response to major emergencies, environmental monitoring, urban intelligent transportation, and public safety, eventually achieving intelligent and efficient management. The smart industry economy (A5) ranks fourth, with a weight of 0.098. The smart industry is an important force for future urban development. In recent years, it has actively explored development paths, giving full play to the role of enterprises as the main body, promoting the rapid and healthy development of the smart industry, and providing strong support for the sustainable construction of smart cities [11]. Finally, the smart security system (A4) has a weight of 0.062 and plays an underpinning role.
It mainly includes the policy system, technology system, standard system, industrial system, and investment and financing system. These guarantee systems fully participate in and support the development of smart cities.
5 Conclusions
This research draws on the existing literature to build a smart city evaluation index system, and then applies the AHP method to compute weights and rank all indicators. It is worth noting that the construction of a smart city evaluation index system is a theoretical process, and attention should be paid to the following in actual operation: first, the evaluation index system should avoid over-weighting hardware and technical facilities; second, the comprehensiveness of the evaluation system should be enhanced, considering the evaluation of urban characteristic functions; third, the practicality of indicators should be enhanced and exemplary cases highlighted; finally, the numbers of subjective and objective indicators should be balanced, valuing the public's experience and perception. Acknowledgments. Grant No. 17AZD018, "The Principle of Chinese Frontier Science". Grant No. SC19C022, "Theoretical and Empirical Research on Competency of the First Secretary of Poverty Alleviation in Sichuan Tibetan Areas from the Perspective of Rural Revitalization".
References 1. Marino, C.A., Marufuzzaman, M.: Unsupervised learning for deploying smart charging public infrastructure for electric vehicles in sprawling cities. J. Cleaner Prod. 266(09), 7–13 (2020) 2. Cimmino, A., Pecorella, T., Fantacci, R., et al.: The role of small cell technology in future smart city applications. Trans. Emerg. Telecommun. Technol. 25(1), 23–32 (2014) 3. Luckey, D., Fritz, H., Legatiuk, D.: Artificial intelligence techniques for smart city applications. In: Proceedings of the 18th International Conference on Computing in Civil and Building Engineering, ICCCBE, no. 02, pp. 45–54 (2020) 4. Quijano-Sanchez, L., Cantador, I., Cortes-Cediel, M.E.: Recommender system for smart cities. Inf. Syst. 92(09), 67–76 (2020) 5. Almirall, B.T., Wareham, E.J.: A smart city initiative: the case of Barcelona. J. Knowl. Econ. 4(2), 23–34 (2013) 6. He, Q.: AHP-based evaluation model and demonstration of smart city construction level. Stat. Decis. 35(19), 64–67 (2019). (in Chinese) 7. Chen, J., Yu, F., Pan, Y.: Research on the development and ranking of smart cities in China, based on the analysis of the national standards of "smart city evaluation model and basic evaluation index system" in 2017. Tsinghua Manag. Rev. Z1, 17–28 (2018). (in Chinese) 8. Qu, Y., Wang, Q.: The construction of the indicator system for measuring the development level of smart cities. Stat. Decis. 34(11), 33–36 (2018). (in Chinese) 9. Wenfeng, D.: Comparative analysis of smart city evaluation index systems. Qual. Stand. 03, 46–48 (2018). (in Chinese) 10. Jiang, W., Wang, C., Qi, C.: Study on the construction and application of a smart city functional risk assessment model. J. Inf. 37(01), 186–190 (2018). (in Chinese) 11. Yan, J., Liu, J.: The construction of a smart city evaluation system framework based on a self-organizing system. Macroecon. Res. 2018(01), 121–128 (2018). (in Chinese)
Feasibility Analysis of Electric Vehicle Promotion Based on Evolutionary Game
Shiqi Huang(✉)
Department of Management Science and Engineering, Shanghai University, Shanghai, China
[email protected]
Abstract. With the continuous progress of social production technology and the continuous increase in car ownership, environmental and energy issues have become increasingly prominent. Electric vehicles can significantly reduce carbon emissions and increase the use of clean energy, which is of great significance to the development of cities. Since electric vehicles are a means of replacing traditional fuel vehicles, this paper analyzes the game process between fuel vehicles and electric vehicles after the introduction of electric vehicles, based on evolutionary games. Starting from consumers' choice of which type of car to buy, the characteristics of electric vehicles and fuel vehicles are analyzed, and the influence of the service level of electric vehicles, as shaped by scientific research input and government participation in management, is considered in order to establish an evolutionary game model between electric vehicles and fuel vehicles. Using relevant knowledge of dynamics, the parameters of the model are analyzed to obtain the conditions for the stability of each equilibrium point. By analyzing the steady states of the system's evolution, the paper provides theoretical support for the green development of electric vehicles and the formulation of related government policies. Keywords: Electric vehicle · Fuel vehicle · Evolutionary game · Clean energy
1 Introduction
As society pays more attention to energy and environmental issues, replacing some fuel vehicles with electric vehicles has become a hot spot in the transportation field. Having experienced the industrial revolution, developed economies in Europe and the United States paid attention to energy conservation and emission reduction earlier, and issued a series of policies to encourage the development of new energy vehicles. The domestic development of electric vehicles started late, and the relevant policies and regulations are not yet complete. This paper uses an evolutionary game model to provide a theoretical basis for the promotion of electric vehicles and government policies.
1.1 Overview of New Energy Electric Vehicles
China's new energy vehicle production and sales volume exceeded 1.2 million units in 2019. In terms of China's new energy vehicle sales structure, pure electric
© The Editor(s) (if applicable) and The Author(s), under exclusive license to Springer Nature Singapore Pte Ltd. 2021 M. Atiquzzaman et al. (Eds.): BDCPS 2020, AISC 1303, pp. 570–577, 2021. https://doi.org/10.1007/978-981-33-4572-0_82
Feasibility Analysis of Electric Vehicle Promotion
571
passenger vehicles have the highest sales. In 2019, China's pure electric passenger vehicles accounted for approximately 69.3% of China's total sales of new energy vehicles; plug-in hybrid passenger vehicles and pure electric commercial vehicles accounted for 18.8% and 11.4% respectively, and plug-in hybrid commercial vehicles accounted for 0.5% [1]. In terms of geographical distribution, new energy vehicles have the highest sales in first-tier cities such as Beijing, Shanghai, Guangzhou, and Shenzhen, accounting for more than 30% of the total. This is related to the construction and distribution of new energy vehicle charging piles in first-tier cities. In China's 2020 construction plan for charging stations and charging piles, East China and South China occupy the vast majority, with plans to build 7,400 charging stations and 2.5 million charging piles; the central, northern, and northeastern regions plan 4,300 charging stations and 2.2 million charging piles [2, 3], and the western region plans 400 charging stations and 100,000 charging piles. It can be seen that the sales distribution of new energy vehicles and the planned distribution of charging stations are highly coincident, and the distribution of charging stations has become one of the important factors affecting the sales of new energy vehicles.
1.2 Incentives for New Energy Electric Vehicles
In China's manufacturing power strategy "Made in China 2025", new energy electric vehicles are an important breakthrough direction, which requires China's new energy vehicle field to have a complete, independent and controllable industrial chain developed in synchronization with international progress; the country attaches great importance to new energy vehicles [4]. The tax reduction policy for new energy vehicles was first piloted in cities such as Shanghai, Shenzhen, Hangzhou, Changchun, and Hefei, and then extended to the whole country. The subsidy standards have been cumulatively adjusted three times, with not only national support but also strong cooperation from local governments; new energy vehicle projects can also receive policy support funds of 1 to 2 billion yuan each year [5, 6]. In terms of industrial policy, China first implemented the new energy policy on new energy buses. The 2012 "Notice on Expanding the Scope of Work on the Demonstration and Extension of Hybrid City Buses" required major cities to pilot the promotion of hybrid buses. Since then, related industry regulations, such as infrastructure construction standards for new energy vehicle charging stations and charging piles and product access standards, have been introduced one after another.
1.3 New Energy Electric Vehicle Development Path
China's new energy vehicles have two main development directions: one is the development path of electric vehicles based on lithium battery technology, and the other is the path based on hydrogen fuel cells. Due to China's abundant lithium resources, lithium battery technology is at the forefront of the world, and the new energy electric vehicle industry based on lithium batteries is developing rapidly [7]. Regarding the development path of new energy vehicles, China is relatively inclusive. It not only encourages the development of electric vehicles based
572
S. Huang
on lithium battery technology, but also encourages research and development breakthroughs based on hydrogen fuel cells. Both approaches are still in the initial stage of development for China to break through its energy constraints [8]. At present, there are roughly two views on the development of new energy electric vehicles. The first, toward which the Ministry of Industry and Information Technology is more inclined, is to develop in many directions; more importantly, this must cooperate with the energy conservation and emission reduction work on traditional cars, promoting energy conservation and emission reduction of traditional fuel vehicles.
2 Model
2.1 Game Model of Electric Vehicles and Fuel Vehicles
As two types of vehicles, electric vehicles and fuel vehicles compete strongly in the market. This article considers the game process in which consumers choose between purchasing electric cars and fuel cars while the government decides whether to participate in management by formulating policies, and establishes the payoff matrix shown in Table 1.
Table 1. Gaming matrix of electric vehicles and fuel vehicles
             | Government input      | Government does not invest
Electric car | E + I − L(x), G − R   | E − L(x), 0
Fuel car     | F, −R − H             | F, −H
E represents the utility of electric vehicles to consumers. F represents the utility of fuel cars to consumers. I indicates the government's reward for electric vehicles. R indicates the cost of government management. G indicates the social benefit of electric vehicles in terms of energy conservation and environmental protection. H represents the loss caused by fuel cars through environmental pollution. L(x) indicates the dissatisfaction loss to consumers due to immature electric vehicle technology. The expected incomes of individuals buying electric cars and fuel cars, S1 and S2, and the average expected income S̄ are:
S1 = y(E + I − L(x)) + (1 − y)(E − L(x)) = yI + E − xyL  (1)
S2 = yF + (1 − y)F = F  (2)
S̄ = xS1 + (1 − x)S2  (3)
The expected benefits of the government choosing the support or non-support strategy, Z1 and Z2, and the government's average expected benefit Z̄, are as follows:
Z1 = x(G − R) + (1 − x)(−R − H) = xG + xH − R − H  (4)
Z2 = x·0 + (1 − x)(−H) = xH − H  (5)
Z̄ = yZ1 + (1 − y)Z2  (6)
From Eqs. (1) to (6), we obtain the replicator dynamic equations describing the process of repeated games among the members:
dx/dt = x(1 − x)(yI + E − xyL − F)  (7)
dy/dt = y(1 − y)(Gx − R)  (8)
2.2 System Equilibrium and Stability Analysis
Analyzing the replicator dynamic Eq. (7), setting dx/dt = 0 yields x = 0, x = 1, and the interior solution
x* = (yI + E − F) / (yL)
Let f(x) = x(1 − x)(yI + E − xyL − F); then
f′(x) = (1 − 2x)(yI + E − F) − (2x − 3x²)yL  (9)
Assume that the utility of consumers purchasing electric vehicles plus the government subsidy is greater than the utility of purchasing gasoline vehicles, that is, E + I > F.
(1) When 0 < y < (F − E)/I, f′(0) < 0 and f′(1) > 0, so x = 0 is the evolutionarily stable strategy; that is, when the government subsidy is below a certain threshold, consumers eventually choose to buy fuel cars.
(2) When (F − E)/I < y < (F − E)/(I − L), f′(0) > 0, f′(1) > 0 and f′(x*) < 0, so the interior point x* is the evolutionarily stable strategy; that is, when the government subsidy lies within a certain range, the proportions of consumers choosing electric cars and gasoline cars eventually stabilize at values between 0 and 1.
(3) When (F − E)/(I − L) < y < 1, f′(0) > 0 and f′(1) < 0, so x = 1 is the evolutionarily stable strategy; that is, when the government subsidy exceeds a threshold, consumers ultimately choose to buy an electric car. This proves the necessity for the government to formulate policies supporting the development of electric vehicles.
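The regimes above can be illustrated by integrating the replicator equations (7)–(8) directly. The parameter values below are hypothetical, chosen so that E + I > F with L − I < E − F < 0 and G > R, under which the analysis predicts convergence toward full electric vehicle adoption with government support, (x, y) → (1, 1):

```python
# Forward-Euler integration of the replicator dynamics (7)-(8).
# Hypothetical parameters: E=5, F=6, I=3, L=1, G=4, R=2, satisfying
# E + I > F, L - I < E - F < 0 and G > R.
E, F, I, L, G, R = 5.0, 6.0, 3.0, 1.0, 4.0, 2.0

def step(x, y, dt=0.01):
    dx = x * (1 - x) * (y * I + E - x * y * L - F)   # Eq. (7)
    dy = y * (1 - y) * (G * x - R)                   # Eq. (8)
    return x + dt * dx, y + dt * dy

x, y = 0.6, 0.6   # initial shares: EV buyers, government support
for _ in range(200_000):
    x, y = step(x, y)

print(f"long-run state: x={x:.3f}, y={y:.3f}")  # approaches (1, 1)
```

Re-running the same loop with a smaller subsidy I (so that y sits below (F − E)/I along the trajectory) instead drives x toward 0, matching case (1).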
Analyzing the replicator dynamic Eq. (8), let dy/dt = 0; when x = R/G, all y are steady states, and when x ≠ R/G, y = 0 and y = 1 are the steady points.
(4) When 0 < x < R/G, y = 0 is the evolutionarily stable strategy; that is, when the proportion of consumers choosing to buy electric vehicles is below a certain threshold, the government takes no action.
(5) When R/G < x < 1, y = 1 is the evolutionarily stable strategy; that is, when the proportion of consumers choosing electric vehicles exceeds a certain threshold, the government formulates policies to promote the development of the electric vehicle industry.
Solving the replicator dynamic equations (7) and (8) simultaneously gives the local equilibrium points of the dynamic system: (0, 0), (0, 1), (1, 0), (1, 1) and (R/G, G(F − E)/(IG − RL)). According to the method proposed by Friedman (1991), the stability of the system's equilibrium points can be obtained through a local stability analysis of the Jacobian matrix of the system. Differentiating formulas (7) and (8) with respect to x and y yields the Jacobian matrix:
J = | (1 − 2x)(yI + E − F) − (2x − 3x²)yL    x(1 − x)(I − xL)  |
    | G(y − y²)                              (1 − 2y)(xG − R)  |
This gives the determinant of the Jacobian matrix as
Det(J) = [(1 − 2x)(yI + E − F) − (2x − 3x²)yL](1 − 2y)(xG − R) − x(1 − x)(I − xL)G(y − y²)  (10)
and the trace of the Jacobian matrix as
Tr(J) = (1 − 2x)(yI + E − F) − (2x − 3x²)yL + (1 − 2y)(xG − R)  (11)
Substituting the values of each equilibrium point into formulas (10) and (11), the corresponding determinants and traces are obtained as shown in Table 2:
Substituting the values of each equilibrium point into formulas (10) and (11), the corresponding determinants and traces are obtained as shown in Table 2:
Table 2. Determinants and traces Equilibrium point DetðJÞ ð0; 0Þ ðE FÞR ð0; 1Þ ðI þ E FÞR ð1; 0Þ ðF EÞðG RÞ ð1; 1Þ ðF E I þ LÞðR GÞ FEÞG Þ ðGR ; ðIGRL
TrðJÞ EFR I þE F þR F EþG R F E I þLþR G
ðFEÞG ðFEÞRL Rð1 GR ÞðF EÞðIGRL Þ ðRG IGRL G Þ IGRL
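Table 2 can be checked numerically for a concrete, hypothetical parameter set. With E=5, F=6, I=3, L=1, G=4, R=2 (so that E − F < 0, L − I < E − F and G > R), evaluating the sign of Det(J) and Tr(J) at each equilibrium classifies it directly:

```python
# Hypothetical parameters satisfying E - F < 0, L - I < E - F < 0, G > R.
E, F, I, L, G, R = 5.0, 6.0, 3.0, 1.0, 4.0, 2.0

def jacobian(x, y):
    # Entries of J, the partial derivatives underlying Eqs. (10)-(11).
    j11 = (1 - 2*x) * (y*I + E - F) - (2*x - 3*x**2) * y * L
    j12 = x * (1 - x) * (I - x*L)
    j21 = G * (y - y**2)
    j22 = (1 - 2*y) * (x*G - R)
    return j11, j12, j21, j22

def classify(x, y):
    j11, j12, j21, j22 = jacobian(x, y)
    det = j11 * j22 - j12 * j21      # Eq. (10)
    tr = j11 + j22                   # Eq. (11)
    if det < 0:
        return "saddle point"
    return "stable point" if tr < 0 else "unstable point"

# Interior equilibrium (R/G, G(F - E)/(IG - RL)).
x_star = R / G
y_star = G * (F - E) / (I*G - R*L)

for pt in [(0, 0), (0, 1), (1, 0), (1, 1), (x_star, y_star)]:
    print(pt, "->", classify(*pt))
```

For this parameter set both (0, 0) and (1, 1) come out stable (E − F < 0 makes (0, 0) stable, while L − I < E − F < 0 with G > R makes (1, 1) stable), the corner points (0, 1) and (1, 0) are unstable, and the interior point is a saddle separating the two basins, consistent with the conditions listed in Table 5.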
Feasibility Analysis of Electric Vehicle Promotion
575
Table 3. Types of stability points for L − I < 0
[Table 3 classifies each of the equilibrium points (0, 0), (0, 1), (1, 0), (1, 1) and (R/G, G(F − E)/(IG − RL)) as a stable point, unstable point or saddle point, under G < R and G > R, for the subcases L − I < E − F < 0, E − F < L − I and 0 < E − F. Each classification follows from the signs of Det(J) and Tr(J) in formulas (10) and (11).]
According to the stability theory of differential equations, the type of each equilibrium point in the different cases is determined by judging the signs of the Jacobian determinant and trace at that point (Tables 3 and 4).
Table 4. Types of stability points for L − I > 0
[Table 4 gives the analogous classification of the five equilibrium points, under G < R and G > R, for the subcases E − F < 0, 0 < E − F < L − I and L − I < E − F.]
In summary, the system has four stable points, namely (0, 0), (1, 0), (1, 1) and (R/G, G(F − E)/(IG − RL)); their stability conditions are shown in Table 5:
Table 5. Stability points and conditions
Stable point | Conditions
(0, 0) | E − F < 0
(1, 0) | 0 < E − F, G < R
(1, 1) | L − I < E − F < 0, G > R; or 0 < L − I < E − F, G > R
(R/G, G(F − E)/(IG − RL)) | 0 < E − F, G > R
The point (0, 0) is a stable point, with evolution condition E − F < 0; that is, when the consumer's utility from purchasing an electric vehicle is less than that from a gasoline vehicle, the consumer ultimately chooses to buy a gasoline vehicle, and the government adopts a non-support strategy [9]. This situation arises mainly because consumers are more inclined to purchase fuel vehicles when the utility of fuel vehicles is higher. It corresponds to the initial stage of electric vehicles, indicating that if car manufacturers want to expand the electric vehicle market, they must invest in research and development to make electric vehicles more useful to consumers and thereby attract purchases.
The point (1, 0) is a stable point, with evolution condition 0 < E − F, G < R; that is, the utility of electric vehicles is greater than that of fuel vehicles, while the benefit of government policy is less than the cost of formulating it, so the government adopts a non-support strategy toward electric vehicles. This situation corresponds to the mature stage of electric vehicles: the technological level has reached a certain standard, and electric vehicle manufacturers can operate normally without government support. Consumers ultimately choose to buy electric vehicles, and no special policies are needed to promote their development.
The stable evolution condition of the point (1, 1) is L − I < E − F < 0, G > R: the utility of the fuel vehicle exceeds that of the electric vehicle, but the gap is smaller than the sum of the government reward and the loss the consumer incurs from the dissatisfaction of that choice, and the benefit the government gains from the policy exceeds its formulation cost; consumers then choose to buy electric vehicles, and the government formulates policies to support their development. Alternatively, 0 < L − I < E − F, G > R: the utility of the electric vehicle exceeds that of the fuel vehicle by more than the difference between the consumer's dissatisfaction and the government subsidy, and the policy's benefit exceeds its cost, so again consumers buy electric vehicles and the government supports them [10]. The stable evolution condition of the point (R/G, (F − E)/(I − L)) is 0 < E − F, G > R: the utility of electric vehicles is greater than that of fuel vehicles, and the benefit of the government's policy is greater than its cost, so the system converges to this mixed point. This case occurs during the rapid development of electric vehicles: thanks to technological progress, the utility of electric vehicles has gradually become greater than that of fuel vehicles, the government further promotes their development through a certain amount of subsidy, and consumers buy electric vehicles.
Feasibility Analysis of Electric Vehicle Promotion
3 Conclusions

This article establishes a game model of electric vehicles and fuel vehicles, considering whether the government formulates policies to participate in management, and studies consumers' choice behavior when facing the two types of vehicles. By analyzing the model's replicated dynamic equations, this paper obtains four cases in which an equilibrium point is stable, and provides a feasibility analysis for the green promotion of electric vehicles.

Acknowledgements. The author wishes to thank the referees for helpful comments that strengthened this paper.
References
1. Steininger, K., Vogl, C., Zettl, R.: Car-sharing organizations: the size of the market segment and revealed change in mobility behavior. Transp. Policy 3(4), 177–185 (1996)
2. Catalano, M., Casto, B.L., Migliore, M.: Car sharing demand estimation and urban transport demand modelling using stated preference techniques. EUT Edizioni Università di Trieste, ISTIEE Istituto per lo studio dei trasporti nell'integrazione economica europea (2008)
3. Shaheen, S.A., Mallery, M.A., Kingskey, K.J.: Personal vehicle sharing services in North America. Res. Transp. Bus. Manag. 3, 71–81 (2012)
4. Schmöller, S., Bogenberger, K.: Analyzing external factors on the spatial and temporal demand of car sharing systems. Procedia Soc. Behav. Sci. 111, 8–17 (2014)
5. Alencar, V.A., Rooke, F., Cocca, M., et al.: Characterizing client usage patterns and service demand for car-sharing systems. Inf. Syst. 101448 (2019)
6. Barth, M., Todd, M.: Simulation model performance analysis of a multiple station shared vehicle system. Transp. Res. Part C Emerg. Technol. 7(4), 237–259 (1999)
7. Xu, J., Lim, J.S.: A new evolutionary neural network for forecasting net flow of a car sharing system. In: Proceedings of the 2007 IEEE Congress on Evolutionary Computation, 25–28 September 2007 (2007)
8. Ke, J., Zheng, H., Yang, H., et al.: Short-term forecasting of passenger demand under on-demand ride services: a spatio-temporal deep learning approach. Transp. Res. Part C Emerg. Technol. 85, 591–608 (2017)
9. Heilig, M., Mallig, N., Schröder, O., et al.: Implementation of free-floating and station-based carsharing in an agent-based travel demand model. Travel Behav. Soc. 12, 151–158 (2018)
10. Wang, N., Guo, J., Liu, X., et al.: A service demand forecasting model for one-way electric car-sharing systems combining long short-term memory networks with Granger causality test. J. Clean. Prod. 244, 118812 (2019)
The Intelligent Fault Diagnosis Method Based on Fuzzy Neural Network

Kun Han1, Tongfei Shang1(&), Jingwei Yang2, and Yuan Yu3

1 College of Information and Communication, National University of Defense Technology, Xi'an, China
[email protected]
2 78111 Troops, Chengdu, China
3 Xi'an Satellite Control Center, Xi'an, China
Abstract. The intelligent diagnosis technology of engines is developing towards automation and intelligence. This trend not only puts forward higher requirements for the safety and reliability of engines, but also promotes progress in online monitoring of engine operating conditions and in fault diagnosis and prediction technology. The paper makes full use of the respective advantages of fuzzy systems and neural networks, which together can diagnose engine faults intelligently. Finally, simulation analysis proves the effectiveness of the proposed method.

Keywords: Fuzzy neural network · Intelligent fault · Fault diagnosis
1 Introduction

With the development of artificial intelligence, intelligent fault diagnosis has been widely used in many fields. A neural network is an information processing system built by imitating the working principles of biological nervous systems: it simulates their information processing function, namely the input–output characteristics, and its neuron connection patterns, connection weights, and neuron thresholds form an adaptive nonlinear network composed of a large number of identical simple basic units connected to each other [1, 2]. Among commonly used neural networks, back-propagation networks are widely applied because of their high reliability and easily implemented algorithms, and many scholars use them in the intelligent diagnosis of engine faults; by combining different discharge characteristics with the network, engine faults are identified with good results [3, 4]. The engine fault diagnosis method based on a fuzzy neural network proposed in this paper can effectively identify engine faults automatically and has great practical value.
© The Editor(s) (if applicable) and The Author(s), under exclusive license to Springer Nature Singapore Pte Ltd. 2021 M. Atiquzzaman et al. (Eds.): BDCPS 2020, AISC 1303, pp. 578–583, 2021. https://doi.org/10.1007/978-981-33-4572-0_83
2 Fuzzy Neural Network

The main idea of combining fuzzy methods with neural networks is to introduce qualitative knowledge into the neural network framework: by adding fuzzy layers at the input and output of a conventional neural network, neural computation and fuzzy logic are combined, so that learning is no longer a pure black-box process and the network weights have a clearer physical meaning [5, 6]. Such a network combines the advantages of neural networks and fuzzy logic, making the two methods complementary, and has many advantages over a single neural network or a single fuzzy inference system. The structural fusion of fuzzy logic and a neural network is in fact equivalent to a fuzzy logic system. The adaptive fuzzy neural network is common: it has strong learning ability and can self-adjust its fuzzy rules, which is convenient in practice. Figure 1 shows the fuzzy rule input first-order model.
Fig. 1. Fuzzy rule input first-order model
The membership functions used in the inference system are zero-order or first-order functions of the system inputs, and adaptive capabilities are easily added during construction to improve performance [7, 8]. The inference rule of the fuzzy rule input first-order model is:

if x is A and y is B then z = f(x, y)    (1)

where A and B are the fuzzy quantities of the premise, z is the precise quantity of the conclusion, and f(x, y) is a first-order polynomial. A two-input single-output fuzzy neural network illustrates this, as shown in Fig. 2.
Fig. 2. Fuzzy neural network structure
Layer 1: The nodes in this layer fuzzify the data signal; the output function of each node can be expressed as:

O1,i = μAi(x1), i = 1, 2;  O1,j = μBj(x2), j = 1, 2    (2)
where A and B are the linguistic labels of the related nodes, Ai and Bj are fuzzy sets, and μAi(x1) and μBj(x2) are the membership functions of the corresponding fuzzy sets.

Layer 2: The output of each node in this layer is the product of all its input signals; the result is the firing strength of the rule:

O2,i = wi = μAi(x1) μBi(x2), i = 1, 2    (3)
Layer 3: The nodes of this layer normalize the firing strengths computed in the previous layer; each node computes the ratio of this rule's firing strength to the sum of all firing strengths:

O3,i = w̄i = wi / (w1 + w2), i = 1, 2    (4)
Layer 4: This layer computes the output corresponding to each rule; all nodes use linear transfer functions:

O4,i = w̄i fi = w̄i (pi x1 + qi x2 + ri), i = 1, 2    (5)

In the above formula, if fi is a first-order linear function, the system is a first-order system; if fi is a constant, it is a zero-order system.

Layer 5: This layer computes the sum of the outputs passed by all rules as the total output of the system:
O5,i = Σi w̄i fi = (Σi wi fi) / (Σi wi)    (6)
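The five-layer forward pass of Eqs. (2)–(6) can be sketched as follows. Gaussian membership functions and the specific premise and conclusion parameter values are illustrative assumptions; the paper does not fix them.

```python
import math

def gauss(x, c, s):
    """Gaussian membership function with center c and width s (an assumed form)."""
    return math.exp(-((x - c) ** 2) / (2 * s ** 2))

def anfis_forward(x1, x2, prem, concl):
    """Forward pass of the two-rule network in Fig. 2.
    prem:  [(cA1, sA1), (cA2, sA2), (cB1, sB1), (cB2, sB2)]
    concl: [(p1, q1, r1), (p2, q2, r2)]
    """
    (cA1, sA1), (cA2, sA2), (cB1, sB1), (cB2, sB2) = prem
    # Layer 1: fuzzification, eq. (2)
    muA = [gauss(x1, cA1, sA1), gauss(x1, cA2, sA2)]
    muB = [gauss(x2, cB1, sB1), gauss(x2, cB2, sB2)]
    # Layer 2: rule firing strengths, eq. (3)
    w = [muA[0] * muB[0], muA[1] * muB[1]]
    # Layer 3: normalization, eq. (4)
    wbar = [wi / (w[0] + w[1]) for wi in w]
    # Layer 4: rule outputs f_i = p_i*x1 + q_i*x2 + r_i, eq. (5)
    f = [p * x1 + q * x2 + r for (p, q, r) in concl]
    # Layer 5: weighted sum, eq. (6)
    return sum(wb * fi for wb, fi in zip(wbar, f)), wbar

prem = [(0.0, 1.0), (1.0, 1.0), (0.0, 1.0), (1.0, 1.0)]
concl = [(1.0, 1.0, 0.0), (2.0, -1.0, 0.5)]
out, wbar = anfis_forward(0.3, 0.7, prem, concl)
```

Because of the layer-3 normalization, the total output is always a convex combination of the individual rule outputs.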
3 Learning Algorithm

If the output of the network depends linearly on some of the network parameters, the least squares method can be used to identify those parameters [9, 10]. The network output can be expressed as:

o = F(i, S)    (7)

where i is the input vector, S is the parameter set, and F is the overall function implemented by the adaptive network. Given the values of the premise parameters in S, a matrix equation is obtained:

Aθ = y    (8)

If there are P groups of input–output data pairs, and the premise parameters are given, the least squares method yields the best estimate θ* of the conclusion parameters in the sense of the smallest mean square error ‖Aθ − y‖²:

θ* = (AᵀA)⁻¹ Aᵀ y    (9)

The estimate can also be computed recursively:

θi+1 = θi + Pi+1 ai+1 (yi+1 − ai+1ᵀ θi)    (10)

Pi+1 = Pi − (Pi ai+1 ai+1ᵀ Pi) / (1 + ai+1ᵀ Pi ai+1),  i = 0, 1, …, P − 1    (11)

where θi is the unknown parameter vector, ai is the ith row vector of A, yi is the ith element of y, and Pi is the covariance matrix. The initial conditions of the iteration are:

θ0 = 0,  P0 = γI    (12)
For the fuzzy neural network structure of Fig. 2, when the values of the premise parameters are fixed, the total output of the system can be expressed as a linear combination of the conclusion parameters. The output f of Fig. 2 can be rewritten as:

f = w1/(w1 + w2) · f1 + w2/(w1 + w2) · f2 = w̄1 f1 + w̄2 f2    (13)

This formula is a linear function of the conclusion parameters p1, q1, r1, p2, q2, r2.
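The batch estimate (9) and the recursive updates (10)–(12) can be sketched as below. The synthetic data and the large initial covariance scale γ are illustrative assumptions.

```python
import numpy as np

def batch_lse(A, y):
    """Eq. (9): theta = (A^T A)^{-1} A^T y, via a linear solve."""
    return np.linalg.solve(A.T @ A, A.T @ y)

def recursive_lse(A, y, gamma=1e6):
    """Eqs. (10)-(12): sequential least squares.
    Initial conditions: theta_0 = 0, P_0 = gamma * I with gamma large."""
    n = A.shape[1]
    theta = np.zeros(n)
    P = gamma * np.eye(n)
    for a_i, y_i in zip(A, y):
        a = a_i.reshape(-1, 1)
        # Covariance update, eq. (11)
        P = P - (P @ a @ a.T @ P) / (1.0 + a.T @ P @ a)
        # Parameter update, eq. (10), using the updated P
        theta = theta + (P @ a).ravel() * (y_i - a_i @ theta)
    return theta

rng = np.random.default_rng(0)
A = rng.normal(size=(50, 3))
y = A @ np.array([1.0, -2.0, 0.5])     # noiseless synthetic targets
theta_b = batch_lse(A, y)
theta_r = recursive_lse(A, y)
```

On noiseless data the recursive estimate agrees with the batch solution up to the small bias introduced by the finite initial covariance.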
4 Simulation Analysis

In intelligent fault diagnosis, training samples are used to construct and train the classifier, and test samples are used to test its recognition performance. The feature samples are processed to obtain the training sample set and the test data set. In the experiment, 60 sets of typical data were selected as training, verification, and test data. The simulation results are shown in Fig. 3 and Fig. 4. They show that the fuzzy neural network model gives good prediction output on the training data: the root mean square error is 0.115 on the training data, 0.782 on the verification data, and 0.824 on the test data. A total of 100 sets of sample data were selected, and 60 sets were drawn at even intervals as training data. These 60 sets cover the mill's conditions at stages such as low, moderate, and full load, and include processes such as coal addition and coal reduction, so they represent the process well, as shown in Fig. 3 and Fig. 4.
Fig. 3. Fuzzy inference test data prediction output
Fig. 4. Training error
5 Conclusions

The fuzzy neural network combines the advantages of fuzzy logic and neural networks and has good application prospects in intelligent fault diagnosis. The fuzzy reasoning proposed in this paper achieves very good recognition results in intelligent fault diagnosis, better than those of the traditional BP and RBF neural networks; the diagnosis results of the proposed method are very close to the real values, with higher accuracy, making them more credible and reliable.
References
1. Yu, Z., Tian, Y., Xi, B.: Dempster-Shafer evidence theory of information fusion based on info-evolutionary value for e-business with continuous improvement. In: IEEE International Conference on e-Business Engineering, ICEBE 2005, 18–21 October 2005, pp. 586–589 (2005)
2. Xu, J., Zeng, Y., Hu, Y.: Positioning information fusion methods. In: Proceedings of the 2003 IEEE Intelligent Transportation Systems, 12–15 October 2003, vol. 2, pp. 1240–1245 (2003)
3. Sugeno, M., Kang, G.T.: Structure identification of fuzzy model. Fuzzy Sets Syst. 28(1), 15–33 (1988)
4. Takagi, H., Hayashi, I.: NN-driven fuzzy reasoning. Int. J. Approximate Reasoning 5(3), 191–212 (1991)
5. Takagi, T., Sugeno, M.: Derivation of fuzzy control rules from human operator's control actions. In: Proceedings of the IFAC Symposium on Fuzzy Information, Knowledge Representation and Decision Analysis, pp. 55–60, July 2016
6. Liu, C., Kong, L., Zhong, W.: Multi-information fusion based tumor cell of bone marrow involvement. In: Proceedings of the International Workshop on Medical Imaging and Augmented Reality, 10–12 June 2001, pp. 211–215 (2001)
7. Advantech Co., Ltd.: PCM-3718H/3718HG PC/104 12-bit DAS module with programmable gain. User's manual, 014:2–5
8. Javadpour, R., Knapp, G.M.: A fuzzy neural network approach to machine condition monitoring. Comput. Ind. Eng. 45(2), 323–330 (2003)
9. Yeo, S.M., Kim, C.H., Hong, K.S., et al.: A novel algorithm for fault classification in transmission lines using a combined adaptive network and fuzzy inference system. Int. J. Electr. Power Energy Syst. 25(9), 747–758 (2003)
10. Kaya, M., Alhajj, R.: Genetic algorithm based framework for mining fuzzy association rules. Fuzzy Appl. Ind. Eng. 201, 587–601 (2006)
The Demand Forecasting Method Based on Least Square Support Vector Machine

Jing Liu1, Tongfei Shang1(&), Jingwei Yang2, and Jie Wu3

1 College of Information and Communication, National University of Defense Technology, Xi'an, China
[email protected]
2 78111 Troops, Chengdu, China
3 Xi'an Satellite Control Center, Xi'an, China
Abstract. The support vector machine seeks the best compromise between model complexity and learning ability on the basis of limited sample information, in order to obtain the best generalization ability, and has achieved good results in demand forecasting. In this paper, a least squares support vector machine is used to classify the various factors that affect ammunition consumption. Experts evaluate and score each factor; after the data are preprocessed, the projection pursuit method reduces the dimension of the evaluation data. The resulting comprehensive evaluation index is used as the input, and actual consumption is taken as the ideal output, to forecast ammunition demand. The simulation results prove the effectiveness of the method in this paper.

Keywords: Support vector machine · Expert scoring · Demand forecast
1 Introduction

Scientifically and reasonably predicting the amount of ammunition required from the combat situation is an important link in ensuring victory in war. The support vectors corresponding to the coefficients of the least squares support vector machine (LSSVM) model are continuous in the time series, and a recursive model structure can be established from the input and output data [1, 2]. This paper introduces the basic principles of LSSVM, analyzes its application and optimization in nonlinear regression problems, and then selects relevant input signals to construct an ammunition demand forecast model based on LSSVM. The simulation results show that the model is simple, accurate, and sensitive.
© The Editor(s) (if applicable) and The Author(s), under exclusive license to Springer Nature Singapore Pte Ltd. 2021 M. Atiquzzaman et al. (Eds.): BDCPS 2020, AISC 1303, pp. 584–589, 2021. https://doi.org/10.1007/978-981-33-4572-0_84

2 The LSSVM Model

The learning problem can be stated formally as follows: given a set of functions {f(x, α)} and a set of l independent and identically distributed observation samples,
find an optimal function f(x, α0) among the set {f(x, α)} to estimate the trainer's response so as to minimize the expected risk R(α) [3, 4]:

R(α) = ∫ L[y, f(x, α)] dP(x, y)    (1)

Here P(x, y) is unknown, so the expected risk defined by formula (1) cannot be computed or minimized directly from the limited sample information. Traditional learning methods therefore replace the expected risk of formula (1) with the empirical risk functional Remp(α):

Remp(α) = (1/l) Σi=1..l L(yi, f(xi, α))    (2)
LSSVM maps the input vector to a high-dimensional feature space through a pre-selected nonlinear mapping, constructs the optimal classification hyperplane in that feature space, and uses the hyperplane for classification or fitting, as shown in Fig. 1 [5, 6].

Fig. 1. LSSVM model structure
LSSVM estimates the unknown function with:

y(x) = wᵀφ(x) + b    (3)

where x ∈ Rⁿ, y ∈ R, and the nonlinear function φ(·): Rⁿ → R^nh maps the input space to a high-dimensional feature space.
Given the training data set {xk, yk}, k = 1, …, N, LSSVM defines the following optimization problem [5]:

min(w,b,e) J(w, e) = (1/2) wᵀw + (1/2) γ Σk=1..N ek²    (4)

subject to the constraints yk = wᵀφ(xk) + b + ek, k = 1, …, N. The following Lagrange function can be defined:

L(w, b, e, α) = J(w, e) − Σk=1..N αk [wᵀφ(xk) + b + ek − yk]    (5)
where αk is the Lagrange multiplier. Setting the partial derivatives of L(w, b, e, α) with respect to w, b, e, and α to zero gives the optimality conditions of (4) [6–10]:

∂L/∂w = 0 → w = Σk=1..N αk φ(xk)
∂L/∂b = 0 → Σk=1..N αk = 0
∂L/∂ek = 0 → αk = γ ek, k = 1, …, N
∂L/∂αk = 0 → wᵀφ(xk) + b + ek − yk = 0    (6)
Using αk and b to eliminate ek and w in the above formula gives:

[ 0    1vᵀ        ] [ b ]   [ 0 ]
[ 1v   Ω + γ⁻¹I   ] [ α ] = [ y ]    (7)
where y = [y1; …; yN], 1v = [1; …; 1], α = [α1; …; αN], and Ω is a square matrix whose element in row k and column l is Ωkl = φ(xk)ᵀφ(xl) = K(xk, xl), k, l = 1, …, N. Choosing γ > 0 guarantees that the matrix
U = [ 0    1vᵀ        ]
    [ 1v   Ω + γ⁻¹I   ]    (8)

is invertible, so the analytical expressions for α and b can be obtained:

[ b ]         [ 0 ]
[ α ] = U⁻¹ [ y ]    (9)
Substituting the solution of (9) into the first condition of (6) to obtain w, the nonlinear approximation of the training data set is obtained as follows:

y(x) = Σk=1..N αk K(x, xk) + b    (10)
Here K(x, xk) is the kernel function, which can be any symmetric function satisfying Mercer's condition. In the standard SVM, the Lagrange multipliers of many samples are exactly zero, so the solution is sparse, which greatly improves generalization performance. In LS-SVM the support value is αk = γek, so even small errors yield nonzero multipliers, and the algorithm is no longer sparse.
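Equations (7)–(10) amount to solving one linear system. A minimal sketch follows, assuming an RBF kernel and illustrative values for γ and the kernel width σ (neither is fixed by the paper):

```python
import numpy as np

def rbf(X1, X2, sigma=1.0):
    """RBF kernel matrix K(x, x') = exp(-(x - x')^2 / (2 sigma^2)) for 1-D inputs."""
    d2 = (X1[:, None] - X2[None, :]) ** 2
    return np.exp(-d2 / (2 * sigma ** 2))

def lssvm_fit(X, y, gamma=100.0, sigma=1.0):
    """Solve the linear system (7) of eqs. (7)-(9) for b and alpha."""
    N = len(X)
    U = np.zeros((N + 1, N + 1))
    U[0, 1:] = 1.0                                  # 1_v^T row
    U[1:, 0] = 1.0                                  # 1_v column
    U[1:, 1:] = rbf(X, X, sigma) + np.eye(N) / gamma  # Omega + I/gamma
    rhs = np.concatenate(([0.0], y))
    sol = np.linalg.solve(U, rhs)                   # eq. (9)
    return sol[0], sol[1:]                          # b, alpha

def lssvm_predict(Xtr, b, alpha, Xte, sigma=1.0):
    """Eq. (10): y(x) = sum_k alpha_k K(x, x_k) + b."""
    return rbf(Xte, Xtr, sigma) @ alpha + b

X = np.linspace(0, 6, 30)
y = np.sin(X)                                       # illustrative target
b, alpha = lssvm_fit(X, y)
yhat = lssvm_predict(X, b, alpha, X)
```

Because the dense (N + 1) × (N + 1) system must be solved directly, and no multipliers vanish, this direct approach suits moderate N only — which is the cost the pruning procedure in the next section is meant to reduce.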
3 Simulation Analysis

The factors that affect ammunition consumption are divided into three categories: the natural conditions of the battlefield, the scale of the battle, and the equipment situation. The natural conditions of the battlefield include terrain, environment, temperature, humidity, and weather. The scale of the battle includes the number of participants, the number of light weapons, the number of heavy weapons, the duration, and the intensity of the battle. The equipment situation covers five aspects: the technical performance of the equipment, the failure rate of ammunition, material loss in transit, the pre-war equipment intact rate, and the quality of the personnel involved. LSSVM has no sparsity, so the first training pass generates 240 support vectors. Following the improved sparsity and robustness algorithm mentioned above, training continues and stops when model performance drops to 75% of its initial level; at that point only 65 support vectors remain. With little reduction in model performance, the calculation process is greatly simplified and accelerated. After training, the verification data are used to predict ammunition demand. The result is shown in Fig. 2, and the training error in Fig. 3.
Fig. 2. Demand forecast
Fig. 3. Training error
4 Conclusions

This paper uses the least squares support vector machine to forecast total ammunition demand. The method largely avoids interference from personal subjective factors, so the forecast is relatively objective and credible; the forecasting effect is good, the process is clear and operable, and it is easy to implement by computer programming, giving it strong practical application value.
References
1. Kayano, D., Silva, M.S., Magrini, L.C.: Distribution substation transformer and circuit breaker diagnoses with the assistance of real-time monitoring. In: IEEE Transmission & Distribution Conference & Exposition (2014)
2. Banak, P., Rebuck, J., Eaves, C., Troia, M.: The installation of an advanced operating and management system for reduced operating and maintenance expense in a cement plant. In: IEEE-IAS/PCA 42nd Cement Industry Technical Conference, 7–12 May 2000, pp. 113–132 (2000)
3. Hong, M.H., Bickett, A.D., Christiansen, E.M.: Learning grammatical structure with Echo State Network. Neural Netw. 20(3), 424–432 (2007). Special Issue
4. Corchado, J.M., Fyfe, C.: Unsupervised neural method for temperature forecasting. Artif. Intell. Eng. 13, 351–357 (1999)
5. Suykens, J.A.K., De Brabanter, J., Van Gestel, T.: Least Squares Support Vector Machines. World Scientific, Singapore (2002)
6. Suykens, J.A.K., Lukas, L., Vandewalle, J.: Sparse approximation using least squares support vector machines. In: IEEE International Symposium on Circuits and Systems (ISCAS 2000), Geneva, Switzerland, vol. II, pp. 757–760 (2000)
7. Suykens, J.A.K., De Brabanter, J., Lukas, L., Vandewalle, J.: Weighted least squares support vector machines: robustness and sparse approximation. Neurocomputing 48(1–4), 85–105 (2002)
8. Zhang, J.: The sample breakdown points of tests. J. Stat. Plan. Inference 52(2), 161–181 (1996)
9. Yeo, S.M., Kim, C.H., Hong, K.S., et al.: A novel algorithm for fault classification in transmission lines using a combined adaptive network and fuzzy inference system. Int. J. Electr. Power Energy Syst. 25(9), 747–758 (2003)
10. Kaya, M., Alhajj, R.: Genetic algorithm based framework for mining fuzzy association rules. Fuzzy Appl. Ind. Eng. 201, 587–601 (2006)
Modernity of Ancient Literature Based on Big Data

Luchen Zhai(&)

International Media, Literature, Tangxin Wealth Investment Management Co., Ltd., Cangzhou, Hebei, China
[email protected]

Abstract. Data is now growing explosively in the "big data (BD) era". Information technology has greatly promoted research on ancient literature, mainly through data collection and data retrieval technology. Based on big data, this paper uses data mining technology to study the modernity of ancient literature, through the establishment of mathematical analysis models of ancient literature and the Cholesky decomposition method. It mines large-scale ancient literary works to find the potentially valuable information hidden in the literature. Analyzing 6359 documents, this paper uses data mining technology to find 659 useful pieces of information among them. Experimental research finds that BD technology has certain advantages in studying the modernity of ancient literature, which will promote the common development of ancient and modern literature.

Keywords: Data analysis · Ancient literature · Data mining · Big data
© The Editor(s) (if applicable) and The Author(s), under exclusive license to Springer Nature Singapore Pte Ltd. 2021 M. Atiquzzaman et al. (Eds.): BDCPS 2020, AISC 1303, pp. 590–596, 2021. https://doi.org/10.1007/978-981-33-4572-0_85

1 Introduction

BD is a comprehensive information analysis process based on network technology, data technology, and holistic analysis and thinking. BD is not a simple process: it is a real-time analysis technology that requires extensive technical and software support. With the continuous improvement and development of science and technology in China, BD, as a method of analyzing and understanding important information, has gained relative importance and effectiveness. BD clearly shows the effects of informatization through the collection, storage, and analysis of related data and information, and ultimately provides the basis for correct analysis of data [1]. Moreover, BD has a wide range of applications, such as enterprise production, medical fields, commercial markets, and management information.

The emergence of BD provides a degree of academic support for the modern study of literature. First, BD is characterized by the "5 Vs": volume, variety, velocity, veracity, and value. The value and problems behind a large amount of information are discovered through technical processing. Mayer-Schönberger once said that BD refers not to random sampling analysis but to methods that use all the data, generally referred to as "full data". This way of working suits the needs of modern research into "large-scale" literary phenomena and provides new technical support. In the 1980s, when research objects were relatively few, serious researchers would search the works of the writers of the time and could perhaps draw convincing conclusions from careful reading. With the rapid development of literature today, however, more than 5,000 printed novels are officially published every year [2]. Even with their best efforts, researchers find it difficult to obtain all the relevant material for discussion, so many adopt a "sampling analysis" method based on personal reading experience. Because the subjective element is relatively large, and no individual in the information age can collect all the data, such judgments are mostly random, self-centered, and superficial, which naturally makes research conclusions confused and insufficiently persuasive and is not suitable for modern literary research [3]. The emergence of BD can partially solve this problem. Through keyword search, database query, and link tracking of writers' works, it is relatively quick and simple to grasp the overall picture of a specific writer or phenomenon, compensating for reliance on personal experience and the limits of partial samples. This improves the scientific rigor and accuracy of research, changes the current ambiguity and randomness of modern literature research, and provides concrete technical support for intervening in the field of literature in the era of knowledge explosion [4].

This article studies ancient literature through BD analysis, a data collection system, and a data retrieval system. It combines ancient literature with modern technology, extracts useful information by mining ancient literature, and analyzes it together with modern literature. The research is divided into two aspects. Data analysis of structured text is a relatively fixed problem.
Using technical means, structural features such as tone, flavor, rhythm, and sequence are easy to find during processing; because such stylistic forms are so regular, technical intervention is easy to achieve. Data mining of unstructured text uncovers previously unknown and potentially valuable information by mining large-scale, chaotic, disordered, unstructured text.
2 Methods

2.1 Data Mining Technology

The data mining process first selects the required data samples from the database and then sorts the sample data according to specific needs. The preprocessed data are adjusted according to the principles and standards of data mining, the adjusted sample data are brought into a statistical and probabilistic model for analysis, and finally the data are evaluated to find the gaps; the data obtained in this way are the mined data [5, 6]. The Cholesky decomposition method requires establishing a mathematical model and calculating its coefficients. Randomly select the model parameter y and m independent variables x0, …, xm−1 from the data sample, sort these parameters, and analyze their relationship, which can be expressed by formula (1):
y = a0 x0 + a1 x1 + … + am−1 xm−1 + am    (1)

Bring these parameters into the formula for linear analysis, where a0, a1, …, am−1, am are the fixed coefficients of the equation. According to the rules of linear calculation,

q = Σi=0..n−1 [yi − (a0 x0 + a1 x1 + … + am−1 xm−1 + am)]²    (2)

must be minimized, and the parameter coefficients a0, …, am must satisfy the following linear system:

Cᵀ (a0, a1, a2, …, am−1, am)ᵀ = (y0, y1, y2, …, yn−2, yn−1)ᵀ    (3)
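The least squares fit behind formulas (1)–(3) can be sketched by solving the normal equations with the Cholesky decomposition that this section names. The synthetic data and coefficient values below are illustrative assumptions.

```python
import numpy as np

def cholesky_lse(X, y):
    """Fit y = a0*x0 + ... + a_{m-1}*x_{m-1} + a_m by least squares,
    solving the normal equations (Xa^T Xa) a = Xa^T y via Cholesky."""
    Xa = np.hstack([X, np.ones((len(X), 1))])  # append intercept column for a_m
    G = Xa.T @ Xa                              # symmetric positive definite
    L = np.linalg.cholesky(G)                  # G = L L^T
    z = np.linalg.solve(L, Xa.T @ y)           # forward substitution: L z = Xa^T y
    return np.linalg.solve(L.T, z)             # back substitution: L^T a = z

rng = np.random.default_rng(1)
X = rng.normal(size=(100, 2))
y = 2.0 * X[:, 0] - 1.0 * X[:, 1] + 0.5        # synthetic targets
a = cholesky_lse(X, y)
```

On noiseless data the recovered coefficients match the generating values exactly, up to floating-point precision.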
2.2 Data Mining of Ancient Literature
Data Mining of Ancient Literature
The data mining process is based on the in-depth understanding of the data object and the data mining method of the data object for object selection [7]. Therefore, it is necessary to fully understand the business field, learn the background knowledge of industry data objects, clarify the purpose of data analysis, combine data mining methods, statistical analysis techniques with professional field knowledge and technology, and allows enterprise data mining applications to reflect value. Data mining of unstructured text is through the mining of large-scale, messy, disorderly, and unstructured data text, to find out the previously undiscovered and potentially valuable information in the data. Data mining of unstructured text is carried out through the combination of document editing, data mining and GIS, and the integration of literature, history and philosophy from the perspective of BD. Structured texts such as poems, rhyme, rhythm, etc. are externally analyzed. For example, in poetry, prose poems, subtitle poems, sub-rhyme poems, etc., are technically vulnerable to technical interference due to their characteristic symbols. You only need to extract the title format such as “Fu x De x”, “Fu De x”, and you can get a rough poem. Collecting the above poems collectively over a period of time can summarize the poet’s author, theme, genre, rhyme, retention, etc., which can form more accurate data, which is conducive to directly understanding the communication status of the poem expression and the situation of singing to the poet, Poetry collections and other studies are useful [8]. Academia has fully studied specific types of poems such as frontier fortress poems, epic poems, and idyllic poems. The more adequate the research, the more mature the conditions for using automatic research technology. The subject research of poetry
terminology combines reference books, dictionaries and other external auxiliary documents into the machine's learning range, giving it the ability to determine subjects and to extract works of a specific type on a specific subject from multiple texts. Learning from the existing research results in this academic field, the machine forms a judgment system based on the classification, judgment and research methods of the scholars whose literature it learns from [9]. Faced with a dynasty that has not yet been studied in this way, the machine is expected to make its own judgments. The extraction of stylistic interactions and speech patterns combines data mining with intertextuality theory: mining textual details, refining rules of expression, and designing models can push this research further. The next technical goal is to use deep learning to gradually develop the ability to refine models of expression and to summarize the subtle differences between them [10].
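The title-pattern extraction described above ("Fu De x", "Fu x De x") can be sketched with simple regular expressions. The patterns and sample titles below are illustrative assumptions; a real corpus would use the Chinese characters and a richer set of variants.

```python
import re

# Illustrative patterns for prescribed-topic poem titles; real corpora
# would match the Chinese forms and more variants.
TITLE_PATTERNS = [
    re.compile(r"^Fu De (.+)$"),       # "Fu De <topic>"
    re.compile(r"^Fu (.+) De (.+)$"),  # "Fu <x> De <topic>"
]

def extract_topic(title):
    """Return the prescribed topic of a poem title, or None if no pattern matches."""
    for pattern in TITLE_PATTERNS:
        match = pattern.match(title)
        if match:
            return match.groups()[-1]  # the last group holds the topic
    return None

titles = ["Fu De Ancient Grass", "Fu Yan De Snow", "Quiet Night Thoughts"]
topics = [t for t in (extract_topic(x) for x in titles) if t]
print(topics)  # → ['Ancient Grass', 'Snow']
```

Collecting the matched titles over a period, as the text suggests, then reduces to grouping the extracted topics by author, genre and rhyme.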
3 Experiment

3.1 Sample Data
This paper analyzes 6359 documents to study the influence of data mining on ancient literature. The 6359 documents are divided into five groups for mining: the first group contains 1000 documents, the second 2630, the third 3695, the fourth 4935, and the fifth 6359. The first group yielded 203 pieces of information, the second 386, the third 413, the fourth 534, and the fifth 669. Evidently, the more documents there are, the more information can be unearthed. Compared with information previously mined manually, the information mined with BD proves more accurate and reliable.

3.2 Modern Data Mining
Data mining is the process of extracting potentially useful information and knowledge hidden in large, incomplete, noisy, fuzzy and random databases, and it involves many related technologies. Ancient literature can be researched through modern technologies such as data mining: combining ancient literature with modern technology, extracting useful information from its mining, and analyzing it together with modern literature. This can promote the development of both ancient and modern literature, make ancient literature easier for people to accept, and allow it to persist and develop within modern society.
4 Discussion

4.1 The Impact of BD Analysis on Ancient Literature
Due to a lack of support from related "software", some ancient literature research institutions in China are unable to perform good data mining on ancient literary works; software support is an important foundation for the display and operation of BD. Our analysis shows that building a relatively complete and efficient "BD" management system for literary research is conducive to the collection and preservation of literary documents and materials, and can prevent frequent problems such as collection distortion. Applying modern technology to the data mining of ancient literature shows that modern technology promotes the development of ancient literature and its modernization. As Table 1 shows, with the continuous development of BD, its use in ancient literature has become more and more widespread: the number of ancient essays extracted with BD has grown explosively from 4566 in 2014 to 29368 in 2019. Through data retrieval and data collection technology, ancient literature can be excavated and analyzed far more efficiently than by manual retrieval and analysis.

Table 1. Collections of ancient essays extracted by "BD"

Year                2014   2015   2016    2017    2018    2019
Quantity of papers  4566   9875   10562   12563   16693   29368
4.2 Application of BD Analysis in the Modernization of Ancient Literature
Through data collection technology, the data of ancient literature is first gathered; useful information is then extracted through data analysis and data mining technology; finally, the information we need is retrieved through data retrieval technology. Data mining technology can extract valuable information from ancient literature and can even uncover connections between pieces of information. The objects of ancient literature research are no longer isolated individuals but the groups centered on them, and different groups have different intersections; in a sense, BD can even locate each Song-dynasty figure in the literature within a specific network of relationships. All of this is realized by data mining. As shown in Fig. 1, data mining involves a great deal of unpredictability: it is impossible to guess in advance how many pieces of useful information can be unearthed from a given number of anthologies. What we do know is that the more essays are gathered, the more valuable information can be unearthed, and comparison with manual mining shows that manual efficiency is far lower than that of data mining. Even though the amount of information to be excavated cannot be known in advance, this is part of the appeal that technology brings to the informatization of ancient literature. For data mining it is currently necessary to manually impose enough constraints to decompose unstructured text into semi-structured or structured text
Fig. 1. Data mining in ancient literature
that the machine can understand, and to make the machine increasingly intelligent through deep learning so that such manual decomposition of unstructured text eventually becomes unnecessary. As Fig. 2 shows, useful information accounts for 44% of the data mining results and hidden information for 13%, indicating that most of the mined information is valuable and useful to other documents in ancient literature. The information excavated from ancient literature can be used by modern literature, which embodies the combined development of ancient and modern literature.
Fig. 2. Information value from data mining
5 Conclusions

This paper conducts modern research on ancient literature through data analysis of structured text and data mining of unstructured text, letting ancient literature collide with modern technology and create a different kind of spark. The development of information technology has brought great changes to traditional ways of life and has greatly influenced how ancient literature is researched. Analyzing ancient literature with BD technology is far more convenient and efficient than manual analysis, and can cover more data; these are the changes brought about by information technology. Based on BD analysis, this article conducts modernized research on ancient literature, and the results show that BD technology has a positive impact on its analysis. Data mining technology can extract information in the literature that is otherwise hard to find, which is very valuable. Mining the information of ancient literary works requires machines to perform deep learning, simulating the human brain to learn, analyze and interpret the data. The machine must therefore be given enough text: the more it learns, the stronger its learning function becomes and the more reliable and credible its conclusions are.
References

1. Mavriki, P., Karyda, M.: Automated data-driven profiling: threats for group privacy. Inf. Comput. Secur. 28(2), 183–197 (2019)
2. Wuming, G., Il-Youp, K., Naoko, K.N., et al.: TCM visualizes trajectories and cell populations from single cell data. Nat. Commun. 9(1), 2749 (2018)
3. Jiang, W., Zhu, J., Xu, J., et al.: A feature based method for trajectory dataset segmentation and profiling. World Wide Web-Internet Web Inf. Syst. 20(1), 1–18 (2017)
4. Chong, J., Liu, P., Zhou, G., et al.: Using MicrobiomeAnalyst for comprehensive statistical, functional, and meta-analysis of microbiome data. Nat. Protoc. 15(3), 799–821 (2020)
5. Vogt, M., Jasial, S., Bajorath, J.: Extracting compound profiling matrices from screening data. ACS Omega 3(4), 4706–4712 (2018)
6. Zhang, X.: Research on data mining algorithm based on pattern recognition. Int. J. Pattern Recognit. Artif. Intell. 34(06), 349–355 (2020)
7. Marozzo, F., Talia, D., Trunfio, P.: A workflow management system for scalable data mining on clouds. IEEE Trans. Serv. Comput. 11(3), 480–492 (2018)
8. Gibert, K., Izquierdo, J., Sanchez-Marre, M., et al.: Which method to use? An assessment of data mining methods in Environmental Data Science. Environ. Modell. Softw. 110, 3–27 (2018)
9. Peng, Y., Lin, J.R., Zhang, J.P., et al.: A hybrid data mining approach on BIM-based building operation and maintenance. Build. Environ. 126, 483–495 (2017)
10. Chien, C.F., Huang, Y.C., Hu, C.H.: A hybrid approach of data mining and genetic algorithms for rehabilitation scheduling. Int. J. Manuf. Technol. Manage. 16(1), 76–100 (2017)
Summary of Data Races Solution Algorithms for Multithreaded Programs

Chun Fang
School of Computer Science, Hubei University of Technology, Wuhan 430068, China
[email protected]
Abstract. As we know, multithreaded programs easily produce data races at run time, and it is hard for us to locate where the mistakes are. After reading Eraser: A Dynamic Data Race Detector for Multithreaded Programs, which introduces the "Lockset" algorithm by explaining the shortcomings of the classic "happens-before" algorithm [1] and then builds the tool Eraser on top of the basic Lockset algorithm, I have summarized the paper and put forward my own views.

Keywords: Happens-before · Lockset · Eraser
1 “Happens-Before” Algorithm

1.1 Features of Algorithm
If a synchronization object is accessed by two threads, and the synchronization semantics forbid the accesses from being reordered in time, then the access that occurs first happens-before the access that occurs later. Although a lock is held and released by a single thread, unlike a semaphore, I think this behaves much like the semaphore mechanism in an operating system. The happens-before algorithm has the following drawbacks:

1. It is difficult to implement efficiently, because the algorithm needs per-access information for every shared memory location.
2. It relies too much on the scheduler, and this shortcoming is fatal: the detector only observes the interleavings that actually occur, so whether a race is reported depends on the order in which the scheduler happens to run the threads. Moreover, when an instruction sequence is written, the underlying layers may optimize and reorder it, so the original order is disturbed and correctness is difficult to guarantee.
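The happens-before relation described above is commonly tracked with vector clocks. The following is a minimal sketch under the standard definitions; the two-thread scenario and the merge-on-synchronization step are illustrative assumptions, not Eraser's or any particular detector's code.

```python
# Vector-clock sketch of happens-before: each thread keeps a clock
# vector, synchronization merges vectors, and event A happens-before
# event B iff A's vector is component-wise <= B's and they differ.

def merge(v1, v2):
    """Clock a thread observes after receiving another thread's clock."""
    return [max(a, b) for a, b in zip(v1, v2)]

def happens_before(va, vb):
    return all(a <= b for a, b in zip(va, vb)) and va != vb

# Thread 0 writes x then releases a lock; thread 1 acquires it and writes x.
t0 = [0, 0]
t0[0] += 1            # thread 0: write x   -> event A at [1, 0]
event_a = list(t0)
t0[0] += 1            # thread 0: unlock, clock becomes [2, 0]

t1 = [0, 0]
t1 = merge(t1, t0)    # thread 1: lock receives thread 0's clock
t1[1] += 1            # thread 1: write x   -> event B at [2, 1]
event_b = list(t1)

print(happens_before(event_a, event_b))  # → True: ordered, no race
```

If the lock were removed, neither vector would dominate the other and the two writes would be concurrent, i.e. a potential race.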
2 The Lockset Algorithm

2.1 General Idea of the Algorithm
To put it simply, the lockset algorithm expects a shared resource always to be protected by one or more locks. Two versions of the lockset algorithm were introduced: the simplest version and an improved version. © The Editor(s) (if applicable) and The Author(s), under exclusive license to Springer Nature Singapore Pte Ltd. 2021 M. Atiquzzaman et al. (Eds.): BDCPS 2020, AISC 1303, pp. 597–601, 2021. https://doi.org/10.1007/978-981-33-4572-0_86
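The core refinement rule of the basic algorithm, detailed in Sect. 2.2 below, is lockset intersection. This is a minimal sketch of that rule; the lock and variable names are illustrative assumptions.

```python
# Basic Lockset rule: C(v) starts as the set of all locks; on each
# access, C(v) := C(v) ∩ locks_held(current thread).  An empty C(v)
# means no single lock consistently protects v: report a potential race.

ALL_LOCKS = {"mu1", "mu2"}

def check_access(candidate, locks_held):
    """Refine a candidate lockset; return (new lockset, race warning?)."""
    candidate = candidate & locks_held
    return candidate, len(candidate) == 0

c_v = set(ALL_LOCKS)
c_v, warn = check_access(c_v, {"mu1", "mu2"})  # thread holds both locks
c_v, warn = check_access(c_v, {"mu2"})         # thread holds only mu2
print(c_v, warn)   # → {'mu2'} False
c_v, warn = check_access(c_v, {"mu1"})         # thread holds only mu1
print(c_v, warn)   # → set() True: report a data race
```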
2.2 The Simplest Version
In the simplest version, when a shared variable is initialized, Eraser maintains the set of all locks that might protect it (its candidate lockset). Every time the shared variable is accessed, Eraser updates this candidate lockset to its intersection with the set of locks held by the current thread [2]. If some lock always protects the shared variable, that lock will survive every intersection; if the lockset becomes empty, the shared variable is not consistently protected by any lock. However, the simplest Lockset algorithm is obviously too strict. Under its rules, three kinds of data races are falsely reported merely for not complying with the algorithm's requirements:

1. Shared variables are often initialized without holding a lock. If the lockset algorithm is applied from the very beginning, intersecting the candidate set with the initializing thread's empty lockset yields an empty set [3], and an error is reported.
2. Some shared data is written only during initialization and only read thereafter. Since read operations may proceed without locks, the intersection again becomes empty, producing a false report.
3. Read-write locks allow multiple concurrent readers but only one writer; the simple algorithm cannot model this discipline and reports false races.

2.3 Improved Version
Because the simplest version of the lockset algorithm is too strict, an improved algorithm is given in the literature [4]. If only one thread accesses the shared data, or if multiple threads perform only read operations on it, there can be no data race in these cases; therefore a data race should be reported only after an initialized variable has been written by multiple threads. In this way the improved algorithm relaxes the original one and avoids some false reports caused by overly strict requirements.

2.3.1 Problems with the Improved Version
If one thread allocates and initializes a shared variable without any lock, and a second thread accesses the shared variable immediately after initialization, an error can occur. However, Eraser will not detect this error unless the second thread accesses the variable before it is initialized, so the algorithm has some flaws. This is also why the Lockset algorithm cannot detect all race conditions.

2.3.2 Further Improvements to the Improved Version
To suit the style of the many programs that use single-writer/multiple-reader locks as well as simple locks, one last refinement to the lockset was given. Each time a shared variable is written, some lock must protect it in write mode [2]; each time it is read, some lock must protect it in read or write mode. Because a lock held only in read mode does not prevent a data race between that write operation and other read operations, when a write occurs, locks held purely in read mode are removed from the candidate lockset. See Fig. 1 below.
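The improved version corresponds to a per-variable state machine in the Eraser paper (Virgin, Exclusive, Shared, Shared-Modified), where races are reported only once a variable has been written by more than one thread. The sketch below is a simplified rendering of that idea; the class shape and state transitions are an illustrative reconstruction, not Eraser's code.

```python
# Simplified Eraser-style state machine: a race is reported only in the
# Shared-Modified state with an empty candidate lockset.

class VarState:
    def __init__(self, all_locks):
        self.state = "Virgin"          # untouched since allocation
        self.first_thread = None
        self.lockset = set(all_locks)  # candidate locks protecting v

    def access(self, thread, is_write, locks_held):
        """Record an access; return True iff a race should be reported."""
        if self.state == "Virgin":
            self.state = "Exclusive"
            self.first_thread = thread
            return False               # initialization: no refinement yet
        if self.state == "Exclusive":
            if thread == self.first_thread:
                return False           # still single-threaded
            self.state = "SharedModified" if is_write else "Shared"
        elif is_write:
            self.state = "SharedModified"
        self.lockset &= locks_held
        return self.state == "SharedModified" and not self.lockset

v = VarState({"mu"})
v.access("t0", True, set())           # unlocked init: Virgin -> Exclusive
race = v.access("t1", False, {"mu"})  # second thread reads: Shared
print(v.state, race)                  # → Shared False
race = v.access("t1", True, set())    # second thread writes without lock
print(v.state, race)                  # → SharedModified True
```

Note how the unlocked initialization and the read-only phase no longer trigger false reports, addressing cases 1 and 2 of the simplest version.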
Fig. 1. Further improvement of lockset algorithm
3 Concrete Implementation of “Eraser”

The implementation mainly needs to represent candidate locksets compactly. It turns out that only a few distinct locksets appear during execution, so Eraser opens up only a small memory region to hold them [5, 6]. To ensure each lockset is unique, a hash table of lock vectors is maintained, and the table is searched before any new lockset is created; even on a cache miss, the comparison reduces to comparing two simple sorted vectors. How, then, is the lockset index found? The literature mentions that each 32-bit word in the data segment and heap has a corresponding shadow word: 30 bits of the shadow word hold the index of the lockset, and the remaining 2 bits hold the state. In other words, finding the shadow word gives us the lockset index. And how is the shadow word found? For all the standard memory allocation cases mentioned in the literature, initialization assigns each word a shadow word, and when a thread accesses a memory location, it finds the shadow word by adding a fixed displacement to the location's address, a process somewhat similar to indexed addressing in computer organization. What happens if there are false reports? There are three main types of false positives:

1. Memory reuse. When a thread shares a resource with another thread and then no longer uses it, the other thread's modifications are falsely reported. This resembles the private allocators in many programs mentioned in the paper, where threads have their own arenas and interact with main memory independently, as in Java.
2. Private locks. Because lock information is not reported to the algorithm at run time, the algorithm does not know the specific state of such locks.
3. Benign races. Some data races are deliberate, and such a race does not affect the correctness of the program.
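The shadow-word scheme (a 30-bit lockset index plus a 2-bit state, located by adding a fixed displacement to the address) can be illustrated as follows. The offset value, bit layout and toy flat-memory dictionary are assumptions for illustration, not Eraser's actual constants.

```python
# Shadow-word sketch: each application word has a shadow word at a fixed
# displacement; 30 bits index a table of locksets, 2 bits encode state.

SHADOW_OFFSET = 0x1000          # fixed displacement (illustrative)
STATE_BITS = 2
STATE_MASK = 0b11

memory = {}                     # address -> word (toy flat memory)

def shadow_addr(addr):
    """Shadow word lives at the application address plus a fixed offset."""
    return addr + SHADOW_OFFSET

def write_shadow(addr, lockset_index, state):
    # Pack the 30-bit lockset index above the 2 state bits.
    memory[shadow_addr(addr)] = (lockset_index << STATE_BITS) | state

def read_shadow(addr):
    word = memory[shadow_addr(addr)]
    return word >> STATE_BITS, word & STATE_MASK

write_shadow(0x40, lockset_index=7, state=0b10)
print(read_shadow(0x40))  # → (7, 2)
```

Because the displacement is fixed, the lookup is a single addition, which is what makes the per-access instrumentation cheap.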
The paper introduces program annotations for these three common types of false reports. With these annotations, the time spent processing false reports can be reduced and the real data races discovered earlier.
4 Experience with Eraser

According to the paper, experience running Eraser on large projects such as a search engine, as well as on student assignments, shows that Eraser can find real data races to a great extent and improves on the problems of previous algorithms. Among the additional experience, multi-lock protection and deadlocks are mentioned in the literature. Multi-lock protection means that several locks together protect writes to a shared variable, rather than one lock protecting it alone; under this discipline, each thread that writes the variable must hold all of the protecting locks, while a thread that reads it must hold at least one. The purpose of multi-lock protection is not to increase concurrency but to avoid deadlocks. Deadlock avoidance is another familiar topic: the approach we learn in operating systems is the banker's algorithm, which looks for safe sequences of grants. The approach mentioned in the literature is to select a partial order among all locks and, when more than one lock is held, to acquire them in ascending order, which I think is similar in spirit to the banker's algorithm.
5 Summary and My Views

Data races lead to data corruption and thread-safety problems in our programs. Eraser's goal is to detect real dynamic data races in multithreaded programs, and it uses the lockset algorithm to check for them. Unlike the earlier "happens-before" algorithm, the lockset approach is not overly affected by the scheduler, tolerating underlying reordering without affecting the final result [2]. Without such a tool it would be difficult to accomplish this task with the previous algorithm alone, because it would take a great deal of time to access every memory location, and we might still raise false alarms. If my program made heavy use of multithreading, or if many shared variables were important, I would consider using Eraser to detect data race issues. I think dynamic data race detection should become a standard test procedure for multithreaded programs. As the scope of multithreading expands, the unreliability introduced by data races increases; unless better ways to eliminate them appear, the lockset algorithm performs well on data race problems and is enough for users to resolve them as soon as possible.
References 1. Savage, S., et al.: Eraser: a dynamic data race detector for multithreaded programs. ACM Trans. Comput. Syst. (TOCS) 15(4), 391–411 (1997)
2. Yu, M., Lee, J.-S., Bae, D.-H.: AdaptiveLock: efficient hybrid data race detection based on real-world locking patterns. Int. J. Parallel Prog. 47(5–6), 805–837 (2019)
3. Kusiak, A.: Fundamentals of smart manufacturing: a multi-thread perspective. Annu. Rev. Control 47, 214–220 (2019)
4. Bonizzoni, P., Della Vedova, G., Pirola, Y., Previtali, M., Rizzi, R.: Multithread multistring Burrows-Wheeler transform and longest common prefix array. J. Comput. Biol. 26(9), 948–961 (2019)
5. Zhang, T., Jung, C., Lee, D.: ProRace: practical data race detection for production use. ACM SIGPLAN Not. 52(4), 149–162 (2017)
6. Yu, M., Park, S.M., Chun, I., Bae, D.H.: Experimental performance comparison of dynamic data race detection techniques. ETRI J. 39(1), 124–134 (2017)
Implementation of Stomatological Hospital Information

Ying Li, Yibo Yang, Ziyi Yang, and Quanyi Lu
School of Information and Control, Shenyang Institute of Technology, Shenyang, Liaoning, China
[email protected]
Abstract. With the development and application of software technology, the requirements for software development are constantly rising, and software architecture styles and quality attributes should be considered during development. This system refers to many famous stomatological hospital portal websites in China. Beyond the basic functions of hospital introduction, department and doctor introduction, and appointment registration, patients can also check their own examination reports, prescriptions and so on, which gives the system strong regional and group characteristics. Internet technology should be used vigorously to build an information platform that better provides oral diagnosis and treatment services for patients. The stomatological hospital information platform is a multi-layer distributed enterprise application built on the .NET platform. It uses B/S, repository, multi-layer, object-oriented and other architecture styles, and adopts corresponding strategies to improve the overall quality of the software in terms of usability, modifiability and other quality attributes.

Keywords: Architecture · Stomatology Hospital · .NET platform
1 Background

Referring to the portal websites of famous stomatological hospitals such as Beijing Stomatological Hospital, Liaoning Stomatological Hospital and Zhejiang Stomatological Hospital, most stomatological hospitals only offer hospital introductions, department and doctor introductions and hospital news; appointment registration requires support from other platforms, and patients cannot view their own examination reports or prescription sheets, so if patients do not keep their paper vouchers, they cannot review their own medical history later. The stomatological hospital information platform uses an ASP.NET MVC three-tier architecture connected with WeChat applet data, the Visual Studio 2017 development platform and a SQL Server database [1]. It realizes the hospital's services on both the Web and the WeChat applet, breaks the traditional model of the stomatological hospital, and adds functions such as nursing group purchase and smart pharmacy, focusing on oral health. Patients, doctors, outpatient departments, pharmacies and administrative staff use different operating subsystems with a mainstream front-end framework, usable in mobile browsers and the WeChat applet. © The Editor(s) (if applicable) and The Author(s), under exclusive license to Springer Nature Singapore Pte Ltd. 2021 M. Atiquzzaman et al. (Eds.): BDCPS 2020, AISC 1303, pp. 602–609, 2021. https://doi.org/10.1007/978-981-33-4572-0_87
2 Application of Software Architecture

In current software architecture design, the hierarchical structure is the most frequently used by developers and delivers the best results in actual operation. The stomatological hospital information platform follows this layered idea: the ASP.NET MVC architecture focuses on separation and divides the application into controller, view and model. Compared with traditional Web Forms, which only separate the page from its code-behind, this separation is more thorough. MVC (model-view-controller) is a popular software design pattern that has been widely used in various application systems and consists of three components [2]. The MVC pattern separates the user's display (view) from the actions (controller), which improves code reusability, and separates the data (model) from the actions (controller) so that the system can be designed independently of data storage; its essence is to reduce coupling in the system [3]. When a request reaches an ASP.NET MVC application, it is intercepted by the UrlRoutingModule HTTP module, which encapsulates the current HttpContext and passes it to the previously created routing table. The HttpContext contains the URL, form parameters, query string and cookies associated with the current request [4]. The platform stores its data in SQL Server; when a page triggers a query or modification, methods are called layer by layer, interaction proceeds through procedure calls or similar protocols, a unique interface to shared functionality is maintained, and the data access layer operates the database through SQL statements. The data-centered repository style and the object-oriented architecture style are thus applied to the system.
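The model/view/controller separation described above is framework-independent. The sketch below illustrates the idea in Python rather than ASP.NET; the class and record names are illustrative assumptions, not the platform's actual code.

```python
# Minimal MVC sketch: the controller mediates between the model (data)
# and the view (presentation), so neither depends on the other.

class PatientModel:                      # model: data access only
    def __init__(self):
        self._records = {1: "dental exam report"}

    def get(self, patient_id):
        return self._records.get(patient_id)

class ReportView:                        # view: presentation only
    def render(self, record):
        return f"<p>{record}</p>" if record else "<p>not found</p>"

class ReportController:                  # controller: request handling
    def __init__(self, model, view):
        self.model, self.view = model, view

    def handle(self, patient_id):
        # Fetch from the model, hand the result to the view.
        return self.view.render(self.model.get(patient_id))

app = ReportController(PatientModel(), ReportView())
print(app.handle(1))  # → <p>dental exam report</p>
```

Swapping the storage behind PatientModel or the markup in ReportView touches only that one component, which is the decoupling benefit the section describes.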
3 Function Design and Implementation

The information platform of the stomatological hospital is composed of seven systems: the official website, personal center, doctor operating system, outpatient payment system, pharmacy management system, administrative management system and WeChat applet. To make it convenient for patients to see a doctor, the Bootstrap front-end framework is used in website development so that the site displays adaptively in mobile browsers. The WeChat applet extends this further: patients can use most of the desktop functions on their mobile devices. The functional modules of the hospital's official website are shown in Fig. 1.

3.1 Official Website of Stomatological Hospital
On the official website of the stomatological hospital, users can view detailed information about the hospital's doctors and departments before visiting. Through the diagnosis and treatment assistant, users can view hospital news, medical guidance and symptom guidance, make appointment registrations, consult doctors online and check doctors' Q&A records. In group purchase nursing, users can view and group-purchase the hospital's oral care products, and
check and select oral products in the product store. In the smart pharmacy, users can query drugs according to their own symptoms; the pharmacy provides detailed instructions and similar recommendations, and users can check patient feedback and consult doctors about medication. Selected drugs can be added to the medicine box, their quantities increased, reduced or removed, and the order for the needed drugs confirmed.
Fig. 1. Functional module diagram of hospital official website
3.2 Patient Personal Center
Users register with the stomatological hospital the first time they see a doctor and obtain a medical card, which makes it convenient to use the hospital's services. Entering the correct user name and password logs them in, after which they can enter the personal center. Patients can check their own appointments and cancel those they cannot keep. They can view their own medical history, evaluate doctors, and view their electronic medical records. They can view their group purchase orders, cancel orders, and evaluate completed care. After purchasing a drug, the order passes through three states: to be delivered, in transportation, and completed. An order that has not yet been delivered can be cancelled; an order in transportation can be confirmed as received; for a completed order, medication feedback can be submitted. The details of the electronic medical record are shown in Fig. 2.

3.3 WeChat Applet
The WeChat applet is developed according to most of the services the hospital website provides for patients. In the applet, JS is used to call the interface, and the controller returns data according to the requested method, so that the website's data can be shared with the applet. For example:

wx.request({ url: 'https://www.sit1th.cn/User/WXUserLogin' }); // the WeChat applet interacts with the backend through the request address
Fig. 2. Electronic medical record information
return Content(new AjaxResult { state = ResultType.success.ToString(), message = "true", data = user }.ToJson()); // the controller encapsulates an Ajax result to return backend data

Users can use the diagnosis and treatment services provided by the hospital in the applet. After registration and login (the account is shared with the website), they can use appointment registration, outpatient payment, pharmacy drug collection, appointment management, health data and other functions, making it easier for patients to seek medical treatment. The website and applet doctor details are shown in Fig. 3.
Fig. 3. Website and app doctor details
3.4 Outpatient Payment System
The outpatient payment system is used by the staff of the payment office of the stomatological hospital. Its main functions are as follows.

1. Registration module. The staff can register patients who register on site. Patients register with their ID cards, and first-time patients automatically receive a medical card; the staff inform them of the initial password and print the registration certificate.
2. Payment module. The staff can query the medical record number provided by the patient, complete the charging of examination items and prescriptions, and print the charging voucher and the prescription medication voucher.
3. Nursing module. The staff can verify nursing consumption according to the group purchase coupon code provided by the user.

3.5 Doctor Operating System
Doctors use their own operating system for daily diagnosis and treatment work.

1. Visit module. Doctors can log in to their own operating system to view today's patients, establish medical record files, issue examination items, add medical record information and write prescriptions.
2. Patient module. Doctors can view the past medical and treatment records of all patients, and for their own patients they can use the return-visit function to send health greetings.
3. Inquiry module. Doctors need to handle pending online consultations and medication consultations in time, feed the information back to patients promptly by e-mail, and can view their past inquiry records.
3.6 Pharmacy Management System
Pharmacy staff can add, delete, modify and check the drugs in the hospital drug storehouse. According to the drug taking credentials provided by the patients, the staff needs to complete the drug taking operation by virtue of the prescription sheet found. For the drug orders of smart pharmacy, pharmacy staff handles the orders according to the hospital self-pick-up and mail express. 1. Drug management. Pharmacy staff can add, delete, modify and check the drugs in the hospital drug storehouse. 2. Drug taking management. Pharmacy staff can complete the drug taking operation according to the drug taking credentials provided by patients. 3. Order management. Pharmacy staff manages orders according to the hospital selfcollection and express mail. 3.7
3.7 Administration System
This system allows hospital administrators to maintain the daily operation of the hospital, including department management, doctor management, doctor duty-time management, pricing management, group-purchase project management, guidance management, push management and other maintenance operations.
4 Software Quality Attribute Strategy

1. Availability: consider the impact when the system encounters errors, attacks or high loads, and the time for which the system can work normally [5]. The stomatological hospital information platform adopts the exception handling mechanism provided by ASP.NET to capture and handle all exceptions. When an error or exception occurs, a detailed description of the exception is recorded in the log file, and a throw-catch-handle strategy is used to improve availability.
2. Modifiability: consider maintainability and extensibility, that is, the impact on other modules and the cost of modification when a module needs to be changed or extended with new functions. The development uses object-oriented design combined with MVC and a three-tier architecture, so the code is layered and the business logic is split. When connecting to WeChat applet data, the two sides of the interface are kept independent to improve modifiability. The most common problem when using a SQL database is injection. To address it, the system adopts two methods: one is to use parameterized SQL commands or stored procedures to query and access data instead of dynamically assembled commands [6]; the other is to check and convert the user's query input. Together these two methods eliminate most database injection problems.
3. Security: the system accesses information layer by layer and transfers and stores data using the symmetric DES encryption algorithm, one of the most commonly used encryption algorithms, which is fast, standard and suitable for encrypting large volumes of data. This strategy maintains data confidentiality and enhances security.
4. Ease of use: the overall design of the system is easy to understand, learn and use, and attractive to users. The Bootstrap front-end framework is used to design simplified pages that adapt to multiple platforms; each page guides the user step by step [7–9]. After black-box, white-box and interface testing, test coverage reached 100% with no obvious defects.

The quality attributes of availability, modifiability, security and ease of use are thus fully considered in the informatization platform of the stomatological hospital. For system errors, system modification and maintenance, data security and user-interface usage, corresponding strategies are adopted to ensure software quality. In addition, to improve response performance, data caching is used, which both reduces the performance loss caused by database operations and greatly improves the responsiveness of the system [10].
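The parameterized-query defence described in item 2 can be illustrated with a minimal sketch (shown here in Python with sqlite3 rather than the ASP.NET stack the paper uses; the table and column names are hypothetical):

```python
import sqlite3

# In-memory database standing in for the hospital's SQL database.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE patients (id INTEGER PRIMARY KEY, name TEXT)")
conn.execute("INSERT INTO patients (name) VALUES ('Alice'), ('Bob')")

def find_patient_unsafe(name):
    # VULNERABLE: dynamically assembled command; user input becomes SQL.
    return conn.execute(
        f"SELECT id, name FROM patients WHERE name = '{name}'").fetchall()

def find_patient_safe(name):
    # SAFE: parameterized command; the driver treats the input as data only.
    return conn.execute(
        "SELECT id, name FROM patients WHERE name = ?", (name,)).fetchall()

# An injection payload that dumps every row through the unsafe path
payload = "x' OR '1'='1"
print(find_patient_unsafe(payload))  # returns all rows
print(find_patient_safe(payload))    # returns no rows
```

The same principle applies to ASP.NET's `SqlCommand` parameters and to stored procedures: the query text and the user data travel separately, so the data can never be parsed as SQL.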
5 Conclusion For the design, development and maintenance of software, a reasonable software architecture is very important. In the development of the stomatological hospital information platform, the advantages of MVC and three-tier architecture design were combined to form a multi-layer software architecture style, together with a data-centric repository style and an object-oriented architecture style. Through the hospital official website, personal center, doctor operating system, outpatient payment system, pharmacy management system, administration system, WeChat applet and other subsystems, the hospital's problems in health service, intelligent operation, medical service, medication, internal management and social image are solved at the source. The software quality attributes of availability, modifiability, security and ease of use are improved by various strategies to optimize software quality. Acknowledgements. This project won the first prize of the Liaoning University Students' Computer Competition in 2020, the second prize of the National Computer Competition in 2020, the first prize of the Northeast division of the WeChat Mini Program Competition in 2020, and the second prize of the National WeChat Mini Program Competition in 2020.
References 1. Emuoyibofarhe, J.O., Adewuyi, K.K., Amusan, E.A.: Development of an improved electronic patient health record management system with speech recognition. J. Adv. Math. Comput. Sci. 29(6), 1–15 (2018) 2. Zhang, Y., Zhu, X.: Research and implementation of mine geological environment information system based on ASP.NET and the MVC framework. Urban Geol. 1(1), 97–102 (2020). (in Chinese)
3. Reenskaug, T.M.H.: Personal programming and the object computer. Softw. Syst. Model. 19(4), 787–824 (2020) 4. Iba, T., Mori, H., Yoshikawa, A.: A pattern language for designing innovative projects: project design patterns. Int. J. Entrepren. Small Bus. 36(4), 491–518 (2019) 5. Mehlawat, M.K., Gupta, P., Mahajan, D.: A multi-period multi-objective optimization framework for software enhancement and component evaluation, selection and integration. Inf. Sci. 523, 91–110 (2020) 6. Jing, Y., Ahn, G., Zhao, Z., et al.: Towards automated risk assessment and mitigation of mobile applications. IEEE Trans. Dependable Secure Comput. 12(5), 571–584 (2015) 7. Han, Q.: Inventory management system based on bootstrap framework. Int. J. Comput. Eng. 5(1), 156–162 (2020) 8. Mu, E., Kirsch, L.J., Butler, B.S.: The assimilation of enterprise information system: an interpretation systems perspective. Inf. Manag. 52(3), 359–370 (2015) 9. Haghighathoseini, A., Bobarshad, H., Saghafi, F., et al.: Hospital enterprise architecture framework (study of Iranian University Hospital Organization). Int. J. Med. Informatics 114, 88–100 (2018) 10. Liu, F., Pan, W., Xie, T., et al.: PDB: a reliability-driven data reconstruction strategy based on popular data backup for RAID4 SSD arrays. In: Algorithms and Architectures for Parallel Processing, pp. 87–100 (2013)
Intelligent Classroom System Based on Internet of Things Technology Xia Wu(&), Yue Yang, Xulei Yu, and Chonghao Zheng School of Information and Control, Shenyang Institute of Technology, Shenyang, Liaoning, China [email protected]
Abstract. This paper introduces the design of an intelligent classroom system based on Internet of things technology. The system adopts Internet of things technology, sensor technology, wireless communication technology and intelligent control technology, and mainly realizes functions such as counting the number of students in the classroom, automatically turning off the lights, automatically switching the air conditioning on and off, detecting abnormal doors and windows, and central control. The paper first analyzes the research background and requirements of the intelligent teaching building system, and then designs the functional modules of the system. Finally, according to the required functions and performance requirements, hardware selection and design are carried out, the key technologies adopted in the system are analyzed, and the system functions are realized. Tests show that the system runs well, achieves the expected goals, and has good application value. Keywords: Internet of things technology · Intelligent classroom · Sensor technology
1 Introduction With the development of the times, the teaching environment and teaching means are constantly changing. The maturity of Internet of things technology provides strong support for the reform of school teaching environment, and intelligent teaching building emerges as the times require. Intelligent classroom is a new concept, which can provide a better environment for future intelligent teaching [1]. Intelligent classroom integrates multiple functions and promotes the pace of smart campus construction [2]. Teaching environment has an obvious impact on students' learning effect. A good environment can improve teaching level, enhance students' mastery of knowledge, and make classroom teaching more vivid and wonderful [3, 4]. The intelligent classroom realizes the comprehensive monitoring and management of the equipment and environment in the classroom through the classroom automation system, so as to create a comfortable, safe, efficient and convenient teaching and learning living environment for teachers and students. By optimizing the operation and management of the system, energy conservation and emission reduction can be achieved [5]. Therefore, this paper designs an intelligent classroom system based on Internet of things technology. The system uses the infrared sensor to detect the number © The Editor(s) (if applicable) and The Author(s), under exclusive license to Springer Nature Singapore Pte Ltd. 2021 M. Atiquzzaman et al. (Eds.): BDCPS 2020, AISC 1303, pp. 610–616, 2021. https://doi.org/10.1007/978-981-33-4572-0_88
of people in the classroom, a light sensor to measure the illumination, and a temperature sensor to measure the classroom temperature. After the Arduino microcontroller obtains the number of people, the illumination and the temperature, it intelligently controls the lights and air conditioning in the teaching building according to the algorithm set in the program; the information in the teaching building is then sent to a display screen or PC through the data transmission module, realizing centralized monitoring of the use of the teaching building and classrooms. According to the actual situation of the classroom, the system can intelligently control the lights and air conditioning, saving electricity. It can also detect the conditions and the number of people in the classroom, giving a better picture of the classroom and the teaching building.
2 System Function Design

Based on the analysis of the intelligent classroom system, the design and development proceeded as follows. First, the whole teaching building was surveyed to find existing problems, and solutions were established for them; the group members then discussed whether each scheme was reasonable, and the final plan was worked out after confirmation by the instructor. Second, practical considerations were taken into account in the design, such as the price and practicability of the sensors and the selection of the single-chip microcomputer and wireless transmission technology, to make the system more stable. Finally, through continuous testing of the system model, potential problems were found and the system was optimized to make it stable and safe.

The overall design goals of the intelligent classroom system based on Internet of things technology are: the mobile control terminal can detect all kinds of information needed in the classroom in real time; the mobile terminal can remotely control the lighting and air conditioning in the classroom; and the system is stable enough to effectively reduce power and other losses. According to these goals, the functional module diagram of the system is shown in Fig. 1.

Fig. 1. System function module diagram

The functions of the system are as follows:
(1) Counting the number of people in the classroom. The system uses an infrared radiation sensor to judge, from level jumps, the number of people entering or leaving the classroom.
(2) Automatic light-off. This function addresses the case where the lights are still on in an empty classroom, wasting electric energy. The system judges the current number of people in the classroom; when the number is zero, the lights are turned off automatically.
(3) Automatic air-conditioner switching. The system collects temperature and humidity data through the temperature and humidity sensor and, by comparison with preset values in the software, judges whether to switch the air conditioner on or off, realizing automatic control of the air conditioning.
(4) Detecting abnormal doors and windows. This function is used in sensitive time periods: when a door or window opens or closes abnormally, security personnel can be informed in time.
(5) Safety monitoring of valuable equipment. The system monitors the more valuable items in the classroom, mainly experimental equipment, by attaching a magnetic stripe to them. When valuable equipment is taken out of the classroom or a specific location, an alarm is issued and recorded.
(6) Classroom situation display. The system monitors data through sensors and displays it on the LCD outside the door, so that course inspectors and supervisors can easily see the situation in the classroom.
(7) Central control. The lighting and air conditioning of the whole classroom can be controlled from the monitoring room or control room, where the information status of the classroom can also be viewed. Centralized classroom control is realized through wireless transmission technology.

The problems to be solved are as follows:
(1) The stability of sensor monitoring data: whether abnormal data will be reported because of special weather or other causes.
(2) The order of use when multiple sensors are used together.
(3) The data transmission mode to be used when transmitting data.

The innovations of the project include the following:
(1) Classrooms are often empty with the lights still on, which wastes a great deal of electric energy; this system solves the problem well. When it detects that the number of people in the classroom is zero, it automatically turns off the indoor lights, reducing power consumption.
(2) The system ensures the safety of teaching equipment by detecting whether equipment has been moved and how far, as well as abnormal opening and closing of doors and windows during sensitive time periods.
(3) For personnel management, the project places a display screen in the building showing the course name, class, teacher, attendance rate, and the environmental data (indoor temperature and humidity, illumination, carbon dioxide concentration, etc.) collected by the classroom sensors, making it more convenient to check on classes.
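The people-counting and automatic light-off logic described above can be sketched as a small simulation (a sketch only: the event interface is hypothetical, and on real hardware this logic would run as an Arduino program driven by the infrared sensors):

```python
class ClassroomController:
    """Simulates the people-counting and automatic light-off logic."""

    def __init__(self):
        self.count = 0         # current number of people in the classroom
        self.light_on = False

    def on_entry(self):
        # An entry is detected as a level jump on the inward infrared beam.
        self.count += 1
        self.light_on = True   # someone is present, keep the lights on

    def on_exit(self):
        # An exit is detected as a level jump on the outward infrared beam.
        self.count = max(0, self.count - 1)
        if self.count == 0:
            self.light_on = False   # classroom empty: turn the lights off

room = ClassroomController()
room.on_entry(); room.on_entry()    # two students walk in
room.on_exit()
print(room.count, room.light_on)    # 1 True
room.on_exit()
print(room.count, room.light_on)    # 0 False
```

The same state machine extends naturally to the air-conditioner switching function by adding temperature and humidity thresholds.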
3 System Design and Implementation

3.1 Key Technologies
The design of the intelligent classroom system based on Internet of things technology uses knowledge and key technologies from several fields, including the Internet of things, sensor technology, MCU technology, remote communication technology and mobile terminal application development. The most important technology involved is the use of the TCP protocol between the wireless communication module and the mobile terminal software. The system uses socket communication over TCP: the mobile terminal remotely controls the sensors and exchanges data with them, and the Arduino MCU processes the data when sending it to the wireless communication module. AT-command communication is adopted between the wireless communication module and the Arduino, and the related algorithms control the sensors [6]. The basic syntax of Arduino is developed on the basis of Wiring, a secondary encapsulation of the avr-gcc library; after brief study, and without much microcontroller or programming background, one can develop with it rapidly. Arduino's hardware schematics, circuit diagrams, software and core library files are open source, and the original design and corresponding code can be modified freely within the scope of the open-source licence [7, 8].
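The TCP socket exchange between the terminal software and the wireless module can be sketched as follows (a simulation only: the module is stood in for by a local server thread, and the command strings are hypothetical, loosely modelled on AT-style commands):

```python
import socket
import threading

def fake_module(server_sock):
    """Stands in for the classroom's wireless communication module."""
    conn, _ = server_sock.accept()
    with conn:
        cmd = conn.recv(64).decode().strip()
        if cmd == "AT+TEMP?":            # hypothetical temperature query
            conn.sendall(b"+TEMP:23.5\n")
        else:
            conn.sendall(b"ERROR\n")

# Local TCP server standing in for the module on the classroom network.
server = socket.socket()
server.bind(("127.0.0.1", 0))            # pick any free port
server.listen(1)
threading.Thread(target=fake_module, args=(server,), daemon=True).start()

# The mobile-terminal side: connect, send a command, read the reply.
client = socket.create_connection(server.getsockname(), timeout=2)
client.sendall(b"AT+TEMP?\n")
reply = client.recv(64).decode().strip()
client.close()
print(reply)  # +TEMP:23.5
```

On the real system the server side is the Wi-Fi module bridging to the Arduino over its serial AT interface; the client logic on the mobile terminal is essentially the same.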
3.2 Hardware Selection
According to the functional requirements of the design and through investigation, the relevant hardware is selected, including Arduino development board, LCD, ultrasonic
sensor, temperature and humidity sensor, LED lamp, infrared transmitter, infrared receiver, light sensor, flame sensor, photosensitive resistor, infrared radiation sensor, breadboard, etc. Some of the hardware is described below. Arduino is a convenient and flexible electronic platform comprising hardware (Arduino boards of various models) and software (the Arduino IDE). It can sense the environment through a variety of sensors; the microcontroller on the board is programmed in the Arduino programming language, compiled into a binary file and burned into the microcontroller. The temperature and humidity sensor selected for this system is a calibrated composite sensor with high security and strong stability, highly digital and cost-effective [9]. It contains an NTC temperature-measuring element and a resistive humidity sensor; the tight integration of the two components and their connection to a high-performance 8-bit STC-series single-chip microcomputer ensure the integrity and efficiency of the data, and every calibration coefficient is stored in the microcomputer as a program [10]. This preserves the integrity and practicality of the data, and the sensor can be well sealed for reuse.
3.3 Function Realization
This system has achieved its predetermined goals; this paper mainly introduces the realization of the following functions. Automatic light-off: this function comes from daily study and life. We often find that an empty classroom is still fully lit, causing a great waste of electric energy. This system therefore turns off the lights automatically when no one is in the classroom, and the lights can also be controlled centrally. The results are shown in Fig. 2.
Fig. 2. Comparison picture of automatic light off effect
Detecting abnormal doors and windows: at night, after students leave the classroom, the safety of the teaching equipment becomes a concern. The system can raise an alarm automatically during the sensitive time period: if someone breaks into the classroom during that period, the alarm sounds automatically and the mobile phone receives the alarm information. The results are shown in Fig. 3.
Fig. 3. Effect picture of door and window alarm on mobile phone
4 Conclusions This system realizes intelligent management and control of the classroom, and solves or alleviates the waste of electric power in classrooms, the safety problems of indoor equipment, and the inspection of class conditions and the indoor environment. It realizes automatic light control, automatic air-conditioner switching, automatic control of classroom doors and windows, and detection of classroom information. Simulation and testing show that the system runs well and has application and research value. Acknowledgements. The work was supported by Shenyang Institute of Technology. The project comes from the innovative training project of Shenyang Institute of Technology.
References 1. Shan, Q.: The construction of internet of things helps the development of mobile media. News Res. Guide 7, 235 (2015) (in Chinese)
2. Yu, X., Cheng, W.: From advantage perspective exploring collaborative development between smart classrooms and traditional classrooms. Sci. Innov. 7(1), 1–6 (2019) 3. Lu, T., Zhao, S.: Reform of intelligent classroom based on “embodied cognition” model. Int. J. Intell. Inf. Manag. Sci. 8(3), 369–374 (2019) 4. Hsu, C.C., Chen, H.C., et al.: Developing a reading concentration monitoring system by applying an artificial bee colony algorithm to E-Books in an intelligent classroom. Sensors 12(10), 14158–14178 (2012) 5. Zhu, Z.M., Xu, F.Q., Gao, X.: Research on school intelligent classroom management system based on internet of things. Procedia Comput. Sci. 166, 144–149 (2020) 6. Krieger, W., Bayraktar, E., Mierka, O., et al.: Arduino-based slider setup for gas-liquid mass transfer investigations: Experiments and CFD simulations. AIChE J. 66(6), 1–3 (2020) 7. Xu, C.: Design and implementation of intelligent greenhouse system based on STM32. Int. Core J. Eng. 6(7), 340–345 (2020) 8. Siddika, A., Hossain, I.: Monitoring and alarm system for liquefied petroleum gas leakage based on Arduino. J. Res. Sci. Eng. 2(3), 38–41 (2020) 9. Liu, J.H., He, Y.T., Peng, Y.H.: The design and implementation of monitoring system of flue-cured tobacco barn based on ARM7. Appl. Mech. Mater. 2658, 1753–1758 (2013) 10. Liu, X., Yin, H., Zang, C.: Design of intelligent watering system based on STM32. Acad. J. Eng. Technol. Sci. 2(1), 153–156 (2019)
Coupling Model of Regional Economic System Design Based on Big Data Technology Huan Jin(&) Shenyang Institute of Technology, Fushun, Liaoning, China [email protected]
Abstract. In market operation, the regional economic system lacked accurate calculation of data and reasonable allocation of internal resource requirements. For this reason, a coupling model of the regional economic system based on big data technology was designed. On the basis of an analysis of coupled innovation development in the regional economy, the hardware design of the system focused on standard processing and assignment of the coupled raw data to determine the index weights; in the software design, a coupling-degree model of the interaction force between several subsystems was established by analogy with the coupling definition in physics and the size of the capacity coupling coefficient. The research shows that, over time, the coupling fitness of regional economic systems increased year by year, although the overall coupling remained small; spatially, the coupling fitness showed a clearly uneven distribution. Keywords: Big data · Regional economic system · Coupling model · Industrial cluster · Coupling coordination degree
1 Introduction

During the “13th Five-Year Plan” period, harmonious development became a core element of China's economic development goals. For the first time, from the perspective of strategic objectives, China formulated the regional economic development goal of “developing a regionally harmonious innovation pattern and further optimizing innovation space”. As the backbone and primary driving force in building the innovation and development system of China's economy, the regional economy is coordinated with the regional economic environment. All of this is conducive to unifying technological progress with economic development. It also implements the specific plan of the “13th Five-Year Plan” to fully realize China's goal of building an innovative country, and supports the coordinated development of regions [1]. However, because of differences in resource richness, policy implementation, science and technology, and so on, the coordinated development of regional economic systems has shown imbalances between regions [2], as illustrated in Fig. 1.

Therefore, it is necessary to understand how regional economies generate driving forces, and how the magnitude of those forces affects the specific spatial distribution of coupling between regions. Correspondingly, a solution is proposed for coordination between regions that can strengthen the links between regional economies, help regions achieve further innovation and progress, and gradually shift development toward connotative strength. This is conducive to the harmonious development of regional economies and the creation of coordination effects.

Fig. 1. Regional economic operation process

Fig. 2. Regional economic and ecological role map

© The Editor(s) (if applicable) and The Author(s), under exclusive license to Springer Nature Singapore Pte Ltd. 2021 M. Atiquzzaman et al. (Eds.): BDCPS 2020, AISC 1303, pp. 617–624, 2021. https://doi.org/10.1007/978-981-33-4572-0_89
2 Hardware Design of the Coupling Model of the Regional Economic System Based on Big Data Technology

2.1 Coupling Raw Data Standardization Processing
The mutual convergence of regional resource elements within a certain time and space is rooted in the specific environmental conditions of the region and closely related to its characteristics. As an intermediate industrial organization between the company and the market economy, a cluster's aggregation effect, divergence effect and environmental effect can produce a kind of “time-and-space boost”, forming a confluence area with obvious advantages among resource elements [3]. The interactions involved are shown in Fig. 2, and the data-acquisition flow is shown in Fig. 3.
Receiving acquisition orders
Encapsulated into packet format
Send instruction, record time
Receiving wait N Complete reception of packets
Y
Get the data and close the timer
N Whether the timeout flag is displayed
Data processing, return to the upper computer display
Y Turn off timer
End
Fig. 3. Coupled raw data flow chart
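The flow in Fig. 3 can be sketched as follows (a sketch only: the packet framing bytes, command codes and timeout are hypothetical, and the device is simulated with an in-memory byte source rather than a real serial link):

```python
import time

FRAME_HEAD, FRAME_TAIL = 0xAA, 0x55    # hypothetical packet delimiters

def encapsulate(cmd: int) -> bytes:
    # Encapsulate a one-byte acquisition command into the packet format.
    return bytes([FRAME_HEAD, cmd, FRAME_TAIL])

def acquire(cmd, send, recv_byte, timeout=1.0):
    """Send an acquisition instruction, then wait for a complete packet
    or a timeout, mirroring the flow chart above."""
    send(encapsulate(cmd))                          # send, record time
    start, buf = time.monotonic(), bytearray()
    while time.monotonic() - start < timeout:       # receiving wait
        b = recv_byte()
        if b is not None:
            buf.append(b)
            if (len(buf) >= 3 and buf[0] == FRAME_HEAD
                    and buf[-1] == FRAME_TAIL):
                return bytes(buf[1:-1])             # complete packet: payload
    return None                                     # timeout flag raised

# Simulated device: replies with a framed two-byte reading.
reply = iter([0xAA, 0x12, 0x34, 0x55])
result = acquire(0x01,
                 send=lambda pkt: None,
                 recv_byte=lambda: next(reply, None))
print(result.hex())  # 1234
```

If the loop exits without a complete frame, the caller sees `None`, corresponding to the "turn off timer" branch of the flow chart.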
Once a company in an industry triggers such a phenomenon within a region, upstream and downstream industrial chains, and companies in complementary or even competitive relationships, will continue to gather there in time and space. This high degree of convergence relates to the areas the companies share under certain industrial structures, including the natural environment, economic resources and social public facilities. Among them, the social public environment covers the regional traffic environment, and the industrial economy and the strong pull effect it produces [4] drive the economic development of the regional economic system as a whole. According to the relevant theory of regional economic development in Europe and the United States, when a company chooses a geographical location at the start of its business, the choice may well be accidental. However, whether the initial regional positioning is a contingent choice or a reasonable, scientific one made after comprehensive consideration, it results in the “regional dependence” the company develops on the original region. For any related item, the following must be satisfied:

d_v = Σ M + n(m − 1)    (1)
In Eq. (1), d is the processed index coefficient, v is the original index coefficient, M is the processed maximum index coefficient, and n is the order index of the representative coefficient. An index that satisfies this formula meets the condition, and the required transformation of the raw data can be carried out.
2.2 Assignment Determines Indicator Weight
A weight is the specific amount allocated to the different performance levels of the evaluated objects; in effect, it compares the contribution of each indicator, and of each industry's management, to the overall evaluation. Reasonable weight scheduling is a key part of achieving the evaluation [5]. Many weighting methods are now available, including the AHP method, fuzzy comprehensive evaluation, the Delphi method, the maximum-deviation method, the entropy weight method, the coefficient-of-variation method and the mean-difference method. The basic formula is shown in Eq. (2):

p = (r − 1) Σ_{j=1}^{m} r_j    (2)
Among them, the entropy weight method couples best with the model, since its basis is derived from the source data rather than from subjective judgment. The index coefficient is determined by the relationship between the indicators and the maximum and minimum values of the data the indicator can provide [6]. Its starting point is that the similarities and differences between the observation points of an index reflect the importance of that index's weight in the whole evaluation system. See Eqs. (3) and (4):
s = −k Σ_j p_j ln p_j    (3)

w = (1 − s) / Σ_{l=1}^{m} (1 − s_l)    (4)
This method avoids, within a certain scope, the error caused by subjective psychology, and is a scientific weighting method [7]. Specifically, the results are classified according to the level of coordinated development, as shown in Table 1.

Table 1. Coordinated development level classification

| Development type | Economic backwardness | Synchronized model | High-tech region | High-tech backward areas |
| Grade | 0.01 | 0.011 | 0.21 | 0.09 |
| Degree of coupling | 0.987–4.134 | 0.41–0.98 | 5.12–9.87 | 6.21–9.00 |
| Coordination type | Maladjusted recessionary class | General maladjustment recessionary class | Reluctantly coordinated class | Excessive development category |
| Class | H, second class | I, second class | H, second class | I, second class |
| Coordination range | 0.234–0.564 | 0.45–0.98 | 0.11–0.90 | 0.11–7.10 |
| Trend | Outstanding | Speed-up | Speed-up | Same as before |
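The entropy weighting of Eqs. (3) and (4) can be sketched as follows (a sketch under the usual formulation of the entropy weight method; the sample indicator matrix is invented for illustration):

```python
import math

def entropy_weights(matrix):
    """Entropy weight method: matrix[i][j] is the normalized value of
    indicator j for region i (all values non-negative)."""
    n, m = len(matrix), len(matrix[0])
    k = 1.0 / math.log(n)                        # normalizing constant
    divergence = []
    for j in range(m):
        col = [row[j] for row in matrix]
        total = sum(col)
        # p_i: share of region i in indicator j, the input to Eq. (3)
        p = [v / total for v in col]
        s = -k * sum(pi * math.log(pi) for pi in p if pi > 0)  # entropy, Eq. (3)
        divergence.append(1.0 - s)               # degree of divergence
    total_div = sum(divergence)
    return [d / total_div for d in divergence]   # Eq. (4): normalize to weights

# Three regions, two indicators (already min-max standardized).
sample = [[0.2, 0.9],
          [0.5, 0.8],
          [0.9, 0.7]]
w = entropy_weights(sample)
print([round(x, 3) for x in w])   # the more dispersed indicator gets more weight
```

The first indicator varies much more across regions than the second, so it carries lower entropy and therefore a higher weight, which is exactly the behaviour the text attributes to the method.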
3 Software Design of the Regional Economic System Coupling Model Based on Big Data Technology

The key to moving the regional economic system from disorder to order is the magnitude of the interaction among the participating values of its subsystems. The coupling degree is precisely what describes, and accurately measures, the degree of interaction between the components of the system. By analogy with the coupling definition in physics and the size of the capacity coupling coefficient, a coupling-degree model of the interaction force between several subsystems is established. The aggregation and divergence effects of industrial clusters are among the important manifestations of an industry's competitiveness and an expression of the regional economic development system. Therefore, variable coupling between the industrial cluster and the regional economic development system can be realized through several specific measures. For example, strengthening the joint reform of cluster enterprises helps exploit regional advantages and resource elements, highlights the concentrated advantages of excellent brands,
and thus strengthens the fundamentals of the cluster's regional economic environment. Second, the development policies for outstanding clusters and the related laws and regulations should be improved and supported as a whole for the regional economic environment: the construction of large-scale public infrastructure should be strengthened, investment in and reform of education funding should continue to increase, and high-quality talent should be delivered to enterprises. Third, innovation networks should be cultivated and the independent innovation system and regional innovation system improved (Fig. 4).
Fig. 4. Comparison of pull force (y-axis: sodium hydroxide solution, mm-1, 0–0.5; x-axis: number of experimental parts (units), 3–18; series: mineral chameleon, potassium iodide, fluoride)
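The coupling degree model referred to in Sect. 3 draws on the physics definition of coupling. The paper does not print the formula in this section, so the sketch below uses one common formulation from the coupling-degree literature (an assumption, not necessarily the exact model of this paper): the coupling degree of n subsystems as the ratio of the geometric mean to the arithmetic mean of their contribution values.

```python
from math import prod

def coupling_degree(u):
    """Coupling degree C in (0, 1] for the contribution values u_i of n
    subsystems: ratio of their geometric mean to their arithmetic mean.
    C approaches 1 when the subsystems develop in step and falls as
    their contribution values diverge."""
    n = len(u)
    return prod(u) ** (1.0 / n) / (sum(u) / n)

print(coupling_degree([0.6, 0.6]))            # identical values -> ~1.0
print(round(coupling_degree([0.9, 0.1]), 3))  # divergent values -> weaker coupling
```

Under this formulation, equal subsystem contributions give full coupling, and the measure decreases monotonically as the contributions spread apart, matching the qualitative behavior the section describes.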
Once industrial agglomeration in space is established, it tends to perpetuate itself and to embed itself deeply in every aspect of the regional economy. The economic actors of industrial clusters cannot be separated from the regional social network structure; they share the same or similar cultural and institutional backgrounds with the regional economic system, and often common social and economic development goals as well. Industrial clusters continuously enhance the competitiveness of the regional economy as they upgrade themselves, while the prosperity of the regional economy in turn provides the basis and conditions for upgrading industrial clusters; the two promote each other.
Coupling Model of Regional Economic System Design
623
4 Experimental Results and Analysis

To verify the effectiveness of the proposed coupling model of the regional economic system based on big data technology, an experiment was carried out. Two different types of economic regions were selected and placed in the same environment; regional economic development and change were observed over different time periods, and the data were recorded continuously. The experimental results are shown in Fig. 5.
Fig. 5. Experimental comparison chart (y-axis: coupling tension, 0–1200; x-axis: data scale (MB), 300–1600; series: conventional method and the method of this paper)
As the chart shows, when the market mechanism struggles to play an effective role, intervention by economic institutions and government policy in industrial clusters and regional economic systems often yields satisfactory results. China is currently in a period of economic system transformation, and this process inevitably raises many problems that the market alone cannot solve. In coupling the development of industrial clusters with regional economic systems, the government therefore needs to formulate relevant economic development strategies and introduce social and economic reform measures, so that government strategies and policies can, to a certain extent, effectively substitute for particular market adjustment processes.
624
H. Jin
5 Conclusion

This paper studies the coupling model of the regional economic system based on big data technology. Internet technology can be used to obtain the coupling degree of the regional economic system effectively in the era of big data. Identifying the coupling model helps us obtain more valuable regional economic information and develop and utilize promising data resources for the regional economy. The research and analysis in this paper show that the regional economic system coupling model based on big data technology has far-reaching significance. Although some research progress has been made recently, many gaps remain to be explored. We must therefore face the difficulties, press forward, and constantly improve the coupling model, obtaining effective resource data and thus better promoting the coupling of China's regional economic system.
International Intelligent Hospital Information System Based on MVC

Lifang Shen, Hexuan Liu, Wangqiang Tian, and Shuai Zhang

School of Information and Control, Shenyang Institute of Technology, Shenyang, China
[email protected]
Abstract. This paper introduces the design and implementation of an international intelligent hospital information system. The Internet and digitization have brought disruptive changes to many industries, including health care. This project helps build an international business environment, promotes the construction of smart hospitals, improves the application of information systems in hospital diagnosis and treatment, and provides patients with efficient, high-quality medical services. User requirements were gathered through an on-site investigation of the hospital, and the system was developed with the ASP.NET Core MVC framework and the Entity Framework. After system testing, it provides a visual management interface for hospital information management that makes it easy to use and maintain the basic information of hospital patients; the data generated at each stage of medical activity are collected, stored, processed, and summarized into various kinds of information, achieving the desired goal.

Keywords: Intelligent hospital · Information system · ASP.NET Core
1 Introduction

With growing competition in the medical industry, the market has become increasingly saturated and the defects of extensive management increasingly exposed, depressing medical-industry profits to varying degrees. To meet the personalized requirements of medical customers and adapt to future development, a healthcare management system providing an overall informatization solution for the medical industry is needed; information is the foundation of a century-old industry. Abroad, most hospitals are privately operated and tend to be commercialized; medical treatment under this model has almost become a service industry, and the high cost of treatment is linked to the salaries of hospital service personnel, realizing the tenet that "the customer is God". Field investigation in major Chinese hospitals, however, found that queues are common, causing large numbers of patients to pile up, and some patients jump the queue, leading to frequent disputes among patients [1]. This system is an international intelligent hospital information system designed on ASP.NET Core and oriented mainly towards internationalization. In the new round of medical reform, the state, hospitals, and software
© The Editor(s) (if applicable) and The Author(s), under exclusive license to Springer Nature Singapore Pte Ltd. 2020
M. Atiquzzaman et al. (Eds.): BDCPS 2020, AISC 1303, pp. 625–631, 2020. https://doi.org/10.1007/978-981-33-4572-0_90
companies have invested substantial manpower and material resources in solving the problem of medical informatization in China. Queues, however, still cannot be resolved in time, so we designed the international smart hospital information system, in which patients receive one-to-one service from making an appointment by phone through seeing a doctor. In addition, the data generated at each stage of medical activity are collected, stored, processed, and summarized into various kinds of information, providing comprehensive information management for the overall operation of the hospital.
2 Existing Technology of the Project

The development tool used for this platform is Microsoft Visual Studio 2019. The technology used is ASP.NET Core MVC, an open-source, cross-platform framework for building modern cloud-based, Internet-connected applications such as web applications and mobile back ends. ASP.NET Core applications run on .NET Core or the .NET Framework. The framework consists of modular components with minimal overhead, so flexibility can be maintained while building a solution. ASP.NET Core is no longer based on System.Web.dll; it is now built on a series of fine-grained, well-designed NuGet packages. MVC, as the name suggests, stands for Model, View, and Controller: a layering pattern for the UI side. It is fundamentally different from the classic three-tier architecture. ASP.NET Core MVC completely separates the front end from the back end, and its abstraction layers support dependency injection and cross-cutting programming patterns [2–5]. The project also uses the Entity Framework, a set of technologies in ADO.NET that supports the development of data-oriented software applications and an ORM framework from Microsoft: it implements an object-oriented data-access interface for a model, so that persisting an object-oriented model does not require understanding the implementation details of storing data in a relational database.
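The MVC separation and the ORM role played by the Entity Framework can be shown schematically. The sketch below uses Python in place of the project's C#/ASP.NET Core stack, and every class and method name is illustrative rather than taken from the actual system:

```python
# Schematic MVC + repository sketch; Python stands in for C#/ASP.NET Core,
# and all names here are hypothetical.
class Patient:                        # Model: the domain data
    def __init__(self, pid, name):
        self.pid, self.name = pid, name

class PatientRepository:              # stands in for the Entity Framework ORM layer
    def __init__(self):
        self._store = {}              # a real app would persist to a database
    def add(self, patient):
        self._store[patient.pid] = patient
    def get(self, pid):
        return self._store[pid]

class PatientView:                    # View: presentation only, no business logic
    @staticmethod
    def render(patient):
        return f"Patient #{patient.pid}: {patient.name}"

class PatientController:              # Controller: mediates between model and view
    def __init__(self, repo):
        self.repo = repo
    def show(self, pid):
        return PatientView.render(self.repo.get(pid))

repo = PatientRepository()
repo.add(Patient(1, "Zhang"))
print(PatientController(repo).show(1))   # Patient #1: Zhang
```

The point of the pattern, as the section notes, is that the view never touches storage and the model never formats output; only the controller wires them together.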
3 System Function Design

The project assigns different permissions to different roles: administrator, PA, cashier, doctor, pharmacist, chief pharmacist, and department director. The administrator manages departments, clinics, and staff. The PA manages patients, appointments, and clinic usage records; cashiers collect fees and check historical records; doctors check schedules and conduct outpatient visits; the pharmacist verifies the prescriptions issued by doctors; the chief pharmacist is responsible for drug maintenance; and the department director manages doctors' information, arranges doctors' schedules, and maintains patient information. Finally, the data generated at each stage of medical activity are collected, stored, processed, and summarized into various kinds of information, providing comprehensive, automated management of all services for the overall operation of the hospital.
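The role-permission division described above can be modeled as a simple lookup table. The sketch below is a hypothetical distillation; the permission names are illustrative, not the system's actual identifiers:

```python
# Hypothetical role -> permission mapping distilled from the role list above.
PERMISSIONS = {
    "administrator": {"department_mgmt", "clinic_mgmt", "staff_mgmt"},
    "pa": {"patient_mgmt", "appointment_mgmt", "clinic_usage_log"},
    "cashier": {"charge", "history_query"},
    "doctor": {"schedule_view", "outpatient_visit"},
    "pharmacist": {"prescription_verify", "dispense"},
    "chief_pharmacist": {"drug_maintenance"},
    "department_director": {"doctor_info_mgmt", "scheduling", "patient_info_mgmt"},
}

def can(role, action):
    """True if the given role is allowed to perform the given action."""
    return action in PERMISSIONS.get(role, set())

print(can("doctor", "outpatient_visit"))  # True
print(can("cashier", "dispense"))         # False
```

A table like this keeps authorization decisions in one place; adding a role or permission is a data change rather than a code change.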
According to the realization goal of the system, the functional module diagram of the system is shown in Fig. 1.
Fig. 1. System function module diagram (main modules of the international intelligent hospital information system: scheduling management and clinic management for the chief physician; treatment and examination at the doctor station; pharmacy verification, dosage, and distribution for the pharmacist; pharmaceutical and non-pharmaceutical management, basic drug information, and drug-incompatibility settings for the chief pharmacist; drug consumption queries for statisticians; and paid/unpaid entry queries and charging for the cashier)
3.1 Administrator Module

In the administrator module, the administrator centrally manages personnel, roles, departments, and constants. Personnel management mainly covers adding staff and viewing their roles; role management assigns different roles, each with its own responsibilities, to the added personnel; department management handles the creation and distribution of the main implementation departments, which can also be viewed in a department list; and constant management contains configuration items such as gender and drug-dose constants [6].

3.2 PA Module

The PA module is divided into the nurse station and patient management. At the nurse station, the PA handles the patient appointment process and post-appointment triage; in patient management, the PA saves, views, and modifies patient information. Triaged patient information will
be transmitted to the interface of the corresponding doctor. The PA is mainly responsible for receiving patients, making appointments for them, and for the triage and management of patients [7]. The appointment page is shown in Fig. 2.
Fig. 2. Appointment Page
3.3 Doctor Module

The doctor module consists of the doctor station, where doctors are mainly responsible for checking and viewing the list of patients, patients' visit records, and their own scheduling information. Patient information is shown in Fig. 3.
Fig. 3. Patient information
3.4 Chief Physician Module

This module mainly includes scheduling management, clinic management, my department, and the doctor station. Through scheduling management, the department director assigns each doctor's schedule, which the doctor can then see in his or her own interface; clinic management is mainly responsible for the creation and maintenance of clinics; and the doctor station is mainly for viewing the list of patients to be examined and seeing patients.

3.5 Pharmacist Module

The pharmacist module mainly handles prescriptions and is responsible for verifying, issuing, and dispensing drugs in the pharmacy. During verification, the pharmacist must confirm twice before the verification succeeds, and only verified prescriptions can be dispensed.

3.6 Chief Pharmacist Module

The chief pharmacist module is divided into drug management and non-drug management. Drug management covers the chief pharmacist's setting of drug-compatibility contraindications, the drug store, and the basic information of drugs. Non-drug management mainly covers patient examination items, such as CT scans, and the management of their costs.

3.7 Charging Module

Charge management in the charging module mainly handles charging for drugs and registration fees from the charge list, and also allows checking paid and unpaid records.

3.8 Statistics Module

In the statistics module, the overall consumption of drugs can be viewed, displayed mainly through ECharts views.
4 Software Quality Assurance

To ensure software quality, complete requirements must be obtained first. During the requirements analysis of the international smart hospital information system, we did extensive work: we investigated several large international hospitals and communicated with representative users on all sides of the customer's business to become fully familiar with it, maintaining that communication from requirements through design and keeping the users' business experts involved in our requirements, analysis, and design [8]. Second, a test plan was written after requirements analysis, and corresponding tests were conducted at each stage of development to ensure that the code met the corresponding requirements. During coding, each class is unit tested,
each function point or module receives an integration test, and each integration test iterates on the products that passed previously, i.e., everything tested successfully before is included in the current test. Each completed function and module is thus a working, visible product on completion, and users are welcome to witness the results of our integration tests. After coding is finished, a final integration test is performed, and the project is then system-tested by an independent test team [9, 10].
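The per-class unit testing described above can be illustrated with Python's standard unittest module; the paper does not name its actual test tooling, and the fee helper below is hypothetical:

```python
import unittest

def register_fee(base, insurance_rate):
    """Hypothetical billing helper: patient-payable registration fee
    after applying an insurance coverage rate."""
    return round(base * (1 - insurance_rate), 2)

class RegisterFeeTest(unittest.TestCase):
    """One test class per production class, as the QA process above describes."""
    def test_full_price(self):
        self.assertEqual(register_fee(20.0, 0.0), 20.0)

    def test_insured(self):
        self.assertEqual(register_fee(20.0, 0.75), 5.0)

if __name__ == "__main__":
    unittest.main(argv=["ignored"], exit=False, verbosity=2)
```

Tests like these run at every stage, so each iteration of the integration test automatically re-exercises everything that passed before.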
5 Conclusions

The innovation of this project lies mainly in the hospital workflow, with the patient at the center of the service: a staff member accompanies the patient from the appointment through seeing the doctor in person and the whole treatment process, providing one-on-one accompaniment and care of the best quality. After a patient's visit is completed, the patient's basic information is recorded so that it can be retrieved the next time the patient comes for treatment. The system also equips pharmacists to handle the dispensing designated by doctors, to flag conflicts between drugs, and to distribute the designated drugs. It also provides statistics for hospital managers, chief physicians, chief pharmacists, and doctors, displayed within the system. The Internet and digitization have brought disruptive changes to many industries, including health care. This project has practical significance for building an international business environment, promoting the construction of smart hospitals, improving the application of information systems in hospital diagnosis and treatment, and providing patients with efficient, high-quality medical services; in this sense, the project is pioneering.

Acknowledgements. This project comes from the innovation training project of Shenyang Institute of Technology and won the third prize in the Computer Competition for College Students in Liaoning Province in 2020.
References
1. Mehrdad, F., Rangraz, J.F., Esmaeil, A.: Factors affecting successful implementation of hospital information systems. Acta Inf. Med. 24(1), 1002–1006 (2016)
2. Han, Y.: Design and implementation of ancient culture display protection system based on three-layer architecture and MVC design pattern. Inf. Technol. Informatization 7, 9–11 (2020). (in Chinese)
3. Sunardi, A., Suharjito: MVC architecture: a comparative study between Laravel framework and Slim framework in freelancer project monitoring system web based. Procedia Comput. Sci. 157 (2019)
4. Kamalraj, N.: Artificial bee colony based multiview clustering ABC MVC for graph structure fusion in benchmark datasets. J. Trend Sci. Res. Dev. 4(2), 476–480 (2020)
5. Sunardi, A., Suharjito: MVC architecture: a comparative study between Laravel framework and Slim framework in freelancer project monitoring system web based. Procedia Comput. Sci. 157, 8 (2019)
6. Khajouei, R., Farahani, F.: A combination of two methods for evaluating the usability of a hospital information system. BMC Med. Inf. Decision Making 20(7), 191 (2020)
7. Hu, M., Xu, X., Li, X., et al.: Managing patients' no-show behaviour to improve the sustainability of hospital appointment systems: exploring the conscious and unconscious determinants of no-show behaviour. J. Clean. Prod. 269 (2020)
8. Jiao, L., Xiao, H., Zhu, X., Zhao, X., Jiang, Y.-Z.: Factors influencing information service quality of China hospital: the case study since 2017 of a hospital information platform in China. Comput. Math. Methods Med. 2020 (2020)
9. Pourasghar, F., Abdollahi, L.: Use of information in hospitals: a qualitative study. Taṣvīr-i salāmat 10(1), 876–881 (2019)
10. Zhao, L., Liang, X., Hu, X.: Research on the quality assurance method of spacecraft software based on software testing. Sci. Discov. 6(1), 301–304 (2018)
Heterogeneous Network Multi Ecological Big Data Fusion Method Based on Rotation Forest Algorithm

Yun Liu and Yong Liu

School of Information Engineering, Chaohu University, Chaohu, China
[email protected]
Abstract. With the rapid development of the economy and of science and technology, the ecological environment has been greatly affected. In recent years the concept of sustainable development has received increasing attention, and protection of the ecological environment is becoming more and more important. With the advent of the era of big data, ecological data also show a trend toward diversification, and the fusion of multi ecological big data helps address increasingly serious ecological problems. This paper therefore proposes a heterogeneous network multi ecological big data fusion method based on the rotation forest algorithm. In the study, rotation forest is introduced into data fusion, an ecological region is selected as the research area, and 500 of 2000 samples are used as the test set. The proposed method is verified in terms of fusion confidence, overall accuracy, and computational time efficiency, and compared with two other methods. The results show that the highest fusion confidence of method 1 is 0.8, that of method 2 is 0.65, and that of the proposed method is 0.92. In overall accuracy and computational efficiency, the proposed rotation-forest-based method likewise outperforms the other two methods.

Keywords: Rotation forest algorithm · Heterogeneous network · Multi ecological big data · Data fusion
1 Introduction

Ecological big data includes strongly structured data such as vector and raster data as well as unstructured data [1, 2]. Data are acquired mainly through Internet transmission, manual collection, and the integration of scientific research results, which can yield data from different spaces [3]. As we all know, ecological civilization is the foundation of the social civilization system, and socialist material, political, and spiritual civilization are inseparable from it. At the same time, with the development of the social economy, the ecological environment has been damaged ever more seriously. With the advent of the big data era, ecological data have become massive and varied, and this multiplicity of ecological big data makes its processing more difficult [4, 5].
© The Editor(s) (if applicable) and The Author(s), under exclusive license to Springer Nature Singapore Pte Ltd. 2021
M. Atiquzzaman et al. (Eds.): BDCPS 2020, AISC 1303, pp. 632–639, 2021. https://doi.org/10.1007/978-981-33-4572-0_91
With the increasing amount of data, the processing speed of data is a consideration [6]. If the massive data cannot be processed in time, the information will lose its timeliness, and the value of data is difficult to realize. Data fusion is a data processing method. Using data fusion method to fuse multiple heterogeneous ecological data, researchers can easily see the trend of data and the trend of ecological development more intuitively, and provide solutions for solving the problems of ecological development [7, 8]. The existing data fusion methods have certain defects. In this paper, the rotation forest algorithm is introduced into heterogeneous network multi ecological big data fusion, which improves the performance of data fusion in all aspects, and provides certain reference value for solving the problem of ecological data fusion [9, 10]. In this paper, the rotation forest algorithm is introduced in the data fusion, and the classification accuracy is improved by using the characteristics of the rotation forest algorithm. In the study, we selected 500 test samples as test sets, and verified the proposed method from three aspects of fusion confidence, overall accuracy and computational time efficiency. The comparison between the proposed method and the other two data fusion methods shows that the proposed method has the advantages of high fusion confidence, high precision and short calculation time.
2 Rotating Forest, Heterogeneous Network and Data Fusion

2.1 Rotating Forest
The rotation forest algorithm is an ensemble classification algorithm based on feature transformation. First, the feature set of the samples is randomly partitioned; each feature subset is then transformed and recombined to obtain new samples, which are used to train the base classifiers. This preprocessing increases the diversity among the base classifiers and improves classification accuracy. The basic principle is as follows. Suppose X is the initial training data set (N × n), where N is the number of training samples and n the number of selected classification features; Y is the corresponding class-label vector (N × 1) for the training set X; F is the feature set; K is the number of feature subsets; D1, D2, …, DL denote the L base classifiers; and {x1, x2, …, xc} is the set of class labels. The rotation forest is constructed as follows:

(1) F is randomly divided into K subsets, each containing M = n/K attributes.
(2) Let Fi,j be the j-th attribute subset used for the training of classifier Di; principal component analysis is applied to the data restricted to this subset.
(3) Step (2) is applied to each feature subset, and all the principal component coefficients obtained are stored in a block-diagonal coefficient matrix Ri:

\[
R_i =
\begin{bmatrix}
a_{i1}^{(1)}, \ldots, a_{i1}^{(M_1)} & 0 & \cdots & 0\\
0 & a_{i2}^{(1)}, \ldots, a_{i2}^{(M_2)} & \cdots & 0\\
\vdots & \vdots & \ddots & \vdots\\
0 & 0 & \cdots & a_{iK}^{(1)}, \ldots, a_{iK}^{(M_K)}
\end{bmatrix}
\tag{1}
\]
(4) According to the order of the original attribute set, Ri is rearranged to form an n × n rotation matrix R_i^a, and a base classifier is trained on the rotated data. Repeating this procedure L times yields L base classifiers. Each sample x to be classified is first transformed as in training and then classified by every Di, producing L classification results. The confidence that x belongs to class x_j is

\[
\mu_j(x) = \frac{1}{L}\sum_{i=1}^{L} d_{i,j}\left(x R_i^{a}\right), \quad j = 1, \ldots, c
\tag{2}
\]

and x is assigned to the class with the greatest confidence:

\[
x \rightarrow \arg\max_{w \in C} \mu_w(x)
\tag{3}
\]

2.2 Heterogeneous Network and Data Fusion

A heterogeneous network is a network composed of devices and systems from different manufacturers; in most cases it can run different protocols and support different functions or applications. Data fusion, also known as multi-sensor information fusion, studies the information processing of multi-sensor systems. In short, data fusion processes the information from multiple sensors or sources, coordinating and optimizing it to obtain more accurate and reliable data. According to the level of abstraction of sensor processing, data fusion can be divided into three categories: data-level, feature-level, and decision-level fusion.
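The rotation forest construction of Sect. 2.1, steps (1)–(4) and Eqs. (2)–(3), can be sketched as follows. This is a minimal illustration, not the authors' implementation: a nearest-centroid rule stands in for the unspecified base classifiers, and the bootstrap resampling used in the standard rotation forest is omitted for brevity.

```python
import numpy as np

def pca_components(X):
    """Principal-axis coefficients of X (columns are principal components)."""
    Xc = X - X.mean(axis=0)
    _, _, Vt = np.linalg.svd(Xc, full_matrices=False)  # rows of Vt = components
    return Vt.T

def rotation_matrix(X, K, rng):
    """Steps (1)-(3): split the n features into K random subsets, run PCA on
    each subset, and place the coefficients block-wise in the original
    feature order (the rearranged matrix R_i^a of step (4))."""
    n = X.shape[1]
    subsets = np.array_split(rng.permutation(n), K)
    R = np.zeros((n, n))
    for idx in subsets:
        R[np.ix_(idx, idx)] = pca_components(X[:, idx])
    return R

class RotationEnsemble:
    """Minimal rotation-forest-style ensemble; nearest-centroid base classifiers."""
    def __init__(self, L=10, K=2, seed=0):
        self.L, self.K = L, K
        self.rng = np.random.default_rng(seed)

    def fit(self, X, y):
        self.classes_ = np.unique(y)
        self.members_ = []
        for _ in range(self.L):
            R = rotation_matrix(X, self.K, self.rng)
            Z = X @ R                              # rotated training data
            centroids = np.stack([Z[y == c].mean(axis=0) for c in self.classes_])
            self.members_.append((R, centroids))
        return self

    def predict(self, X):
        # Eqs. (2)-(3): average the per-member votes d_{i,j}(x R_i^a)
        # and assign each sample to the class with the highest confidence.
        conf = np.zeros((len(X), len(self.classes_)))
        for R, centroids in self.members_:
            d = ((X @ R)[:, None, :] - centroids[None]) ** 2
            conf[np.arange(len(X)), d.sum(-1).argmin(1)] += 1.0 / self.L
        return self.classes_[conf.argmax(1)]

# demo on two well-separated synthetic clusters
rng = np.random.default_rng(1)
X = np.vstack([rng.normal(0, 1, (60, 4)), rng.normal(5, 1, (60, 4))])
y = np.array([0] * 60 + [1] * 60)
model = RotationEnsemble(L=5, K=2, seed=0).fit(X, y)
print((model.predict(X) == y).mean())  # expect close to 1.0
```

Because each PCA block is orthonormal, the assembled R_i is orthogonal, so each member sees a rotated but distortion-free view of the data; the diversity between members comes from the random feature splits.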
3 Experimental Design

(1) Sample description. A certain ecological region is selected as the research area for the study of multi ecological big data fusion. A total of 2000 sample points are determined, comprising 1500 training samples and 500 test samples.
(2) Experimental parameters. For the selected sensor nodes, the transmission radius is 100 m, the data-acquisition radius is 20 m, the transmission packet size is 500 B/s, the node transmission bandwidth is 100 kb/s, and the number of disjoint routes is 3.
(3) Algorithm testing. The heterogeneous network multi ecological big data fusion method based on the rotation forest algorithm is compared with two other data fusion methods in terms of fusion confidence, overall accuracy, and computational time efficiency, to highlight its advantages. The experiment uses one PC terminal, one coordinator node, one routing node, and six terminal acquisition nodes.
4 Analysis of Heterogeneous Network Multi Ecological Big Data Fusion Method Based on Rotation Forest Algorithm

Nowadays more and more data can be obtained, and its types are increasingly complex; a single classifier cannot meet the needs of data classification. Because of their good classification accuracy and stability, ensemble algorithms have become a focus of researchers' attention. An ensemble algorithm applies multiple weak classifiers with different features to the same target and synthesizes their classification results, improving the overall generalization performance and stability of the algorithm. In an ensemble system a single classifier is an individual within the ensemble, and is therefore called a base classifier. The rotation forest algorithm is an ensemble classification algorithm built on base classifiers: it randomly partitions the attribute set of the samples and applies a linear transformation to each partition, transforming the attribute subsets and increasing the differences between them. In addition, the algorithm can use the transformed attribute subsets to select samples and thus train different classifiers, and it can be used for both classification and regression. Existing data fusion methods suffer from low classification accuracy. Given the diversified characteristics of ecological data, this paper applies the rotation forest algorithm to heterogeneous network multi ecological big data fusion, proposes a fusion method based on it, and compares it with two other methods in terms of fusion confidence, overall accuracy, and computational efficiency to highlight its advantages.

4.1 Confidence Analysis of Algorithm Fusion
The fusion-confidence results of the three methods are shown in Fig. 1. As the figure shows, the fusion confidence of method 1 ranges from a low of 0.21 to a high of 0.8; method 2 ranges from 0.28 to 0.65; and the proposed method ranges from 0.32 to 0.92. When the number of samples grows from 250 to 300, the fusion confidence of every method changes markedly: method 1 rises from 0.28 to 0.71, method 2 from 0.38 to 0.52, and the proposed method from 0.45 to 0.86. The figure also shows that the fusion confidence of method 1 is slightly lower than that of method 2 until the number of samples reaches 300, after which method 2 falls below method 1, while the confidence of the proposed method is always higher than that of the other two. The heterogeneous network multi ecological big data fusion method based on the rotation forest algorithm therefore has a clear advantage in fusion confidence.
Method 1 Method 2
1 0.9
Fusion confidence
0.8 0.7 0.6 0.5 0.4 0.3
0.92 0.92 0.86 0.88 0.89 0.80 0.75 0.75 0.78 0.71 0.62 0.65 0.58 0.52 0.54
The method of this paper
0.42 0.45 0.38 0.38 0.36 0.36 0.32 0.35 0.32 0.28 0.25 0.29 0.25 0.26 0.28 0.21 0.22 0.23
0.2 0.1 0 0
50
100 150 200 250 300 350 400 450 500
Number of samples Fig. 1. Comparison and analysis of fusion confidence of three methods
4.2
Overall Accuracy Analysis
We know that in the process of data fusion the samples must be classified, and the ensemble size affects the classification accuracy of the rotation forest:

$$p_c = \sum_{k=1}^{m} p_{kk} / N \qquad (4)$$
Here $p_c$ is the overall classification accuracy, $m$ is the number of classes, $N$ is the total number of samples, and $p_{kk}$ is the number of correctly classified samples of class $k$. We analyze the classification accuracy of the three methods; the results are shown in Fig. 2. When the number of samples is 50, the overall accuracy of method 1 is 70%, that of method 2 is 75%, and that of the proposed method is 82%. At 100 samples the accuracies are 76%, 81%, and 85%, and at 150 samples they are 78%, 84%, and 87%, respectively. Beyond 150 samples the overall accuracy of each method stabilizes and no longer changes. The heterogeneous network multi-ecological big data fusion method based on the rotation forest algorithm thus achieves higher classification accuracy than the other two methods.
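Equation (4) is simply the trace of the confusion matrix divided by the total sample count. A small numeric illustration (the matrix values below are invented for illustration, not taken from the paper's experiments):

```python
import numpy as np

# Confusion matrix C[k, j]: samples of true class k predicted as class j.
# Values are made up purely for illustration.
C = np.array([[40,  3,  2],
              [ 4, 35,  1],
              [ 1,  2, 42]])

N = C.sum()                 # total number of samples
p_c = np.trace(C) / N       # Eq. (4): sum of the diagonal p_kk over N
print(round(p_c, 4))
```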
Fig. 2. Classification accuracy analysis of the three methods (overall accuracy in % vs. number of samples for method 1, method 2, and the method of this paper)
4.3
Computational Time Efficiency Analysis
Computational time efficiency is another index for evaluating a data fusion method. In this paper the data are divided into four groups, with data volumes of 1 GB, 12 GB, 24 GB, and 120 GB, and the computational time efficiency of the three methods is compared. The results are shown in Table 1.

Table 1. Comparison of the computational time efficiency of the three methods

Method                    1 GB     12 GB   24 GB   120 GB
Method 1                  0.023 s  180 s   420 s   1500 s
Method 2                  0.025 s  175 s   380 s   1350 s
The method of this paper  0.021 s  170 s   340 s   800 s
It can be seen from Table 1 that at 1 GB there is no significant difference among the three methods. At 12 GB, method 1 takes 180 s, method 2 takes 175 s, and the proposed method takes 170 s, so the gap is still small at this scale. At 24 GB, method 1 takes 420 s, method 2 takes 380 s, and the proposed method takes 340 s; the gap begins to widen, with the proposed method the fastest. When the data volume reaches 120 GB, the differences become large: method 1 takes 1500 s, method 2 takes 1350 s, and the proposed method takes 800 s. The heterogeneous network multi-ecological big data fusion method based on the rotation forest algorithm therefore has a clear advantage in computational time efficiency when the amount of data to be processed is large.
5 Conclusions

With the concept of sustainable development put forward, ecological development has received more and more attention, and as the amount of information grows, ecological data are becoming diversified and large. The fusion of heterogeneous network multi-ecological big data makes the trend of ecological development easier to observe intuitively and provides solutions to problems in ecological development. This paper therefore proposes a heterogeneous network multi-ecological big data fusion method based on the rotation forest algorithm and verifies it in three respects: fusion confidence, overall accuracy, and computational time efficiency. The results show that, compared with the other two methods, the proposed method achieves higher fusion confidence and overall accuracy with shorter computation time, demonstrating its advantages. This research provides a useful reference for heterogeneous network multi-ecological big data fusion.

Acknowledgements. This work was supported by the key research project of natural science in Anhui Province (KJ2019A0681).
References
1. Zhao, F., Zhang, L.Y., Zhao, M.M., et al.: Architecture and technical exploration of big data platform for ecological environment. Chin. J. Ecol. 36(3), 824–832 (2017)
2. Mulder, C., Mancinelli, G.: Contextualizing macroecological laws: a big data analysis on electrofishing and allometric scalings in Ohio, USA. Ecol. Complex. 31, 64–71 (2017)
3. Song, S., Tian, D., Li, C., et al.: Genome Variation Map: a data repository of genome variations in BIG Data Center. Nucleic Acids Res. 46(D1), D944 (2018)
4. Serra-Diaz, J.M., Enquist, B.J., Maitner, B., et al.: Big data of tree species distributions: how big and how good? Forest Ecosyst. 4(1), 30 (2017)
5. Song, M.L., Fisher, R., Wang, J.L., et al.: Environmental performance evaluation with big data: theories and methods. Ann. Oper. Res. 270(1), 459–472 (2018)
6. Caron, F., Duflos, E., Pomorski, D., et al.: GPS/IMU data fusion using multisensor Kalman filtering: introduction of contextual aspects. Inf. Fusion 7(2), 221–230 (2017)
7. Vo, A.V., Truong-Hong, L., Laefer, D.F., et al.: Processing of extremely high resolution LiDAR and RGB data: outcome of the 2015 IEEE GRSS data fusion contest—Part B: 3-D contest. IEEE J. Sel. Top. Appl. Earth Obs. Remote Sens. PP(99), 1–16 (2017)
8. Liao, W., Huang, X., Van Coillie, F., et al.: Processing of multiresolution thermal hyperspectral and digital color data: outcome of the 2014 IEEE GRSS data fusion contest. IEEE J. Sel. Top. Appl. Earth Obs. Remote Sens. 8(6), 2984–2996 (2017)
9. Luyang, J., Taiyong, W., Ming, Z., et al.: An adaptive multi-sensor data fusion method based on deep convolutional neural networks for fault diagnosis of planetary gearbox. Sensors (Switzerland) 17(2), 414 (2017) 10. Rosa, A.R.D., Leone, F., Scattareggia, C., et al.: Botanical origin identification of Sicilian honeys based on artificial senses and multi-sensor data fusion. Eur. Food Res. Technol. 244(2), 1–9 (2017)
Custom Tibetan Buddhist Ceramics Used by Royalties in the Qing Dynasty Based on a Han-Tibetan Cultural Evolutionary Algorithm

Jie Xie

Art School, Huaiyin Normal University, Huaian 223001, Jiangsu, China
Dankook University, Yongin 16890, Korea
[email protected]
Abstract. From the perspective of Han-Tibetan cultural exchange, the social and historical backgrounds of the fine collections of Tibetan porcelain produced by the Jingdezhen official kiln in the Ming and Qing dynasties, integrating elements of the Han and Tibetan cultures, are reviewed. The artistic design characteristics of the modeling and color of Tibetan porcelain are analyzed to appreciate how Tibetan porcelain embodies the thoughts of Tibetan Buddhism. Through field investigation, consultation of the literature, and comparison of photographs of real pieces, the study concludes that the "combination of Sanskrit and Han styles" was the main feature of the Tibetan porcelain produced by the Jingdezhen official kiln in the Ming and Qing dynasties. It presented a strong religious theme and artistic aesthetic characteristics of deification, forming a unique Sanskrit art style of Tibetan porcelain.

Keywords: Han-Tibetan culture · Jingdezhen official kiln · Tibetan porcelain
1 Introduction

Human society and nature are the two main forces that shape landscape structure and drive hierarchical landscape processes. Since two-thirds of the earth's land is covered by agricultural land, livestock grazing areas, and managed forests, human activities play an essential role in creating landscapes [1, 2]. The vast diversity of ceramics survives within these human-influenced landscapes. In many cases these landscapes can be deemed "Tibetan Buddhist ceramics in the Qing Dynasty", representing an essential reserve of natural and cultural capital [3, 4]. The Tibetan Buddhist ceramics used by royalties in the palace in the Qing Dynasty belonged to a geographical area in which the relationship between human activities and the environment had established Han-Tibetan cultural and socio-economic models and feedback mechanisms that control the existence, distribution, and abundance of species collections. There were many kinds of Tibetan Buddhist ceramics in the royal court of the Qing Dynasty, but their character depended on the initial landscape conditions and the culture of a particular historical period. After the establishment of the Qing Dynasty, Lamaism in Tibetan Buddhism was hailed as the state religion [5, 6].

© The Editor(s) (if applicable) and The Author(s), under exclusive license to Springer Nature Singapore Pte Ltd. 2021
M. Atiquzzaman et al. (Eds.): BDCPS 2020, AISC 1303, pp. 640–647, 2021. https://doi.org/10.1007/978-981-33-4572-0_92
Lamaism is also called Huangjiao, the Yellow Sect; it is the Gelug Sect founded by Tsongkhapa in Tibet in the early Ming Dynasty. After Tsongkhapa's death, his two great disciples were held to reincarnate as Hubilahan (incarnations) and to pass on their spiritual authority, becoming the lines of the subsequent Dalai and Panchen Lamas. In the late Ming Dynasty the Mongolian Altan Khan embraced Lamaism, and from then on Huangjiao was introduced to the various Mongolian ministries and spread throughout Tibet, Qinghai, and Mongolia. The Qing Emperor Taizong set "promoting Huangjiao to pacify Mongolia" (Xinghuang Anmeng) as a basic national policy, which was strictly followed by later generations [7]. As early as the fourth year of Chongde (1639), Emperor Taizong sent an envoy to the Dalai Lama and put forward the idea of "extending to monks, preaching Buddhism, and benefiting all beings", expressing his intention to invite him. After the Qing entered the pass, in the ninth year of the Shunzhi reign (1652) the fifth Dalai Lama came to Beijing and met Emperor Shunzhi, who built the West Yellow Temple for his residence and conferred on him a golden book and golden seal inscribed in three languages, together with a long honorific title. Since then, the title and political status of the Dalai Lama in Tibet have been officially established by the royal court, and the Dalai Lama was recognized as a national Buddhist leader [8, 9]. Hence the system under which the Dalai Lama was conferred by the central government was established, and with it the affiliation of the Tibetan local government to the Qing central government. The reason the Qing emperors respected and treated the Dalai Lama in this way was also political: in Tibet the Dalai was not only the political and religious leader, but the Gelug Sect (Huangjiao) he led was also the general belief of the various Mongolian ministries. Respecting the Dalai Lama could therefore both maintain a good relationship with Tibet and soothe the Mongolian people, making it an active policy for expanding and maintaining the rule of the Qing Dynasty throughout the country [10]. Emperor Kangxi, by his personal likes and dislikes, was interested in neither Taoism nor Buddhism. Nevertheless, he still adhered to the established policy of "respecting the Dalai Lama and pacifying Mongolia", and from time to time assigned people to visit the Dalai Lama and the Panchen Lama to pay respects and bestow valuables. Emperor Kangxi also built the Huizong Temple at Dolonnor in 1691 and, in 1713, built the Renren Temple and the Shanshan Temple in Rehe. Emperor Yongzheng likewise acted in accordance with his ancestors' instructions. Before ascending the throne, he funded the purchase of the Fayuan Temple and renamed it the Songzhu Temple as the residence of the Zhangjia Khutuktu. In 1727, Emperor Yongzheng set aside more than 100,000 yuan to build a temple at Dolonnor and bestowed it on the third Zhangjia Khutuktu as a residence.
2 Functional Complexity of Tibetan Buddhist Ceramics Used by Royalties in the Palace During the Qing Dynasty

The complexity of Tibetan Buddhist ceramics during the Qing Dynasty is shown in three main parts: nature, culture, and economy. Natural complexity is mainly shown in forest remnants and animal communities in which suitable seasonal habitats are identified. The complexity of culture is closely related to the diverse uses human beings make of Han-Tibetan culture and to the various ethical and religious beliefs about land use (Fig. 1a). The complexity of the economy is related to the diversified use of local Han-Tibetan culture: local seasonal resources are limited, forcing the local economy to diversify for sustainable development. For example, in the "coltura mista" (a fine-grained mosaic of fields, orchards, woodlands, and hedges) of the Tuscan landscape in Italy, a wide range of products can be found, reflecting the complexity of the microclimate and the varied features of hilly and mountainous terrain.
Fig. 1. Relationship between natural, cultural, and economic capital based on past, present, and future conditions
The dotted arrows above indicate weak connections between processes. (a) In the past, when Tibetan Buddhist ceramics were dominant in the court of the Qing Dynasty, each type of capital influenced the others through feedback mechanisms. (b) At present, cultural capital is only weakly related to the other two capital types, and economic capital affects natural capital without feedback. (c) In the future, once the public is educated about the unsustainability of the current development model, the relationship between cultural and natural capital can be restored. The basic stress process in the Tibetan Buddhist ceramics of the Qing court was a result of the joint effects of natural forces (such as wind, rain, and succession) and human activities (such as agriculture, forestry, and livestock grazing). This process affected two related attributes of the Tibetan Buddhist ceramics of the Qing Dynasty:
Vulnerability and resilience. Vulnerability refers to the fragility of the Han-Tibetan cultural system to changes in its composition and structure caused by disturbance mechanisms. It can be used as an indicator of the condition of the Han-Tibetan cultural system or of the landscape. Vulnerability has become a popular concept in environmental assessment and evaluation because it can describe the turnover rate of species and processes in the whole Han-Tibetan cultural system. Against the background of the Tibetan Buddhist ceramics in the royal palace of the Qing Dynasty, vulnerability can also be used to monitor changes in cultural and economic diversity.

In this paper, the effect of the Han-Tibetan cultural evolutionary algorithm on the state $a = \langle a \rangle + \delta a$ is analyzed, where $\delta a$ denotes the evolutionary order of the Han-Tibetan culture. In addition, it is verified that the lowest-order Han-Tibetan cultural evolutionary algorithm can generate the diagram of the Han-Tibetan cultural evolutionary algorithm as follows:

$$Y = [y_1, \ldots, y_n] \in R^{h \times n} \qquad (1)$$

In the above equation, $X' = \langle a \rangle$, $Y' = a = \langle \delta a \, y \, \delta a \rangle$, and $b$ are dissipation parameters. In general, $x'_n$, $y'_n$, and $z'_n$ are complex numbers, where $\bar{x}'_n$ is the complex conjugate of $x'_n$, and the same holds for $z'_n$. Setting $r = 3.8$ and $b = 3.32$, Eq. (2) is iterated from real initial parameters $x'_0$, $y'_0$, and $z'_0$; hence all successive values $x'_n$, $y'_n$, and $z'_n$ are real (a more detailed discussion is given in the "Appendix"). In order to achieve high complexity and high randomness between the generated cultures of the Qing Dynasty, the evolutionary algorithm diagram of the Han-Tibetan culture is independently coupled with an NCML, as shown in the following equation:

$$z_{n+1}(j) = (1 - \varepsilon)\phi(z_n(j)) + \varepsilon\phi(z_n(j+1)) \qquad (2)$$

In the above equation, $n = 0, 1, \ldots, L-1$ stands for the time index; $j = 1, 2, \ldots, T$ stands for the lattice state index; $\phi$ stands for the evolutionary algorithm diagram of the Han-Tibetan culture; $\varepsilon \in (0, 1)$ stands for the coupling constant; $L$ is the length of the plain text; and $T$ is the maximum value of the lattice state index. For the Han-Tibetan cultural evolutionary algorithm diagram, $T$ is chosen as 2 or 3, and $\varepsilon = 0.001$ is selected for the other parameters so that the system exhibits the desired features of the Han-Tibetan cultural evolutionary algorithm. In addition, the periodic boundary condition $z_n(j+T) = z_n(j)$ is imposed on the system. Applying Eq. (1) to Eq. (3), the coupling of the two-dimensional logistic map can be defined as follows:

$$x_{n+1} = (1 - \varepsilon)\phi(x_n) + \varepsilon\phi(y_n), \qquad y_{n+1} = (1 - \varepsilon)\phi(y_n) + \varepsilon\phi(x_n) \qquad (3)$$

Similarly, the coupling of the Han-Tibetan cultural evolutionary algorithm diagram can be defined as follows:

$$x'_{n+1} = (1 - \varepsilon)\phi(x'_n) + \varepsilon\phi(y'_n), \qquad y'_{n+1} = (1 - \varepsilon)\phi(y'_n) + \varepsilon\phi(z'_n), \qquad z'_{n+1} = (1 - \varepsilon)\phi(z'_n) + \varepsilon\phi(x'_n) \qquad (4)$$
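The NCML coupling of Eq. (2) is straightforward to simulate. The sketch below assumes the local map $\phi$ is the logistic map with $r = 3.8$ (the text sets $r = 3.8$ but does not spell out $\phi$ explicitly), uses $\varepsilon = 0.001$, and enforces the periodic boundary $z_n(j+T) = z_n(j)$ via a cyclic shift:

```python
import numpy as np

def logistic(z, r=3.8):
    """Assumed local map phi: the logistic map with r = 3.8."""
    return r * z * (1.0 - z)

def ncml_step(z, eps=0.001):
    """One step of the one-way coupled map lattice of Eq. (2):
    z_{n+1}(j) = (1 - eps) * phi(z_n(j)) + eps * phi(z_n(j+1)),
    with the periodic boundary z_n(j + T) = z_n(j) via np.roll."""
    phi = logistic(z)
    return (1.0 - eps) * phi + eps * np.roll(phi, -1)

T = 3                              # lattice size
z = np.array([0.1, 0.2, 0.3])      # initial lattice state
for _ in range(100):
    z = ncml_step(z)
print(z)                           # chaotic trajectory confined to [0, 1]
```

Because the logistic map with $r = 3.8$ maps $[0, 1]$ into itself and each update is a convex combination of two such values, the trajectory stays in the unit interval while remaining chaotic.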
3 Modern Han-Tibetan Culturology of Tibetan Buddhist Ceramics Used by Royalties in the Palace in the Qing Dynasty

The modern technological landscape created by Han-Tibetan culture differs from the Tibetan Buddhist ceramics of the Qing court in many respects. In modern landscapes, economic decision-making is often not balanced by equal consideration of the Han-Tibetan cultural process. Where the Han-Tibetan cultural elements are not considered, focusing only on economic elements creates a fragile new technological system within the Han-Tibetan culture, which can result in air and water pollution, damage from wild species, and reduced ceramic diversity. In other words, the structure of the mosaic is dominated by economic policy without feedback from natural processes. Such technologies reduce the resilience with which natural systems can accommodate natural disturbances, thereby significantly reducing their capability to resist environmental disturbance. For example, when floods and landslides are stronger than the structures that prevent them, these natural processes become ecological disasters (Fig. 2).
Fig. 2. Several different Tibetan Buddhist ceramics used by royalties in the palace in the Qing Dynasty
A dangerous feature of this development is the relatively slow time scale on which natural systems respond to the disturbance mechanisms produced by Han-Tibetan culture. The responses of human and natural systems to disturbances on different time scales increase the rate of natural system degradation and make these systems vulnerable to subsequent disruptions. Hence, in the modern environment, it is necessary to consider not only the gross income from food but also the costs and benefits of Han-Tibetan culture in the processing process to achieve a real economic balance. To balance costs and benefits in a broad sense, it is important to monitor the rate of natural consumption and the loss of services provided by the Han-Tibetan cultural system. Indeed, in the Mediterranean region, most of the surviving Qing-court Tibetan Buddhist ceramics are of high value in terms of ceramic diversity and Han-Tibetan cultural diversity.
4 Evaluation of the Value of Tibetan Buddhist Ceramics in the Qing Dynasty

The ceramic culture evaluation method can measure not only the efficiency value of each evaluation link but also the volume of redundant inputs and insufficient outputs of a specific evaluation link compared with the optimal one, providing corresponding directions for efficiency improvement. Ceramic culture evaluation methods have two essential characteristics: firstly, the efficiency measurement is not affected by the units used to measure inputs and outputs; secondly, the measured efficiency value decreases strictly monotonically as the input-output slacks change. It is assumed that there are $n$ decision-making units (DMUs), each with an input vector and an output vector, $X \in R^m$ and $Y \in R^h$ respectively. The input and output matrices are defined as $X = [x_1, \ldots, x_n] \in R^{m \times n}$ and $Y = [y_1, \ldots, y_n] \in R^{h \times n}$ with $X > 0$ and $Y > 0$, which gives the production possibility set under constant returns to scale:

$$P = \{(x, y) \mid x \geq X\lambda,\; y \leq Y\lambda,\; \lambda \geq 0\} \qquad (5)$$

The ceramic culture evaluation based on slack measures is expressed as follows:

$$\rho = \min \frac{1 - (1/m)\sum_{i=1}^{m} s_i^- / x_{i0}}{1 + (1/h)\sum_{r=1}^{h} s_r^+ / y_{r0}}
\quad \text{s.t.} \quad x_0 = X\lambda + s^-,\; y_0 = Y\lambda - s^+,\; \lambda \geq 0,\; s^- \geq 0,\; s^+ \geq 0 \qquad (6)$$
When the data take part in the evaluation method calculation, Tone relaxes $X > 0$ and $Y > 0$ to $X \geq 0$ and $Y \geq 0$. In this case: (1) for each input element, at least two DMUs have positive input values; (2) for each output element, at least two DMUs have positive output values; and (3) for each DMU, the maxima of its input and output values are positive. When an input element of DMU$_0$ in an evaluation link has the value 0, i.e., $x_{i0} = 0$, it is processed in one of two ways. Either the zero input element in that evaluation link is meaningless and is eliminated, or the zero value of the input element of DMU$_0$ is significant (for example, one university in this study has no books); in that case, a very small positive number replaces the value of $x_{i0}$ in the evaluation link to ensure that it plays a role in the efficiency measurement. Similarly, outputs with $y_{r0} = 0$ are processed in the same way.

From the economic perspective, the starting point of court Tibetan Buddhist ceramics in the Qing Dynasty is to realize that their landscape mosaic, coupled with a tried and tested management system, forms a Han-Tibetan cultural system that can benefit the sustainable development of human society, not only in crop yields but also in environmental health, quality of life, and Han-Tibetan cultural services. Hence, we should consider the benefits of Han-Tibetan cultural systems when planning each land-use project in cultural and modern landscapes to ensure that human needs are integrated with strategies for sustainable development.
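As a purely numeric illustration of the slack-based objective in Eq. (6): given hypothetical optimal slacks $s^-$ and $s^+$ for the evaluated DMU, the efficiency score $\rho$ is the ratio of the average relative input reduction to the average relative output shortfall. Solving the underlying linear program for the slacks is omitted here, and all numbers are invented for illustration:

```python
# Evaluated DMU's inputs (m = 2) and outputs (h = 2); values are invented.
x0 = [10.0, 20.0]
y0 = [5.0, 8.0]
# Hypothetical optimal slacks from the (omitted) linear program of Eq. (6).
s_minus = [1.0, 2.0]       # input slacks  s^-
s_plus = [0.5, 0.0]        # output slacks s^+

m, h = len(x0), len(y0)
num = 1 - (1 / m) * sum(s / x for s, x in zip(s_minus, x0))   # numerator of Eq. (6)
den = 1 + (1 / h) * sum(s / y for s, y in zip(s_plus, y0))    # denominator of Eq. (6)
rho = num / den
print(round(rho, 4))
```

With zero slacks the score is exactly 1 (the DMU lies on the efficient frontier); any positive slack strictly lowers $\rho$, which matches the strict monotonicity property stated above.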
5 Conclusions

With its distinct national style and unique artistic charm, Tibetan Buddhist art is an integral part of the treasury of Chinese art and embodies the wisdom of the Tibetan people. Besides political and economic relations, the profound influence of Tibetan Buddhism in the royal court is also shown in close cultural ties. The collection of Tibetan Buddhist-style porcelains in the Qing palace suggests that, as minority rulers who invaded and conquered the Central Plains, the Qing emperors accepted Tibetan culture even as they fully embraced Han culture. All of this shows that the close relationship between Tibet and the hinterland, and between the Tibetan people and the people of the whole country, has a long history reaching back to ancient times.
References
1. Xue, Z., Yuan, L.: Tibetan Buddhist heritage in the Forbidden City. China Today (5), 66–69 (2017)
2. Haynes, S.F.: The taming of the demons: violence and liberation in Tibetan Buddhism. J. Buddhist Ethics 20(1), 177–184 (2013)
3. Kolås, Å.: The violence of liberation: gender and Tibetan Buddhist revival in post-Mao China. Religious Stud. Rev. 16(3), 685–686 (2010)
4. Velho, N., Laurance, W.F.: Hunting practices of an Indo-Tibetan Buddhist tribe in Arunachal Pradesh, north-east India. Oryx 47(3), 389–392 (2013)
5. Zhang, B.: On illnesses and treatments of scholar officials in the Qing Dynasty from the perspective of Dou Ke-qin, a neo-Confucianist philosopher. Zhonghua Yi Shi Za Zhi 43(6), 331–336 (2013)
6. Ma, W.: Imperial Illusions: Crossing Pictorial Boundaries in the Qing Palaces by Kristina Kleutghen. China Rev. Int. 23(2), 168–173 (2016)
7. Sharapan, M., Härkönen, M.: Teacher-student relations in two Tibetan Buddhist groups in Helsinki. Contemp. Buddhism (2), 1–18 (2017)
8. Shen, S.: Charlene E. Makley: The violence of liberation: gender and Tibetan Buddhist revival in post-Mao China. J. Han Polit. Sci. 15(2), 207–208 (2010)
9. Henrion-Dourcy, I.: The violence of liberation: gender and Tibetan Buddhist revival in post-Mao China. By Charlene Makley. J. Asian Stud. 69(1), 685–686 (2010)
10. Chong, L.S.: Tibetan Buddhist vocal music: analysis of the phet in chod dbyangs. Asian Music 42(1), 54–84 (2011)
The Analysis of the Cultural Changes of National Traditional Sports in the Structure Model of Achievement Motivation

Yuhua Zhang

School of Physical Education, Nanchang University, Nanchang 330031, Jiangxi, China
[email protected]
Abstract. The changes of national traditional sports culture include changes in ethnic sports and changes in thinking; judged against the development trend of sports culture as a whole, these changes tend toward progress. This paper presents a method for analyzing the cultural changes of traditional national sports based on the achievement motivation structure model. The method adopts optimal foraging theory to improve the way sports culture change is modeled, analyzes national traditional sports culture through the achievement motivation structure model, and learns the national traditional sports culture using a chaos algorithm and a reverse learning algorithm, thereby realizing the analysis of changes in traditional ethnic sports culture. Experimental results show that the model has good convergence accuracy and computational speed and is especially suitable for the analysis of traditional sports culture changes.

Keywords: Achievement motivation structure model · Optimal foraging theory · National traditional sports culture · Change
1 Introduction

The structure model (Structural Model, SM) was proposed to solve function optimization problems and to make up for the deficiencies of traditional statistical methods; it has become an important tool for multivariate data analysis. It has the advantages of simple operation, few parameters to set, fast convergence, and high convergence precision, and has gradually been applied to intelligent optimization, pattern recognition, industrial control, image recognition, neural network optimization, clustering analysis, and other areas [1, 2]. Democratic traditional sports culture is a sports cultural heritage handed down through a long history; it has the distinct external features of sports culture and is an indispensable element of people's physical life [3]. At the same time, because its value orientation is deposited in people's hearts and mental standards, it is a force with an internal controlling effect and strong permeating characteristics, directly affecting the sports culture and sports rules that people choose [4]. There are three main characteristics of democratic

© The Editor(s) (if applicable) and The Author(s), under exclusive license to Springer Nature Singapore Pte Ltd. 2021
M. Atiquzzaman et al. (Eds.): BDCPS 2020, AISC 1303, pp. 648–655, 2021. https://doi.org/10.1007/978-981-33-4572-0_93
traditional sports culture. Firstly, it has great inertia and social force. Because of the historical depth and extensive social foundation of democratic traditional sports culture, most people have always viewed it with trusting eyes [5, 6]. Sometimes social development outpaces democratic traditional sports culture, but the culture continues to exist for a long time, until the understanding and psychological tendencies of the vast majority of people change radically. When most people in a society incline toward a democratic traditional sports culture, the feeling of this trend spreads rapidly between people through mutual infection, mutual comparison, and mutual encouragement, so that democratic traditional sports culture shows a powerful social force [7]. Secondly, democratic traditional sports culture has relative stability; an important reason is that it carries special value for the people living in a certain period and a certain range [8]. As a carrier of traditional sports, culture is transmitted and inherited through historical vicissitudes, and the resulting cultural change is faster than a purely genetic process, so cultural change analysis is a good way to study national traditional sports culture. Many scholars' studies of cultural change analysis methods concentrate mainly on the selection mechanism of cultural change. This article embeds the idea of the achievement motivation structure model into the SM and proposes an analysis method for national traditional sports cultural change based on a fused achievement motivation structure model (Structural Model of Achievement Motivation, SMAM). Firstly, the whole achievement motivation structure model adopts achievement motivation theory. Secondly, the development of national traditional sports culture is divided into cultural change, cultural inheritance, and cultural evolution; a mutation strategy and cloud model theory are introduced, together with difference and reverse chaos theory, to improve the analysis of national traditional sports cultural change. Finally, an influence function completes the storage and dissemination of traditional sports culture among the ethnic cultures.
2 Achievement Motivation Structure Model

The achievement motivation structure model of the SMAM algorithm can be divided into three parts: the leading structure model, the following structure model, and the reconnaissance structure model. Taking a minimization problem as an example, the specific optimization process is as follows. According to Eq. (1), N national traditional sports cultures form the initial model:

$$x_i = x_i^{\min} + \mathrm{rand}() \left( x_i^{\max} - x_i^{\min} \right), \quad i = 1, 2, \ldots, N \qquad (1)$$

In the formula, rand() is a random number on the interval [0, 1], and $x_i^{\max}$ and $x_i^{\min}$ are the upper and lower limits of the solution culture.
According to the fitness value of traditional culture in the model, the former N/2 components were used to lead the structure model, and then N/2 was formed to follow the structural model. In order to guide the traditional culture of traditional culture in the model of the current t-generation, select ethnic traditional sports culture at random and cross-search to generate new sports culture according to formula (2): 0
x_i'(t) = x_i(t) + rand()·(x_i(t) − x_k(t))    (2)
In the formula, rand() is a random number on the interval [−1, 1]. According to formula (3), the better candidate is kept in the leading structure model:

x_i(t+1) = x_i'(t),  if f(x_i'(t)) ≤ f(x_i(t));
x_i(t+1) = x_i(t),   if f(x_i'(t)) > f(x_i(t))    (3)
Each member of the following structure model chooses a leader in the leading structure model according to the selection probability of formula (4), then searches for a new position in that leader's neighborhood according to formula (2), generating a new sports culture for the following structure model:

P_i = fit_i / Σ_{i=1}^{N/2} fit_i    (4)
To avoid loss of diversity in late iterations of the change model, the SM converts any national traditional sports culture whose fitness has not improved for a given number of consecutive generations into the reconnaissance structure model and regenerates it according to formula (1). Through the alternating search of the leading, following and reconnaissance structure models, the cycle is repeated until the algorithm reaches the maximum number of iterations or the model error reaches the expected precision, and the optimal solution is returned.
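The three-phase loop described above can be sketched as follows. This is an illustrative minimal implementation of the lead/follow/scout search under formulas (1)–(4), not the authors' code; the test objective, population size, stagnation limit and the 1/(1+fit) transform used to turn minimization fitness into roulette weights are all assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

def smam_optimize(f, xmin, xmax, n=20, limit=10, max_iter=200):
    """Minimize f over [xmin, xmax]^d with a lead/follow/scout search
    in the style of formulas (1)-(4).  Illustrative sketch only."""
    d = len(xmin)
    # formula (1): initial population of candidate "cultures"
    x = xmin + rng.random((n, d)) * (xmax - xmin)
    fit = np.array([f(xi) for xi in x])
    stall = np.zeros(n, dtype=int)           # generations without improvement

    def neighbour(i):
        # formula (2): cross-search around x_i with a random partner x_k
        k = rng.choice([j for j in range(n) if j != i])
        return x[i] + rng.uniform(-1, 1, d) * (x[i] - x[k])

    for _ in range(max_iter):
        # leading structure model: each candidate searches its neighbourhood,
        # formula (3) keeps the better of the two (greedy selection)
        for i in range(n):
            xn = neighbour(i)
            fn = f(xn)
            if fn < fit[i]:
                x[i], fit[i], stall[i] = xn, fn, 0
            else:
                stall[i] += 1
        # following structure model: roulette choice of leaders, formula (4)
        lead = np.argsort(fit)[: n // 2]
        p = 1.0 / (1.0 + fit[lead])          # assumed minimization transform
        p /= p.sum()
        for _ in range(n // 2):
            i = rng.choice(lead, p=p)
            xn = neighbour(i)
            fn = f(xn)
            if fn < fit[i]:
                x[i], fit[i], stall[i] = xn, fn, 0
        # reconnaissance structure model: reinitialize stagnant candidates
        for i in np.where(stall > limit)[0]:
            x[i] = xmin + rng.random(d) * (xmax - xmin)
            fit[i] = f(x[i])
            stall[i] = 0
    return x[np.argmin(fit)], float(fit.min())

best, val = smam_optimize(lambda v: float(np.sum(v ** 2)),
                          np.array([-5.0, -5.0]), np.array([5.0, 5.0]))
```

On a simple sphere objective the loop converges toward the origin, which is enough to check the mechanics of the three phases.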
3 Analysis of Cultural Changes of Ethnic Traditional Sports Under SMAM

From the perspective of SMAM, analysis methods that conform to the requirements of national traditional sports cultural change can be embedded into the framework of the national traditional sports culture change method. This paper divides the process into the cultural inheritance, cultural change and cultural evolution of national traditional sports, completes the change of national traditional sports culture through improved change modes, and uses an influence function for the spread of culture among the various national traditional sports
The Analysis of the Cultural Changes of National Traditional Sports
cultures and their inheritance, thus accelerating the optimization rate and improving the performance and adaptability of the algorithm.

3.1 The Changing Mode of National Traditional Sports Culture
The leading structural equation in the structural model of achievement motivation holds the optimal subset in the change of national traditional sports culture, so it is regarded as the cultural inheritance of national traditional sports. The classic change mode of the leading structural equation in the SM is to select one national traditional sports culture and cross it with another. In this way two cultures may both be chosen to change, or the same culture may be crossed with several others. Although the random factor preserves the diversity of new cultures, the direction of change is relatively fixed and convergence is slow. Moreover, in each iteration the optimal culture must find another culture to cross with; although this may occasionally discover a better culture, the operation is in essence a local search. Two improvements are therefore made to the change mode of the leading structural equation. Differential evolution is a cultural change analysis method originally proposed for Chebyshev polynomial fitting; it uses floating-point vector coding and random search, with the advantages of a simple principle and few control parameters. Using the best-ranked difference mutation strategy, the optimal change rule of national traditional sports culture is:
x_i' = x_b + F·(x_r1 − x_r2)    (5)
In the formula, x_r1 and x_r2 are two different national traditional sports cultures selected at random and distinct from the culture being changed; the three cultures are ranked by fitness, the best being taken as x_b. F is the scaling factor. If the fitness difference is small, the two cultures are close to each other, and F should take a larger value to prevent the disturbance from being too small; if the fitness difference is large, the two cultures are far apart, and F should take a smaller value to limit the disturbance. F is determined as follows:

F_i = F_l + (F_u − F_l)·(f_r1 − f_b)/(f_r2 − f_b)    (6)
In the formula, F_u and F_l are the upper and lower limits of the scaling factor, and f_b, f_r1 and f_r2 are the fitness values of x_b, x_r1 and x_r2 respectively.
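Formulas (5) and (6) can be illustrated with a short sketch; the bounds F_l = 0.4 and F_u = 0.9 and the fitness values below are hypothetical, not values from the paper.

```python
import numpy as np

def adaptive_f(f_b, f_r1, f_r2, f_lower=0.4, f_upper=0.9):
    """Scaling factor of formula (6): F grows toward F_u when the two
    donor cultures are close in fitness and shrinks toward F_l when
    they are far apart.  The bounds are illustrative assumptions."""
    if f_r2 == f_b:                      # degenerate case: fall back to F_l
        return f_lower
    ratio = (f_r1 - f_b) / (f_r2 - f_b)
    return f_lower + (f_upper - f_lower) * ratio

def mutate(x_b, x_r1, x_r2, F):
    """Best-guided difference mutation of formula (5)."""
    return x_b + F * (x_r1 - x_r2)

x_best = np.array([1.0, 2.0])
x_r1, x_r2 = np.array([1.5, 2.5]), np.array([0.5, 1.5])
F = adaptive_f(f_b=0.1, f_r1=0.4, f_r2=0.7)   # hypothetical fitness values
trial = mutate(x_best, x_r1, x_r2, F)
```

With these numbers the ratio is 0.5, so F = 0.4 + 0.5·0.5 = 0.65 and the trial vector is the best culture shifted by 0.65 times the donor difference.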
3.2 Analysis of Cultural Changes in Traditional Sports
The following structural equation forms the cultural vicissitudes of national traditional sports culture. The change mechanism of the SM makes it perform poorly on high-dimensional continuous optimization problems, mainly because the following structural equation greedily prefers the best national traditional sports cultures: although this accelerates convergence, it also increases the probability of falling into a local optimum when solving high-dimensional problems. This accords with optimal foraging theory: to obtain the optimal foraging effect, cultures tend to obtain more food at a lower energy cost in the process of feeding. The foraging efficiency between two traditional sports cultures is defined as:

F_ik = (f(x_i) − f(x_k)) / d_ik,  F_ki = −F_ik    (7)
The culture with the greatest efficiency value over the traditional sports culture set is taken as the change target; the cultural inheritance still chooses a national traditional sports culture in the original selection mode. To enrich the reference information of the change, the t-th generation of national traditional sports culture changes according to:
x_i'(t) = x_i(t) + rand()·(x_i(t) − x_k(t)) + w·(x_i(t) − x_nx(t))    (8)
In the formula, rand() is a random number on [0, 1] and w is the attraction coefficient. Reverse (opposition-based) learning theory can give the algorithm a better convergence rate and optimization performance. Chaotic mapping makes the generation of national traditional sports cultures iterative, random and diverse, and can effectively search for the global optimum outside the convergence zone. The k-order Chebyshev chaotic mapping is adopted to complete the variation of national traditional sports culture, and the fitness is calculated according to formula (9) to select the optimal national traditional sports culture for the change analysis.
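The k-order Chebyshev chaotic map mentioned above is conventionally x_{n+1} = cos(k·arccos(x_n)) on [−1, 1]; a small generator under that assumption, with the order k and the seed chosen arbitrarily:

```python
import math

def chebyshev_sequence(x0, k=4, length=10):
    """k-order Chebyshev chaotic map x_{n+1} = cos(k * arccos(x_n)) on [-1, 1].
    Only illustrates the chaotic variation step; k and x0 are hypothetical."""
    assert -1.0 <= x0 <= 1.0
    seq = [x0]
    for _ in range(length - 1):
        # clamp guards against tiny floating-point excursions outside [-1, 1]
        x = max(-1.0, min(1.0, seq[-1]))
        seq.append(math.cos(k * math.acos(x)))
    return seq

seq = chebyshev_sequence(0.3, k=4, length=8)
```

The resulting sequence stays in [−1, 1] and, for k ≥ 2, wanders chaotically, which is what gives the variation step its diversity.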
0.000
Akaike crit. (AIC)    156342.042
Bayesian crit. (BIC)  156383.520
In the regression of model (2), the return rate of an individual stock is the dependent variable and investor attention is the independent variable. The results show that investor attention under the epidemic has a significant positive impact on stock returns (P < 0.01), indicating that increased investor attention to the epidemic raises the return rate of pharmaceutical stocks. Hypothesis H1 is supported.

4.2 The Impact of Investor Attention on the Volume of Pharmaceutical Shares
In order to test the impact of investors' attention on the turnover of pharmaceutical stocks, this study replaced the explained variable on the basis of model (1) and established the following model (Table 2):
Y. Xia and W. Hu
Turn_{i,t} = α_0 + β_1·MRET_{i,t} + β_2·Size_{i,t} + β_3·BM_{i,t} + β_4·SVI_{i,t} + ε_{i,t}    (3)
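Model (3), like the study's other models, is a linear regression, so its estimation can be sketched with ordinary least squares. The panel below is synthetic, generated only for illustration (the study's actual firm-day data are not reproduced here), with coefficients seeded from Table 2 purely so the recovery is visible.

```python
import numpy as np

rng = np.random.default_rng(42)

# Synthetic stand-in for the firm-day panel: columns MRET, Size, BM, SVI.
n = 500
X = rng.normal(size=(n, 4))
beta_true = np.array([0.028, -0.042, 0.530, 0.027])   # values echoing Table 2
alpha_true = 0.583
y = alpha_true + X @ beta_true + rng.normal(scale=0.1, size=n)

# OLS: prepend an intercept column and solve by least squares.
A = np.column_stack([np.ones(n), X])
coef, *_ = np.linalg.lstsq(A, y, rcond=None)
alpha_hat, beta_hat = coef[0], coef[1:]
```

With 500 observations and modest noise, the estimated intercept and slopes land close to the generating values, which is the mechanical content of fitting models (1)–(4).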
Table 2. Descriptive statistical results of the variable Turn

Turn       Coef.    St.Err.  t-value  p-value  95% Conf  Interval  Sig
Ln SVI     0.027    0.008    3.18     0.001    0.010     0.043     ***
MRET       0.028    0.012    2.27     0.023    0.004     0.052     **
Ln BM      0.530    0.487    1.09     0.276    -0.424    1.484
Ln Size    -0.042   0.140    -0.30    0.766    -0.317    0.234
Constant   0.583    3.195    0.18     0.855    -5.680    6.845

Mean dependent var   0.108        SD dependent var      2.995
R-squared            0.001        Number of obs         29600.000
F-test               3.832        Prob > F              0.000
Akaike crit. (AIC)   148756.436   Bayesian crit. (BIC)  148797.914
In the regression of model (3), the trading volume of an individual stock is the dependent variable and investor attention is the independent variable. The results show that investors' attention to the COVID-19 outbreak leads to a short-term increase in the trading volume of pharmaceutical stocks (p < 0.01): the higher investors' attention to the epidemic, the more it is perceived to affect pharmaceutical companies' production and capacity, which leads to higher trading volume. Hypothesis H2 is supported.

4.3 The Effect of Investor Attention on the Amplitude of Pharmaceutical Shares
In order to test the influence of investors' attention on the amplitude of pharmaceutical stocks, this study replaced the explained variable on the basis of model (1) and established the following model (Table 3):

SA_{i,t} = α_0 + β_1·MRET_{i,t} + β_2·Size_{i,t} + β_3·BM_{i,t} + β_4·SVI_{i,t} + ε_{i,t}    (4)
In the regression of model (4), the amplitude of an individual stock is the dependent variable and investors' attention is the independent variable. The results show that investors' attention to the COVID-19 epidemic has a significant positive impact on stock amplitude (P < 0.01). Hypothesis H3 is supported.
Impact of COVID-19 Attention on Pharmaceutical Stock Prices
Table 3. Descriptive statistical results of variable SA

SA         Coef.    St.Err.  t-value  p-value  95% Conf  Interval  Sig
ln SVI     0.003    0.000    43.53    0.000    0.003     0.003     ***
MRET       -0.003   0.000    -27.57   0.000    -0.003    -0.002    ***
Ln BM      -0.081   0.004    -21.50   0.000    -0.089    -0.074    ***
ln Size    0.030    0.001    27.09    0.000    0.027     0.032     ***
Constant   -0.632   0.025    -25.46   0.000    -0.680    -0.583    ***

Mean dependent var   0.040          SD dependent var      0.028
R-squared            0.204          Number of obs         29600.000
F-test               1884.395       Prob > F              0.000
Akaike crit. (AIC)   -138839.587    Bayesian crit. (BIC)  -138798.109
5 Conclusion

This paper studies changes in the pharmaceutical stock market based on investor attention under emergencies. The results show the following. First, the Baidu Index can reflect investor attention to a certain extent; compared with traditional proxy variables and media proxy variables, it reflects investor attention more accurately and more promptly. Second, an increase in investor attention is accompanied by increased stock market volatility: in the short term, increased investor attention has a significant positive impact on stock returns, trading volume and amplitude. Third, the COVID-19 outbreak affected the overall stock market, but its impact was most significant in the pharmaceutical sector.

Acknowledgements. This work was supported by the National Natural Science Foundation of China (NSFC) under Grant number 71971169.
References
1. Malkiel, B.G., Fama, E.F.: Efficient capital markets: a review of theory and empirical work. J. Finance 25(2), 383–417 (1970)
2. Huberman, G., Regev, T.: Contagious speculation and a cure for cancer: a nonevent that made stock prices soar. J. Finance 56(1), 387–396 (2001)
3. Hirshleifer, D., Teoh, S.H.: Limited attention, information disclosure, and financial reporting. J. Account. Econ. 36(1–3), 337–386 (2003)
4. Simon, H.A.: A behavioral model of rational choice. Q. J. Econ. 69(1), 99–118 (1955)
5. Da, Z., Engelberg, J., Gao, P.: In search of attention. J. Finance 66(5), 1461–1499 (2011)
6. Wang, C., Xu, L.: Investors pay attention to new research progress. J. Shanghai Univ. Finance Econ. 11(05), 90–97 (2009). (in Chinese)
7. Yu, Q., Zhang, B.: Investors' limited attention and stock returns: an empirical study with Baidu Index as the focus. Financial Res. (08), 152–165 (2012). (in Chinese)
8. McDonnell, C.: Global stock outlook remains positive under the epidemic. 21st Century Economics Report, no. 004, 21 February 2020
9. Engelberg, J., Sasseville, C., Williams, J.: Market madness? The case of mad money. Manage. Sci. 58(2), 351–364 (2012)
10. Kahneman, D.: Attention and Effort. Prentice-Hall, Englewood Cliffs (1973)
11. Yi, Z., Liu, Y., Xu, S., Wang, S.: Empirical analysis of SARS impact on Chinese stock market. Manag. Rev. 05, 3–7+63 (2003). (in Chinese)
Influencing Factors of Logistics Cost Based on Grey Correlativity Analysis in Transportation

Wei Bai1(&), Xiaoyu Pang1, and Wei Zhang2
1 China Academy of Transportation Science, Beijing 100013, China
[email protected]
2 Ningxia Construction Investment Group Co., Ltd., Yinchuan 750000, China
Abstract. Transportation in China has developed rapidly from "bottleneck restriction" through "basic mitigation" and "preliminary adaptation" to "moderate advancement", and already possesses the basic conditions to "lead first". Transportation's promotion of cost reduction and efficiency improvement in the logistics industry is not a unilateral benefit but a mutually reinforcing process; in particular, the development of the logistics industry is of great benefit to comprehensive transportation construction. This paper takes the transportation link of logistics as the research object, analyzes how the various factors of the transportation industry influence cost reduction and efficiency in the logistics industry, and applies the grey correlativity analysis method to those influencing factors.

Keywords: Logistics cost · Transportation · Grey correlativity analysis
1 Introduction

Reducing logistics costs is an objective requirement of industrial transformation and upgrading after China's economy has entered the new normal, and it is one of the key tasks of China's recent economic work. Transportation is the logistics link with the largest total volume, the most connections to other logistics links and the widest service market. It plays a basic and leading role in the development of the logistics industry and is a pioneer of national economic and social development; promoting cost reduction and efficiency improvement in the logistics industry is both the proper duty of transportation and an inevitable choice for the development of the industry itself. At present, the measurement of logistics cost is not objective. China usually uses the ratio of total logistics cost to GDP to measure logistics cost [1], which can be understood as the logistics cost paid per unit of GDP. This indicator originated in the United States and is used worldwide. However, the United States uses it to measure the prosperity of the logistics industry: during the financial crisis the indicator once fell below 8%, causing the United States to worry about the decline of the logistics industry and to strive to raise it again to restore the industry's prosperity [2]. It can be seen that the ideal state of "the ratio of total logistics costs to GDP" lies in a reasonable range that
© The Editor(s) (if applicable) and The Author(s), under exclusive license to Springer Nature Singapore Pte Ltd. 2021 M. Atiquzzaman et al. (Eds.): BDCPS 2020, AISC 1303, pp. 1221–1227, 2021. https://doi.org/10.1007/978-981-33-4572-0_175
conforms to a country's national conditions. A value that is too high indicates high social logistics expenditure and low economic efficiency; one that is too low reflects decline and depression of the logistics industry. However, it is not objective to interpret China's logistics costs simply through "total logistics cost as a percentage of GDP", because this cannot reveal the reasons that affect logistics costs within the logistics process. Therefore, this paper analyzes the cost influencing factors of logistics from the perspective of transportation, identifies impact indicators from the causes affecting logistics costs, quantitatively analyzes the indicators with the grey correlativity analysis method, and proposes suggestions for transportation to promote cost reduction and efficiency improvement in the logistics industry.
2 Grey Correlativity Analysis Theory

The grey correlativity analysis method is based on the degree of similarity or dissimilarity between the development trends of factors [3]. Its specific steps are as follows:

(1) Let n data sequences form an m × n matrix, where m is the number of indicators [4], as shown in Eq. 1:

(D_1^0, D_2^0, …, D_n^0) = | d_1^0(1)  d_2^0(1)  …  d_n^0(1) |
                            | d_1^0(2)  d_2^0(2)  …  d_n^0(2) |
                            |    …         …      …     …     |
                            | d_1^0(m)  d_2^0(m)  …  d_n^0(m) |    (1)
(2) Determine the reference sequence, as shown in Eq. 2:

D_0^0 = (d_0^0(1), d_0^0(2), …, d_0^0(m))    (2)
(3) Make the indicator data dimensionless.
(4) Calculate, element by element, the absolute difference between each evaluated indicator sequence (comparison sequence) and the reference sequence.
(5) Calculate the correlation coefficients according to Eq. 3 [5]:

ξ_i(k) = (min_i min_k |d_0(k) − d_i(k)| + ρ·max_i max_k |d_0(k) − d_i(k)|) / (|d_0(k) − d_i(k)| + ρ·max_i max_k |d_0(k) − d_i(k)|),  k = 1, …, m    (3)
where ρ is the resolution coefficient, 0 < ρ < 1; usually ρ = 0.5.
(6) Calculate the relational grade. The relational grade [6] is shown in Eq. 4:

R_0i = (1/m)·Σ_{k=1}^{m} ξ_i(k)    (4)
(7) According to the relational grade of each observed object, obtain the analysis result.
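Steps (1)–(7) can be collected into one routine. The sketch below assumes the initial-value normalization used later in Sect. 4 (dividing each series by its first element) and uses illustrative data shaped like Table 1; it is not the authors' implementation.

```python
import numpy as np

def grey_relational_grades(reference, comparisons, rho=0.5):
    """Steps (1)-(7): initial-value normalization, absolute differences,
    correlation coefficients of Eq. (3) and relational grades of Eq. (4).
    `reference` has shape (m,), `comparisons` shape (n, m); rho is the
    resolution coefficient, usually 0.5."""
    ref = np.asarray(reference, dtype=float)
    cmp_ = np.asarray(comparisons, dtype=float)
    # step (3): dimensionless - divide each series by its first value
    ref = ref / ref[0]
    cmp_ = cmp_ / cmp_[:, :1]
    # step (4): absolute differences to the reference sequence
    delta = np.abs(cmp_ - ref)
    dmin, dmax = delta.min(), delta.max()
    # step (5): Eq. (3) correlation coefficients
    xi = (dmin + rho * dmax) / (delta + rho * dmax)
    # step (6): Eq. (4) relational grade = mean coefficient over k
    return xi.mean(axis=1)

ref = [7, 7, 7, 7, 7]                              # e.g. logistics cost X0
series = [[26100, 26575, 25700, 26349, 27931],     # e.g. railway volume
          [54, 55, 54, 52, 53]]                    # e.g. air volume
grades = grey_relational_grades(ref, series)
order = np.argsort(grades)[::-1]                   # step (7): ranking
```

After normalization the constant reference becomes a flat series of ones, so the comparison series whose relative fluctuations are smallest earns the larger grade; with these sample numbers that is the second series.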
3 Logistics Cost Impact Indicator Selection

The logistics industry is a complex industry that is highly permeable and crosses industries, regions and sectors; it is also a field of deep integration with "Internet+". Many factors may therefore increase logistics and transportation costs, such as the state of China's market economy [7] and the transportation costs of the various modes of transport. Through literature study and field research, this paper summarizes the three factors that most affect China's logistics costs.

3.1 Market Economy
After the Chinese economy entered the new normal, the room for enterprises to expand profits by adding capacity shrank dramatically and downward pressure on the economy increased; most of the real economy faces the double test of overcapacity and cost pressure [8]. In 2015, the total profit of industrial enterprises above designated size fell by 2.3% year on year, the first decline in many years. In this context, although China's logistics performance continues to improve, the real economy has moved from the "scarce economy" era into the "excess economy" era, and in an excess economy it is far more sensitive to logistics costs than it was under scarcity.

3.2 Transport Organization
From the perspective of logistics users, high logistics cost means that the total logistics cost, the product of unit logistics cost and total logistics volume, is large [9]. Surveys find that although China's unit logistics cost is lower than that of other countries, the many links and low efficiency of transport organization ("segmentation", "piecing together" and "fighting separately") make actual logistics activities non-intensive and inefficient. In particular, because the transportation chain "cannot be timed, the process is opaque, and the risk is uncontrollable", the safety stocks of upstream enterprises and the goods in warehouses are difficult to reduce, which also raises storage costs and management expenses.

3.3 Transportation Cost
The development of transportation and the development of the logistics industry are complementary. Accelerating transportation construction improves traffic capacity and transport efficiency, thereby reducing logistics costs. Taking highways as an example, in recent years the growth rates of China's logistics industry and of highway mileage have coincided in their peaks and valleys [10], basically achieving "same-frequency resonance". However, because the toll
crossing fee is one of the main cost components of trunk transportation enterprises, and trunk transportation enterprises and long-distance freight drivers have long been in a "low-profit era", expressway tolls raise transportation costs and, in turn, logistics costs. In summary, this paper selects the economy-related total social retail, the freight volumes of the various transport modes related to transport organization, and the freight unit prices of the various transport modes as the indicators. The indicator structure is shown in Fig. 1.
Fig. 1. The indicator structure: logistics cost impact indicators comprise market economy (total social retail X1); transport organization (railway freight volume X2, road freight volume X3, water freight volume X4, air freight volume X5); and transportation cost (rail freight unit price X6, road freight unit price X7, water freight unit price X8, air freight unit price X9).
4 Case Analysis

This paper takes the relevant indicator data from April 2016 to February 2019 provided by the National Bureau of Statistics as the research object and carries out an example analysis. The influencing factor data are shown in Table 1, in which X0 represents logistics cost in yuan/(kg·km); the unit of X1 is 100 million yuan; the units of X2, X3, X4 and X5 are 10,000 tons; and the units of X6, X7, X8 and X9 are yuan/(kg·km). The reference series is logistics cost X0, and the comparison series are total social retail X1, railway freight volume X2, road freight volume X3, water freight volume X4, air freight volume X5, rail freight unit price X6, road freight unit price X7, water freight unit price X8 and air freight unit price X9. According to grey theory, the units and initial magnitudes of these series differ, so dimensionless initialization is performed: each series is divided by its first value. The calculated correlation coefficients are shown in Table 2, from which the relational grade of each sample can be calculated; the result is shown in Fig. 2.
Table 1. 2016.04–2019.02 logistics cost influencing factors data

Time  2016.04  2016.05  2016.06  2016.07  2016.08  …  2018.10  2018.11  2018.12  2019.01  2019.02
X0    7        7        7        7        7        …
X1    24645.8  26610.7  26857.4  26827.4  27539.6  …  35534.4  35259.7  35893.5  47551.7  66064
X2    26100    26575    25700    26349    27931    …  35460    35081    36756    36756    29769
X3    283300   289967   284700   283229   294982   …  362870   372017   318985   318985   167329
X4    50300    52829    54900    53428    54230    …  64022    65102    57264    57264    49404
X5    54       55       54       52       53       …  63       66       67       67       38
X6    4        4        4        4        4        …  5        5        5        3        3
X7    10       10       10       10       10       …  9        9        9        8        8
X8    3        3        3        3        3        …  2        2        2        2        2
X9    23       23       23       23       23       …  20       20       20       18       18
Table 2. Correlation coefficient

          R01    R02    R03    R04     …  R08    R09
R0i(1)    0.94   0.53   0.38   0.5272  …  1.00   0.85
R0i(2)    0.90   0.55   0.43   0.54    …  1.00   0.24
R0i(3)    0.53   0.53   0.47   0.52    …  1.00   0.24
R0i(4)    0.85   0.53   0.44   0.49    …  1.00   0.88
…         …      …      …      …       …  …      …
R0i(32)   0.77   1.00   1.00   0.93    …  0.55   0.78
R0i(33)   1.00   0.66   0.54   1.00    …  0.55   0.73
R0i(34)   1.00   0.66   0.54   1.00    …  0.33   0.56
R0i(35)   0.84   0.73   0.77   0.73    …  0.73   0.67
Fig. 2. Sample correlation analysis
Calculate the weight of each indicator based on the sample correlation. The descending order of the final calculated weights is shown in Fig. 3.
Fig. 3. The indicator weights
As can be seen from Fig. 3, the correlation analysis gives R02 > R03 > R01 > R07 > R06 > R04 > R09 > R05 > R08, so all nine listed transportation indicators show a relatively high correlation with logistics costs. The three most relevant indicators are railway freight volume, road freight volume and total social retail, followed by road freight unit price, rail freight unit price and water freight volume; the least relevant are air freight unit price, air freight volume and water freight unit price. From the data analysis, the most important factor affecting logistics costs is freight transport capacity, mainly railway and road transport: because the base volumes of railway and road transport are very large, even a small growth rate has a certain impact on logistics costs. In addition, the transportation costs of the various modes also affect logistics costs; with the decline in freight rates in recent years, logistics costs have declined as well.
5 Conclusion

This paper analyzes the factors affecting logistics costs, discussing nine influencing factors in total: total social retail, railway freight volume, road freight volume, water freight volume, air freight volume, and the rail, road, water and air freight unit prices. Using the grey analysis method, each indicator is compared with logistics cost; the indicators with the highest impact on logistics cost turn out to be the freight volumes of railways and highways. If the transportation volume of railways and highways increases, the transportation cost of logistics can be reduced and the overall efficiency of logistics improved.
References
1. Song, H., Wang, L., Min, H.: The status and development of logistics cost management: evidence from Mainland China. Benchmarking Int. J. 16(5), 657–670 (2009)
2. Wang, X., Yang, Y., Wang, J.: Influencing factors and paths of transportation to promote logistics cost reduction and efficiency. Integr. Transp. 40(4), 73–78 (2018). (in Chinese)
3. Yin, Y.M., et al.: Prediction of the vertical vibration of ship hull based on grey relational analysis and SVM method. J. Mar. Sci. Technol. 20(3), 467–474 (2015)
4. Wei, G.: Grey relational analysis model for dynamic hybrid multiple attribute decision making. Knowl.-Based Syst. 24(5), 672–679 (2011)
5. Sun, C.C.: Combining grey relation analysis and entropy model for evaluating the operational performance: an empirical study. Qual. Quant. 48(3), 1589–1600 (2014)
6. Malekpoor, H., et al.: Integrated grey relational analysis and multi objective grey linear programming for sustainable electricity generation planning. Ann. Oper. Res. 8, 1–29 (2017)
7. Wei, C.: Analysis of the impact of logistics industry development on international trade based on gray correlation analysis method: taking Tianjin Binhai new area as an example. Logist. Technol. 5, 257–259 (2014). (in Chinese)
8. Xun, G., Cortese, C.: A socialist market economy with Chinese characteristics: the accounting annual report of China Mobile. Account. Forum 41(3) (2017)
9. Xu, Y.W., et al.: The analysis of the characteristics and problems on Chinese hub airport ground transportation organization. Huazhong Archit. 24(7), 631–646 (2012). (in Chinese)
10. Li, Z., et al.: Estimating transport costs and trade barriers in China: direct evidence from Chinese agricultural traders. China Econ. Rev. 23(4), 1003–1010 (2012)
Data Analysis on the Relationship of Employees' Stress and Satisfaction Level in a Power Corporation in the Context of the Internet

Yuzhong Liu1,2, Zhiqiang Lin3(&), Zhixin Yang4, Hualiang Li1,2, and Yali Shen1,2
1 Key Laboratory of Occupational Health and Safety of Guangdong Power Grid Co., Ltd., Guangzhou, China
2 Electric Power Research Institute of Guangdong Power Grid Co., Ltd., Guangzhou, China
3 Jingzhou Central Hospital, Jingzhou, China
[email protected]
4 Guangdong Power Grid Co., Ltd., Guangzhou, China
Abstract. This article explores the relationship between employees' stress and satisfaction level in a power corporation and the influence of their mental health on job satisfaction in the context of the Internet. Three assessment scales based on the mobile Internet were used to measure the stress level, mental health level and job satisfaction of a sample of 36 employees of a power corporation in southern China. Chi-square tests, correlation analysis and multiple regression analysis were used in the data analysis. Results indicate that differences exist among types of work in total score and in the obsessive-compulsive, interpersonal sensitivity, depression, anxiety, phobia, paranoia and psychoticism scores. There are pairwise correlations among stress level, mental health and job satisfaction, and the effect of stress level on job satisfaction is greater than that of mental health (P < 0.001). The depression factor of mental health significantly affects job satisfaction, sense of responsibility, satisfaction with working conditions and extrinsic reward level. Somatic symptoms and depressive symptoms affect employees' overall job satisfaction and extrinsic reward level, explaining 40.8% of the variance of overall job satisfaction and 33.5% of the variance of extrinsic rewards, which indicates that somatic and depressive symptoms have significant effects on both.

Keywords: Data analysis · Power corporation employees · Mental health · Job satisfaction · Stress level
© The Editor(s) (if applicable) and The Author(s), under exclusive license to Springer Nature Singapore Pte Ltd. 2021 M. Atiquzzaman et al. (Eds.): BDCPS 2020, AISC 1303, pp. 1228–1234, 2021. https://doi.org/10.1007/978-981-33-4572-0_176
1 Introduction

In the Internet era, with fierce competition and intensifying reform, enterprises in all fields pay increasing attention to employee satisfaction. In recent years, psychological quality training for employees has become an important means of improving employee satisfaction, but there is no scientific consensus on which psychological factors affect it. A review of past studies of job satisfaction in other groups [1, 2] covering enterprise employees, intellectuals, medical staff and taxi drivers identified mental health as another important factor, showing that an individual's mental health can have an important impact on job satisfaction. Although some studies [3–5] have examined the mental health of power corporation employees, few have explored the effect of employees' mental health on job satisfaction in a power corporation. In addition, a number of relevant studies [4, 5], though not using the same stress scale, have found a correlation between stress and job satisfaction, and further work has shown that stress level has a certain predictive effect on job satisfaction. Many scholars have found that stress is one of the important factors affecting mental health, so this paper studies the psychological factors affecting employee satisfaction through the relationships among stress level, mental health and employee satisfaction.
2 Object and Methods

According to the overall distribution of employees in the studied corporation, stratified random sampling was applied to select 48 employees: 16 in managerial posts, 16 in professional and technical posts and 16 in skilled posts. 43 questionnaires were handed out and 36 valid ones retrieved, an effective rate of 75.0%. Among the respondents, 22 were male (61.1%) and 14 female (38.9%); ages ranged from 22 to 58 years and working age from 2 to 42 years. For mental health, the Symptom Checklist-90 (SCL-90) was adopted. The SCL-90 consists of 90 items with large capacity and rich symptom coverage, including feeling, thinking, emotion, consciousness, behavior, living habits, interpersonal relationships, diet and sleep. It is divided into 10 factors reflecting 10 aspects of psychological symptoms: somatization, obsessive-compulsive symptoms, interpersonal sensitivity, depression, anxiety, hostility, phobic anxiety, paranoia, psychoticism and an additional factor (such as diet and sleep). For stress state, the psychological stress test scale (PSTR) was used. The PSTR is based on the theory of psychological stress factors proposed by the psychologist Murray in 1968 and was compiled by the Swiss psychologist Edworth in 1983; it includes 50 items covering physical symptoms and cognitive and emotional reactions, and the frequency of the items is used to evaluate the psychological stress level of the subjects. For job satisfaction, the short form of the Minnesota Satisfaction Questionnaire (MSQ-20), developed by Weiss, Dawis, England and Lofquist (1967), was applied. There were 20 items in this questionnaire, which covered 4
1230
Y. Liu et al.
dimensions: satisfaction with working conditions (6 items), satisfaction with leadership (2 items), responsibility (6 items), and external rewards (6 items). All participants logged on anonymously to the internal network platform of the selected corporation to complete all 3 scales. All data were entered into SPSS 22.0 for statistical processing and analysis.
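The stratified random sampling used to draw the respondents can be sketched as follows. This is a minimal illustration with a made-up roster; the stratum sizes and the `stratified_sample` helper are hypothetical, not the authors' actual procedure:

```python
import random

# Hypothetical roster of (employee_id, post_type) pairs; the real study
# drew from the corporation's own personnel distribution.
roster = [(i, "managerial") for i in range(100)] \
       + [(i + 100, "professional") for i in range(120)] \
       + [(i + 220, "skilled") for i in range(150)]

def stratified_sample(records, per_stratum, seed=0):
    """Draw an equal-sized simple random sample from each stratum."""
    rng = random.Random(seed)
    strata = {}
    for rec in records:
        strata.setdefault(rec[1], []).append(rec)
    sample = []
    for group in strata.values():
        sample.extend(rng.sample(group, per_stratum))
    return sample

sample = stratified_sample(roster, per_stratum=16)
print(len(sample))  # 48 employees, 16 per post type
```

Drawing a fixed quota per stratum, as here, mirrors the 16/16/16 design described above; proportional allocation would instead size each quota by the stratum's share of the workforce.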
3 Results of the Data Analysis
3.1 Status of Work Stress, Mental Health Level and Job Satisfaction
National norms exist for the work stress, mental health and job satisfaction instruments used in this study. In the analysis of the status quo, the employees' scores on the 3 variables were therefore compared against the norm values, and differences across job types were also examined. The mean total stress score was 64.69, below the norm value of 65. The average stress score of skilled workers, however, was 76.20, above both the excessive-stress norm of 65 and the stress-warning norm of 73. Power grid employees thus carry a certain amount of pressure: it is moderate for managerial and professional-technical employees but excessive for skilled employees. The differences in stress scores across job types were not statistically significant (Table 1).

Table 1. The values of employees' stress level (x ± s)

Job type       Total           Management      Professional    Technical
Number         36              9               12              15
Stress score   64.69 ± 35.79   57.78 ± 21.29   55.50 ± 28.74   76.20 ± 45.27
The mean total mental health score was 0.6636, below the norm value of 1.000, and the average scores on every dimension were also below the norm, indicating that the mental health of the power grid company's employees is on the whole satisfactory. It is worth noting that eight indices, namely the total score, obsessive-compulsive symptoms, interpersonal sensitivity, depression, anxiety, phobic anxiety, paranoia and psychoticism, showed no significant differences among the different types of employees (Table 2). The average overall satisfaction score of the sample was 49.35, below the norm of 58; the average responsibility score was 15.22, below the norm of 21; the average working-condition satisfaction score was 15.54, below the norm of 17; the average leader satisfaction score was 5.32, above the norm of 5; and the mean extrinsic-reward satisfaction was 13.25, below the norm of 15. From this it can be seen that the sampled employees' satisfaction is
Data Analysis on the Relationship of Employees’ Stress
1231
Table 2. The scores of employees' working satisfaction level (x ± s)

                                 Total           Management      Professional    Technical
Number                           36              9               12              15
Total scores                     49.35 ± 23.44   55.18 ± 18.95   54.18 ± 24.40   41.98 ± 24.52
Responsibility                   15.22 ± 7.66    17.48 ± 5.29    16.48 ± 8.17    12.86 ± 8.22
Working condition satisfaction   15.54 ± 6.76    17.66 ± 5.26    16.88 ± 6.90    13.20 ± 7.13
Leaders satisfaction             5.32 ± 2.62     5.33 ± 1.48     5.55 ± 2.69     5.13 ± 3.18
External reward scores           13.25 ± 8.02    14.68 ± 8.49    15.26 ± 7.61    10.78 ± 7.92
relatively low: they are comparatively satisfied with their leaders, while their sense of responsibility, satisfaction with working conditions and satisfaction with external rewards are all relatively low.

3.2 The Effect of Job Stress and Mental Health on Job Satisfaction
To explore the effects of work stress and mental health on the job satisfaction of the sampled employees, correlation analysis was first conducted on the total and dimension scores of work stress, mental health and job satisfaction. The results show significant pairwise correlations among the three variables. Among the dimensions of job satisfaction, all except satisfaction with leadership are correlated with job stress and mental health; among the dimensions of mental health, somatization is correlated only with the sense-of-responsibility dimension of satisfaction. On the basis of the correlation analysis, simple linear regression was applied to explore the effects of stress level and mental health status on job satisfaction. The regression on overall job satisfaction shows that job stress has a significant effect, explaining 50% of the variance (ΔR² = 0.500, P < 0.01). Across the dimensions of job satisfaction, stress level has a significant impact on every dimension; the largest effect is on the sense of responsibility, where it explains 55% of the variance (ΔR² = 0.550, P < 0.01), and the smallest is on satisfaction with leaders, where it explains 14.4% (ΔR² = 0.144, P < 0.01). The regression on overall job satisfaction also shows that mental health has a significant effect, explaining 23.4% of the variance (ΔR² = 0.234, P < 0.01). Across the dimensions, mental health significantly affects three of them, responsibility, working-condition satisfaction and extrinsic rewards, explaining 36% of the variance in responsibility (ΔR² = 0.36, P < 0.01), 24.5% in working-condition satisfaction (ΔR² = 0.245, P < 0.01), and 14.4% in extrinsic rewards (ΔR² = 0.144,
Table 3. The effect of job stress and mental health on job satisfaction

Scale    Total satisfaction   Responsibility     Working condition sat.   Leaders satisfaction   External rewards
         b         ΔR²        b        ΔR²       b        ΔR²             b        ΔR²           b       ΔR²
PSTR     −0.46**   0.500**    −0.16**  0.55**    −0.13**  0.484**         −0.03*   0.144*        −0.14*  0.407**
SCL-90   −16.26*   0.234**    −6.59**  0.36**    −4.79*   0.245**         −0.503   0.18          −4.36   0.144

Notes: ** means P < 0.01, * means P < 0.05.
P < 0.01). Mental health status has no significant effect on satisfaction with leaders, consistent with the correlation analysis (Table 3). The multiple linear regression results show a particularly interesting phenomenon: when mental health status is considered as a whole, it significantly affects job satisfaction, yet when its specific factors are examined, most dimensions (obsessive-compulsive symptoms, interpersonal sensitivity, anxiety, hostility, phobic anxiety, paranoia, psychoticism and the additional factor) have no effect on job satisfaction. The depression factor significantly affects job satisfaction, the sense of responsibility, satisfaction with working conditions and the extrinsic-reward level. Together, somatization and depression affect overall job satisfaction and the extrinsic-reward level, explaining 40.8% of the variance in overall job satisfaction (ΔR² = 0.408, P < 0.01) and 33.5% of the variance in extrinsic rewards (ΔR² = 0.335, P < 0.01), which indicates that somatic and depressive symptoms have significant effects on overall job satisfaction and extrinsic rewards.
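The simple linear regressions reported above reduce to ordinary least squares, with the variance explained read off as R². A minimal sketch with invented scores (the study itself ran these analyses in SPSS 22.0; the numbers below are illustrative only):

```python
import numpy as np

def ols_r2(x, y):
    """Fit y = a + b*x by least squares and return (slope, R^2)."""
    x, y = np.asarray(x, float), np.asarray(y, float)
    b, a = np.polyfit(x, y, 1)                 # slope, intercept
    resid = y - (a + b * x)
    ss_res = np.sum(resid ** 2)                # residual sum of squares
    ss_tot = np.sum((y - y.mean()) ** 2)       # total sum of squares
    return b, 1 - ss_res / ss_tot

# Illustrative data only: stress scores vs. satisfaction scores.
stress = [55, 60, 64, 70, 76, 80, 85, 90]
satisfaction = [60, 58, 55, 50, 47, 44, 40, 36]
slope, r2 = ols_r2(stress, satisfaction)
print(round(slope, 2), round(r2, 3))  # negative slope; R^2 is the variance explained
```

A negative slope with a large R², as in this toy data, is the pattern the paper reports: higher stress predicts lower satisfaction, with stress explaining about half of the variance in overall satisfaction.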
4 Discussion
According to the results of this study, power grid employees carry a certain amount of pressure, with skilled employees under relatively higher pressure. The overall mental health of power grid employees is good; compared with managerial and professional-technical employees, however, skilled employees score relatively worse on general mental health, obsessive-compulsive symptoms, interpersonal sensitivity, depression, anxiety, phobic anxiety, paranoia and psychoticism. This result is consistent with the previous literature [6–8]. Skilled employees are generally grass-roots employees or grass-roots managers who face many stressful situations. The security of the working environment and the stability of work arrangements have a great impact on them, and team coordination, unreasonable arrangements by superior managers, and the interaction between family and work can also create psychological pressure, leading to psychological imbalance, a state of psychological stress and a worsening of mental health. In view of these results, it is suggested that enterprises take skilled workers as the key target of psychological services and pay attention to the
improvement of employees' mental health while carrying out skill training, so that employees become acquainted with the basics of mental health and with means of coping with stress, avoiding enterprise crisis events caused by poor mental health or excessive stress. Compared with other enterprises, power corporation employees' job satisfaction is relatively low: the sense of responsibility, satisfaction with working conditions and extrinsic rewards are all below the norm, and only satisfaction with leaders is above the norm. The reasons can be attributed to the power company's own work responsibilities and work systems and to the employees themselves. On the system side, many studies have already provided a full analysis [8–11]. As for the employees themselves, two aspects deserve attention: first, to fully understand the direction of self-development and the significance of the position for personal growth, for the enterprise, and even for the national infrastructure; second, to understand themselves, attend to their level of mental health, correct irrational beliefs, and enhance self-resilience.
5 Conclusions and Suggestions
This study reveals the relationship between work stress, mental health and job satisfaction and, more importantly, shows that the influence of job stress on job satisfaction is the more significant one. Mental health status also has a significant impact on job satisfaction, the sense of responsibility, satisfaction with working conditions and external rewards. Somatization has a significant effect on overall satisfaction and extrinsic rewards, and depression has a significant effect on job satisfaction, the sense of responsibility, working-condition satisfaction and extrinsic rewards. It must be acknowledged that no work stress scale specific to power corporation staff was available, and the mental health scale and stress scale inevitably overlap to some degree, which has also affected the measurement results. Moreover, the mechanism through which stress level and mental health affect job satisfaction needs further explanation; this is a limitation of the research and will be addressed in future work. Enterprises can take corresponding measures in selection, training and other aspects, such as selecting individuals with a high level of mental health during recruitment and adding mental health counseling and emotion regulation courses to training. According to employees' different types of work, corresponding measures should be formulated to reduce the work pressure of employees in the power system; psychological guidance should be strengthened according to individual characteristics; special institutions and service personnel should be set up to meet employees' demands; and more flexible and humane support measures should be developed.
From the point of view of individual employees in the power system, they can also increase their job satisfaction and realize more value by adopting adjustment methods suited to themselves, correctly coping with and resolving pressure at work, and improving their mental health, so as to achieve common individual and collective development in the long run.
Acknowledgments. This research was supported financially by the China Southern Power Grid (Grant Nos. GDKJXM20185761 and GDKJXM20200484).
References
1. Chen, S., Yang, F., Tang, K., et al.: Simulation analysis on biological effect of ultra-high voltage power frequency electromagnetic field on human body. Guangdong Electric Power 30(10), 111–115 (2017). (in Chinese)
2. Johansen, C.: Exposure to electromagnetic fields and risk of central nervous diseases among employees at Danish electric companies. Ugeskr Laeger 164(1), 50–54 (2001)
3. Li, C.Y., Chen, P.C., Sung, F.C., et al.: Residential exposure to power frequency magnetic field and sleep disorders among women in an urban community of Northern Taiwan. Sleep 25(4), 428–432 (2002)
4. Wei, W.: Life quality survey of Guangzhou industrial frequency electromagnetic field operators. Guangdong Med. 30(7), 1147–1149 (2009). (in Chinese)
5. Chen, Q., Pan, X., Chu, Q., et al.: Epidemiological studies on depression and influencing factors in telecommunication workers. J. Third Mil. Med. Univ. 30(12), 1186–1188 (2008). (in Chinese)
6. Feng, J., Qin, Q.: A review of the research on job satisfaction. Psychol. Sci. 17(3), 133–136 (2004). (in Chinese)
7. Ma, S., Wang, C., Hu, J., Zhang, X.: The relationship between job stress, job satisfaction and turnover intention: the moderating effect of psychological capital. Chin. J. Clin. Psychol. 34(2), 140–143 (2010). (in Chinese)
8. Liu, P., Xie, J., Jing, R.: An empirical study on the relationship between job stress and job satisfaction in state-owned enterprises. Chin. Soft Sci. 12(4), 121–126 (2012). (in Chinese)
9. Yan, A., Wei, Y.W.: An empirical study on job stress and job satisfaction of grass-roots employees in small and medium-sized private enterprises. J. Manag. 42(7), 222–229 (2007). (in Chinese)
10. He, P.Y., Wang, Z., Zhou, H., et al.: Effects of urban high-intensity electromagnetic radiation on neurobehavioral function in working populations. Lab. Med. Clin. Pract. 4(6), 568–569 (2007). (in Chinese)
11. Cooper, A.R., Van Wijngaarden, E., Fisher, S.G., et al.: A population-based cohort study of occupational exposure to magnetic fields and cardiovascular disease mortality. Ann. Epidemiol. 19(1), 42–48 (2009)
Review on Biologic Information Extraction Based on Computer Technology

Yuzhong Liu1,2, Zhiqiang Lin3(✉), Zhixin Yang4, Hualiang Li1,2, and Yali Shen1,2

1 Key Laboratory of Occupational Health and Safety of Guangdong Power Grid Co., Ltd., Guangzhou, China
2 Electric Power Research Institute of Guangdong Power Grid Co., Ltd., Guangzhou, China
3 Jingzhou Central Hospital, Jingzhou, China
[email protected]
4 Guangdong Power Grid Co., Ltd., Guangzhou, China
Abstract. With the continuing development of computer and sensor technology, artificial intelligence has been applied to a variety of scenarios, providing solutions to problems in different industries. Perceiving and recognizing human biological information, such as physiological state, emotion and biological identity, is a new dimension in machine vision research and contributes to auxiliary judgment and decision making. Focusing on the extraction of human biological information, this paper reviews three areas: image-based physiological signal acquisition, sleep information extraction, and image-based emotion recognition. For image-based physiological signal acquisition, blind source separation and Eulerian video magnification are the two most commonly used methods for extracting pulse information from non-contact video. Based on the principle of photoplethysmography (PPG), other methods such as signal weighting analysis and supervised learning algorithms are also gradually being applied to non-contact video acquisition of pulse-related physiological signals. For sleep information extraction, sleep quality can be analyzed through heart rate and breathing signals. For image-based emotion recognition, there is still room for improvement in the accuracy of parameter extraction.

Keywords: Biologic information extraction · Computer technology · Physiological signal acquisition · Sleep information extraction · Emotion recognition
1 Introduction
With the continuing development of computer and sensor technology, artificial intelligence has been applied to a variety of scenarios, providing solutions to problems in different industries. Some computer scientists define artificial intelligence as the study of "intelligent agents": any means and devices that can perceive their environment and take actions that maximize their chances of achieving their goals. As
© The Editor(s) (if applicable) and The Author(s), under exclusive license to Springer Nature Singapore Pte Ltd. 2021
M. Atiquzzaman et al. (Eds.): BDCPS 2020, AISC 1303, pp. 1235–1241, 2021. https://doi.org/10.1007/978-981-33-4572-0_177
an important application of artificial intelligence, computer vision is nowadays widely applied in face recognition and detection, human behavior recognition, vehicle violation monitoring, optical character recognition, driverless technology and transmission line inspection [1]. Generally speaking, applications of computer vision focus mainly on the recognition of environmental and semantic information, while the recognition of biological information related to the human body has progressed relatively slowly. Perceiving and recognizing human biological information, such as physiological state, emotion and biological identity, is a new dimension in machine vision research and contributes to auxiliary judgment and decision making. Focusing on the extraction of human biological information, this paper reviews three areas: image-based physiological signal acquisition, sleep information extraction, and image-based emotion recognition.
2 Image-Based Physiological Signal Acquisition
With improving material living standards, people pay increasing attention to quality of life and health status. Physiological signal acquisition can effectively help people understand their own health and prevent disease in advance. After the eras of invasive and non-invasive contact technology, human physiological signal acquisition has gradually entered the era of non-contact acquisition [2]. Compared with traditional acquisition technology, non-contact acquisition not only allows people's information to be obtained in a more comfortable and natural state but also enables long-term continuous monitoring. Current non-contact heart rate detection methods mainly include laser Doppler technology [3], microwave Doppler technology [4], and thermal imaging technology [5]. The first two are radar-type methods that use energy-carrying electromagnetic waves such as microwaves and lasers as the propagation medium, but radar equipment is so sensitive to displacement that slight movement interference is difficult to avoid, and long-term microwave or laser irradiation exposes the human body to considerable radiation, making these methods unsuitable for long-term monitoring. There is also a measurement method based on infrared imaging, proposed by the Roberts Institute of Imaging in 1999, but thermal imaging requires a dedicated and relatively expensive infrared camera to detect heart rate, which likewise makes it unsuitable for daily monitoring. In recent years, non-contact signal extraction based on imaging photoplethysmography (IPPG) has received growing attention. IPPG is a further development of photoplethysmography, which was first proposed by Hertzman in 1937 [6].
The principle of IPPG is that the content of oxygenated hemoglobin in the blood changes periodically as the heart beats. Different oxyhemoglobin contents absorb and reflect light differently, which causes the skin color to change periodically with the heartbeat, and video completely records this change. By analyzing the color
change of the image, information such as the heart rate can be obtained. The volume pulse of blood flow contains much important physiological information about the cardiovascular system, such as cardiac function and blood flow. Because volume pulse blood flow exists mainly in microvessels such as arterioles and capillaries in the peripheral circulation, it also carries abundant physiological and pathological information about the microcirculation and is an important source of information for studying the human circulatory system. In 2007, Sun et al. [7] proposed the concept of IPPG and applied it to detect skin blood flow and related fluctuations to study the healing of wounds on the skin surface. Pavlidis et al. first proposed the hypothesis of extracting physiological signals from human faces. In 2005, Wieringa et al. [8] studied extracting pulse wave signals through a camera under LED illumination at different wavelengths, revealing the technical prospect of non-contact reflective blood oxygen saturation measurement from 2D camera images. In 2013, Sun et al. [9] used IPPG to perform non-contact measurement of pulse rate, respiratory rate and pulse rate variability (PRV) and compared the results statistically with contact PPG sensors, providing strong support for IPPG based on low-cost web cameras. The main aim of remote non-contact PPG (rPPG) is to capture changes in the blood volume of the human face in video images. Heart rate is evaluated by obtaining the average intensity of the skin area and then converting the signal to the frequency domain with a Fourier or wavelet transform for analysis. First, faces are manually selected or automatically detected and the regions of interest are located; then different algorithms are applied to extract heart rate signals.
For example, different RGB color models [10], blind source separation [11] and motion magnification are used. First, the face in each frame is detected, and the averages of the three RGB color channels are extracted as three time series. Second, independent component analysis (ICA) is used to extract the heart rate signal from the RGB signals. Finally, the signal is converted to the frequency domain and the largest peak is taken as the heart rate. Current research shows that blind source separation and Eulerian video magnification are the two most commonly used methods for extracting pulse information from non-contact video. Eulerian video magnification is a micro-motion magnification technology; research in this area mainly includes two methods, the Lagrangian motion magnification method based on optical flow and motion features, and the Eulerian video magnification method based on pixel-wise time-frequency analysis. Compared with the Lagrangian perspective, the Euler perspective requires no manual or other a priori knowledge to identify particles and track their movement accurately; it assumes that the entire image is changing, and only the frequency band of interest needs to be extracted and enhanced. However, compared with the pulse rate alone, the pulse wave signal itself contains more physiological and pathological information, which is currently ignored by many studies that apply Eulerian video magnification to extract pulse information from non-contact video. Based on the principle of photoplethysmography (PPG), other methods such as signal weighting analysis and supervised learning algorithms are also gradually being applied to non-contact video acquisition of pulse-related physiological signals. To obtain pulse wave information from changes in face color, it is necessary to
locate and track the face first, because the face in the video may shake or background interference may be present. Traditional target tracking models fall into two main categories. One is the generative model, which describes the object itself regardless of background information; if the tracked object is occluded or changes greatly, the track gradually drifts away, making tracking difficult. The other is the discriminative model, which trains a classifier on selected samples to distinguish the target from the background. During data collection, environmental noise affects the measurement results, so a stage for enhancing image quality is included in the pulse signal extraction process. Common image noise falls into four main categories: Poisson noise, Gaussian noise, multiplicative noise, and salt-and-pepper noise. Denoising methods come mainly from two domains, spatial-domain denoising and frequency-domain denoising. In spatial filtering, a filter template is moved across the image at a preset step size, and data operations are performed over the corresponding image region using the filter coefficients and template. On this principle, spatial filters can be divided into two categories. One is the linear filter, mainly suited to Gaussian noise, whose common spatial-domain algorithms include mean filtering and neighborhood averaging. The other is the nonlinear filter, whose denoising effect depends directly on the pixel neighborhood values; it mainly includes median filtering, which is often used to eliminate impulse noise. The commonly used frequency-domain filter is mainly the wavelet transform.
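The blind-source-separation pipeline described above (average the RGB channels over a face region, unmix with ICA, then take the dominant spectral peak as the heart rate) can be illustrated roughly as follows. This is a schematic under simplifying assumptions: a synthetic RGB trace stands in for real video frames, and scikit-learn's FastICA stands in for whatever ICA variant a given study used:

```python
import numpy as np
from sklearn.decomposition import FastICA

fs = 30.0                      # camera frame rate (Hz), assumed
t = np.arange(0, 20, 1 / fs)   # 20 s of "video"
pulse = np.sin(2 * np.pi * 1.2 * t)  # 1.2 Hz heartbeat, about 72 bpm
noise = 0.5 * np.random.default_rng(0).normal(size=(3, t.size))
# Synthetic per-frame mean RGB values: the pulse mixed into all three channels.
rgb = np.array([0.3, 1.0, 0.6])[:, None] * pulse + noise

# Unmix the three channel traces into independent sources.
sources = FastICA(n_components=3, random_state=0).fit_transform(rgb.T).T

# Pick the source with the strongest peak in the 0.7-4 Hz heart-rate band.
freqs = np.fft.rfftfreq(t.size, 1 / fs)
band = (freqs >= 0.7) & (freqs <= 4.0)
best_bpm, best_power = 0.0, -1.0
for s in sources:
    spec = np.abs(np.fft.rfft(s - s.mean()))
    k = np.argmax(spec[band])
    if spec[band][k] > best_power:
        best_power = spec[band][k]
        best_bpm = 60.0 * freqs[band][k]
print(round(best_bpm))  # close to 72 beats per minute for this synthetic signal
```

Restricting the peak search to a physiologically plausible band, as here, is what makes the method robust to low-frequency illumination drift and high-frequency sensor noise.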
3 Sleep Information Extraction
With the accelerating pace of modern life, more and more people are troubled by sleep disorders and related diseases. Early sleep monitoring, observation of sleep stages, and study of sleep patterns are therefore of great significance to the diagnosis and treatment of sleep-related diseases. In the field of sleep monitoring, polysomnography (PSG) has always been the gold standard for sleep health detection and diagnosis. This technology requires at least 7 physiological signals, including the electroencephalogram, electrocardiogram and electrooculogram. However, polysomnography is highly specialized, costly and inconvenient to operate, which hinders the wide application of sleep monitoring in medical and non-medical markets. Portable sleep monitoring equipment based on computer and sensor technology has therefore emerged. According to their principles and functional characteristics, portable sleep monitoring approaches can be divided into four methods: sleep monitoring based on EEG signals, sleep monitoring based on autonomic signals, sleep monitoring based on human activity information, and non-invasive sleep monitoring. The first two methods are relatively accurate but require the subject to wear electrodes on the brain or chest, which makes them greatly affected by body movement. The third method does not require the subject to wear electrodes on the
brain or chest; the subject only needs to wear a wearable device such as a wristband while sleeping, but the monitoring accuracy is relatively low. The flexible headband with vertical sensors developed by Zeo can record EEG signals and send them wirelessly to a receiver for data analysis. The device embeds a proprietary neural network model that analyzes the data stream and can, within 30 s, identify wakefulness, light non-rapid eye movement sleep, deep non-rapid eye movement sleep, and rapid eye movement sleep. Compared with data monitored by PSG, the accuracy of the Zeo device reaches about 75% [12]. The Heally system is a shirt with embedded respiration and heart rate sensors and wire electrodes that can monitor the body's autonomic signals (ECG, jaw EMG, EOG); its accuracy is similar to that of sleep monitoring based on human activity [13]. Sleep monitoring based on human activity, realized with acceleration sensors, is currently popular on the market; common devices include the Lark wristband with a micro-vibration reminder, Jawbone, Wakemate and other watch-like devices that record limb movements. In addition, non-invasive sleep monitoring usually incorporates mechanical sensors into the mattress or pillow to monitor heart rate and breathing, or monitors sleep through the microphone built into a mobile phone, radar based on the Doppler effect, or optical video analysis. During different sleep stages, the heart rate and breathing driven by bodily functions change significantly: in the NREM phase, heart rate and respiratory signals are highly consistent and the waveforms change steadily, whereas in the REM phase this consistency is not obvious. Therefore, sleep quality can be analyzed through heart rate and breathing signals.
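Reducing the heart-rate or breathing signals used for sleep analysis to rates typically comes down to spectral peak picking in the relevant frequency band. A rough sketch on a synthetic chest-motion trace (the sampling rate and signal here are assumptions, not taken from any particular device):

```python
import numpy as np

fs = 10.0                          # sensor sampling rate (Hz), assumed
t = np.arange(0, 60, 1 / fs)       # one minute of chest-motion signal
resp = np.sin(2 * np.pi * 0.25 * t)  # 0.25 Hz, about 15 breaths per minute
signal = resp + 0.3 * np.random.default_rng(1).normal(size=t.size)

# Breathing rate = dominant frequency in the 0.1-0.5 Hz respiratory band.
freqs = np.fft.rfftfreq(t.size, 1 / fs)
spec = np.abs(np.fft.rfft(signal - signal.mean()))
band = (freqs >= 0.1) & (freqs <= 0.5)
breaths_per_min = 60.0 * freqs[band][np.argmax(spec[band])]
print(round(breaths_per_min))  # close to 15 for this synthetic trace
```

In a mattress- or pillow-based system, the same peak-picking step would run on the mechanical sensor output, with a higher band (roughly 0.7 to 4 Hz) used for the heart-rate component.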
4 Image-Based Emotion Recognition
As artificial intelligence evolves, emotion recognition, an essential part of human-computer interaction, is receiving more and more attention. Most studies use information such as facial expressions, language and physiological signals for affective computing, the physiological signals including respiration, electromyography, skin conductance, pulse wave signals, the electrocardiogram and HRV [14, 15]. At present, emotion recognition based on HRV parameters has gradually matured and achieves high accuracy. Valenza et al. [16] used images from the International Affective Picture System to induce positive and negative emotions, collected ECGs from 4 healthy volunteers, calculated HRV features, and achieved short-term (10 s) emotion recognition with an SVM classifier at an accuracy of 90%. Wang et al. [17] identified driving stress by dividing it into three levels (low, medium and high), collecting the subjects' ECG signals, calculating HRV features, performing feature selection with principal component analysis (PCA) and linear discriminant analysis (LDA), and classifying with K-nearest neighbors (KNN); the final classification accuracy was 97.78%. Karthikeyan et al. [18] used the Stroop color-word experiment as a stress
source to collect electrocardiograms from 60 volunteers; a probabilistic neural network (PNN) and K-nearest neighbors (KNN) were used to recognize the two states of stress and calmness over short terms (32 s), with an accuracy above 90%. There is still room for improvement in emotion recognition based on facial images. After the facial image is extracted by face detection, simple channel separation is usually performed without in-depth analysis of the image data. Compared with the heart rate curve extracted later, the facial image data contain more useful information; if the face image is filtered at an early stage and the image noise caused by lighting and shaking is effectively removed, the accuracy of parameter extraction for facial-image-based emotion recognition will improve further. Moreover, how to perform emotion recognition for multiple people in the same image remains to be studied.
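The HRV-based classifiers cited above (KNN, PNN, SVM over features of RR-interval series) can be sketched as follows. Everything here is illustrative: synthetic RR intervals stand in for real ECG-derived data, and the two time-domain features used, SDNN and RMSSD, are standard HRV measures but not necessarily the feature sets those studies used:

```python
import numpy as np
from sklearn.neighbors import KNeighborsClassifier

rng = np.random.default_rng(2)

def hrv_features(rr_ms):
    """Time-domain HRV features from an RR-interval series (milliseconds)."""
    rr = np.asarray(rr_ms, float)
    sdnn = rr.std()                              # overall variability
    rmssd = np.sqrt(np.mean(np.diff(rr) ** 2))   # beat-to-beat variability
    return [sdnn, rmssd]

def synth_rr(mean_ms, sd_ms, n=120):
    """Synthetic RR series; stress is mimicked by a faster, less variable beat."""
    return rng.normal(mean_ms, sd_ms, n)

# Label 0 = calm (slower, more variable), 1 = stressed (faster, less variable).
X = [hrv_features(synth_rr(850, 60)) for _ in range(30)] \
  + [hrv_features(synth_rr(650, 20)) for _ in range(30)]
y = [0] * 30 + [1] * 30

clf = KNeighborsClassifier(n_neighbors=5).fit(X, y)
test_calm = hrv_features(synth_rr(860, 55))
test_stress = hrv_features(synth_rr(640, 18))
print(clf.predict([test_calm, test_stress]))  # [0 1]
```

The real studies differ in how the RR series is obtained (ECG R-peak detection) and in their feature sets and classifiers, but the pipeline shape, features from interval variability followed by a supervised classifier, is the same.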
5 Conclusions and Suggestions This study reviews research on biologic information extraction based on computer technology. To apply deep neural networks to biologic information extraction, three main problems must be solved: how to select and construct a deep neural network for biologic information classification; how to train such a network when labeled biologic data are scarce; and how data augmentation technology can be adapted to biologic information extraction. Every field of computer vision research currently faces technical bottlenecks, and both traditional methods and deep learning leave room for reference and improvement. Further learning and exploration are needed, especially on how deep learning can realize its potential in these fields: how to perform deep feature extraction, how to handle insufficient labeled data, how to design and train a suitable deep model for a new problem, and so on. Acknowledgments. This research was supported financially by the China Southern Power Grid (Grant Nos. GDKJXM20185761 and GDKJXM20200484).
References
1. Zhao, Z., Qi, H., Nie, L., et al.: Research overview on visual detection of transmission lines based on deep learning. Guangdong Electric Power 32(9), 11–23 (2019). (in Chinese)
2. Shelley, K.H., Shelley, S.: Pulse oximeter waveform: photoelectric plethysmography. In: Clinical Monitoring: Practical Applications for Anesthesia & Critical Care, pp. 420–423 (2001)
3. Cosoli, G., Casacanditella, L., Tomasini, E.P., et al.: The non-contact measure of the heart rate variability by laser Doppler vibrometry: comparison with electrocardiography. Meas. Sci. Technol. 27(6), 65–70 (2016)
4. Matsunaga, D., Izumi, S., Kawaguchi, H., et al.: Non-contact instantaneous heart rate monitoring using microwave Doppler sensor and time-frequency domain analysis. In: IEEE International Conference on Bioinformatics and Bioengineering, p. 172. IEEE (2016)
Review on Biologic Information Extraction Based on Computer Technology
1241
5. Ko, M.: Applications of long range dependence characterization in thermal imaging & heart rate variability. Dissertations & Theses - Gradworks, p. 175 (2015)
6. Hertzman, A.B.: Photoelectric plethysmography of the fingers and toes in man. Exper. Biol. Med. 37(3), 529–534 (1937)
7. Pavlidis, I., Dowdall, J., Sun, N., et al.: Interacting with human physiology. Comput. Vis. Image Underst. 108(1–2), 150–170 (2007)
8. Wieringa, F.P., Mastik, F., van der Steen, A.F.W.: Contactless multiple wavelength photoplethysmographic imaging: a first step toward "SpO2 camera" technology. Ann. Biomed. Eng. 33(8) (2005)
9. Sun, Y., Hu, S., et al.: Noncontact imaging photoplethysmography to effectively access pulse rate variability. J. Biomed. Opt. 18(6), 061205 (2013)
10. Poh, M.Z., McDuff, D.J., Picard, R.W.: Non-contact, automated cardiac pulse measurements using video imaging and blind source separation. Opt. Express 18(10), 10762–10774 (2010)
11. Poh, M.Z., McDuff, D.J., Picard, R.W.: Advancements in noncontact, multiparameter physiological measurements using a webcam. IEEE Trans. Biomed. Eng. 58(1), 7–11 (2011)
12. Shambroom, J.R., Fabregas, S.E., Johnstone, J.: Validation of an automated wireless system to monitor sleep in healthy adults. J. Sleep Res. 21(2), 221–230 (2012)
13. Karlen, W., Mattiussi, C., Floreano, D.: Sleep and wake classification with ECG and respiratory effort signals. IEEE Trans. Biomed. Circuits Syst. 3(2), 71–78 (2009)
14. Goshvarpour, A., Abbasi, A., Goshvarpour, A.: Fusion of heart rate variability and pulse rate variability for emotion recognition using lagged Poincaré plots. Australas. Phys. Eng. Sci. Med. 6(4), 385–390 (2017)
15. Ekman, P.: An argument for basic emotions. Cogn. Emot. 6(3–4), 169–200 (1992)
16. Valenza, G., Citi, L., Lanatà, A., et al.: A nonlinear heartbeat dynamics model approach for personalized emotion recognition. In: International Conference of the IEEE Engineering in Medicine & Biology Society (2013)
17. Wang, J.S., Lin, C.W., Yang, Y.T.C.: A k-nearest-neighbor classifier with heart rate variability feature-based transformation algorithm for driving stress recognition. Neurocomputing 116, 136–143 (2013)
18. Karthikeyan, P., Murugappan, M., Yaacob, S.: Detection of human stress using short-term ECG and HRV signals. J. Mech. Med. Biol. 30(10), 111–115 (2013)
Design of City Image Representation and Communication Based on VR Technology Yan Cui1 and Yinhe Cui2(&) 1
School of Literature and Journalism, Inner Mongolia University for Nationalities, Tongliao 028043, China 2 Dalian University of Technology, Dalian 116029, China [email protected]
Abstract. This thesis aims to establish a VR representation system for the urban image, using different computational methods to change the viewing angle or enhance the experience, furthering the dissemination of the city's image among stratified audiences, and combining real representation with media representation so that the image of the city is reproduced in the media world, infusing views and constructing meaning, and the audience comes to understand the city from a particular perspective without even realizing it.
Keywords: VR technology · City image · Communication design · The metropolis in the modern sense
Since the emergence of the modern metropolis in the 19th century, civilization has been closely linked with the city, and in a certain sense the modernization of human society has advanced through urbanization. Modernization and urban construction in the 21st century have entered a period of high-speed development. New communication media not only provide a new platform for constructing the city image but also endow it with new functions; their extensive use can turn a city into an "internet celebrity". Taking Dalian as an example, in the first quarter of 2018 new-media short videos attracted more than ten million people to visit the romantic city. As an important way of presenting future images, VR will certainly become a new platform for constructing the images of China's socialist cities; in building city images with VR technology, we should pay attention not only to the experience of the new technology but also to the construction of communicative meaning by the new media form. Based on these two points, the theories of "city image" and "media representation" are the essential basis of this paper. Its practical value lies in establishing a VR city image construction and representation system: using different computational methods to change the viewing angle or enhance the experience, strengthening the dissemination of the city image among stratified audiences, refining the system's module design, and establishing an overall system with clear functions from the overall effect down to each block experience. The system can serve government public relations, publicity activities, education and guidance, and other forms, used together to build the city image.
© The Editor(s) (if applicable) and The Author(s), under exclusive license to Springer Nature Singapore Pte Ltd. 2021 M. Atiquzzaman et al. (Eds.): BDCPS 2020, AISC 1303, pp. 1242–1246, 2021. https://doi.org/10.1007/978-981-33-4572-0_178
Design of City Image Representation and Communication
1243
This paper aims to establish a VR representation system of the city image, combining real representation with media representation so that the city image is injected into the media world and its meaning constructed, allowing the audience to see the city from a certain angle without realizing it. Communication is always a double articulation of symbol and meaning, and different carriers often reproduce the same thing differently [1]. The significance of new-media representation for city image communication lies in the final construction of that image: the media's reproduction of a city is not a simple, plane-mirror reflection, but involves the media, the city, culture, and even ideology and power. This paper therefore studies the VR virtual representation system not only at the level of realistic representation but also at the levels of symbolic and significative representation of the urban image [2]. Virtual reality technology has three basic features: immersion, interactivity, and conception. Based on these three features, this paper builds three corresponding visual design modules, grounded in the inner relationships between immersion and objective reality, interactivity and reality, and conception and subjective reality, and studies how each specific module combines with the dissemination of meaning in city image construction. The research is divided into three modules [3].
1 Landscape Reproduction Module Starting from the visual experience through which the brain perceives and shapes the city, this module uses the immersion of the VR virtual reality system to virtually represent the urban landscape. By introducing cinematic viewing effects, unconventional technical designs of viewing angle and mode of entry can enhance the experience and strengthen the landscape's symbolic features [4]. Unique viewing angles and entry modes produce different psychological effects on the audience: Dalian, for example, can be entered from an aerial perspective, an unconventional view that gives the audience a solid yet unknowable psychological implication. The landscape reproduction module mainly reproduces the city's image, landform, and architectural features through three-dimensional modeling; its communication effect is mainly that of visual symbols. A city image designed with VR technology lets the viewer feel the urban space in various motion modes and from specific angles [5]. In this virtual environment, people can experience the city image in motion by walking, driving or even flying, or from a specific vantage point, strengthening the symbolic features that the city's image presents visually: looking down on the city from its high points, feeling the city image at the city entrance, along the main landscape axis, or in the main square, and displaying the city's landmarks three-dimensionally, using VR's audio-visual impact to implant symbols into perception. Visual design first determines the landmark urban architecture with distinctive urban characteristics and, according to the actual situation, chooses the panoramic shot
1244
Y. Cui and Y. Cui
or three-dimensional modeling (ground-perspective panoramic shooting for reproduction; air-perspective three-dimensional modeling for reproduction). The visual design proceeds along three dimensions: (1) viewing angle, divided into head-up, elevation, and overlooking views; (2) movement mode, divided into walking, vehicle movement, and flight; (3) auditory assistance, divided into background music and scene sound design, completing the further enhancement of the city's architectural visual symbols. For example, for the light rail with local Dalian characteristics, the module combination of top-down perspective + flight entry + live sound can be chosen according to the characteristics of the city image and its symbols, to strengthen them further. How to configure the modules for buildings with different symbols, how to judge the concrete standards, and how to design the visual emphasis and logic are the concrete contents this project demonstrates through experimental research; the final goal is to use the advantages of VR visual presentation to strengthen the symbols of the city image [6].
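The three design dimensions above can be encoded as a small configuration table. The following sketch is purely illustrative (the value names and the `make_preset` helper are hypothetical, not part of any described system); it composes the Dalian light-rail example from the text:

```python
# Illustrative encoding of the three visual-design dimensions described above.
VIEW_ANGLES = ("head-up", "elevation", "overlooking")
MOVEMENTS = ("walking", "vehicle", "flight")
AUDIO = ("background music", "scene sound")

def make_preset(angle, movement, audio):
    """Validate and assemble one module combination (hypothetical helper)."""
    assert angle in VIEW_ANGLES and movement in MOVEMENTS and audio in AUDIO
    return {"angle": angle, "movement": movement, "audio": audio}

# The Dalian light-rail example: top-down view + flight entry + live sound.
dalian_light_rail = make_preset("overlooking", "flight", "scene sound")
```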
2 The Scene Reproduction Module This module further constructs the city image through virtual scenes and the experience of humanistic life. Its design is mainly based on the interactive features of VR: the user inspects or manipulates objects in the virtual environment through natural skills such as language and body movements, and the city's humanistic and historical image is disseminated within the reproduced scene. At the same time, in virtual 3D the viewer can switch between different scenarios in real time and experience different urban humanistic scenes from the same observation point or sequence, such as different historical periods or different seasons; the history, humanities, and character carried by the city image are displayed to the viewer through virtual experience. The VR visual design of this part mainly enhances interactivity in scene reproduction. The scene module takes characteristic time points of the city image as its display core, relying mainly on contrastive experience, and takes storytelling as its main mode of experience: the audience can experience "watching the story" in the VR presentation, obtaining it mainly by viewing, while interactive links can also be designed along the course of the story [7]. Through this immersive experience, city image construction is not merely external sensory stimulation; it further stimulates the brain to think independently and enhances the dissemination of the city image's symbolic meaning [8].
3 Conceptual Reproduction Module This module connects the urban management department with planning for the city's future. The city image is a systems engineering project, closely related to the long-term planning of the urban management department, and in the VR design the urban-management part of the visual representation presents the city's future planning. This aspect
of the design strengthens the publicity effect and can support government decision-making and public participation in external dissemination. Real-time interactive VR presentation of planning enables decision-makers, urban construction departments, urban management departments, and the public to better grasp the current situation and future development of the city and to understand the concepts and intentions of urban construction and planning; it thus provides an ideal platform for effective communication between the government and the public, and supplies visual material for city image publicity and investment attraction [9]. The three modules of this design construct the city image through virtual reality technology from three aspects: physical architecture, humanistic history, and future imagination. The content, design standards, modeling methods, and dissemination effects of the three modules all need to be designed and resolved in the course of the research. The modules are introduced into the city image communication system according to the interests of different groups, while also allowing the audience to trace the city managers' overall will and grasp the overall planning of the city image; the specific design for each city must follow that city's own planning, its own characteristics, and the country's overall policy [10]. The research content of this paper combines the construction of the city image with the VR representation system, takes the communication theory of media representation as the research framework, and clarifies the perspective from which the city image is reproduced and given meaning through the new media platform.
The core of the project is to set up the three-module VR reproduction system, namely the landscape reproduction module, the scene reproduction module, and the conception reproduction module; to refine the design ideas and relevant standards of the three modules; to establish the complementary function of visual design and communication theory; and to realize the mass communication of the city image through information technology, so that VR virtual representation becomes an important carrier of future visual presentation and the audience, in a natural state, comes to understand the city image from a specific angle and builds a complete image of the city. The design of the three modules should not only conform to communicative characteristics such as symbolism and signification but also possess the VR characteristics of immersion, interactivity, and conception; to obtain real and effective results, the study also examines how user-experience feedback and modeling data can improve the visual module design.
References
1. Fischhoff, B., Scheufele, D.A.: The science of science communication II. Proc. Natl. Acad. Sci. U.S.A. 111(Suppl. 4), 13583–13584 (2014)
2. Al-Aufi, A.S., Fulton, C.: Use of social networking tools for informal scholarly communication in humanities and social sciences disciplines. Procedia Soc. Behav. Sci. 147, 436–445 (2014)
3. He, Y., Cui, Y.-H.: Role transition and cross-border integration of the construction of scientific communication in the digital age. In: Proceedings of the 2015 International Conference on Management Science and Engineering. DEStech Publications, Inc. (2015)
4. Hansen, A.: The changing uses of accuracy in science communication. Public Underst. Sci. 25(7), 760–774 (2016)
5. Lupia, A.: Communicating science in politicized environments. Proc. Natl. Acad. Sci. U.S.A. 110(Suppl. 3), 14048–14054 (2013)
6. Hesse-Biber, S., Johnson, R.B.: Coming at things differently: future directions of possible engagement with mixed methods research. J. Mix. Methods Res. 7(2), 103–109 (2013)
7. Zhou, Y., Creswell, J.W.: The use of mixed methods by Chinese scholars in East China: a case study. Int. J. Mult. Res. Approaches 6(1), 73–87 (2014)
8. Gao, Q., Tian, Y., Tu, M.: Exploring factors influencing Chinese user's perceived credibility of health and safety information on Weibo. Comput. Hum. Behav. 45(45), 21–31 (2015)
9. Chang, C.M., Hsu, M.H.: Understanding the determinants of users' subjective well-being in social networking sites: an integration of social capital theory and social presence theory. Behav. Inf. Technol. 35(9), 720–729 (2016)
10. Yoon, Y., Ha, D.: The effect of SNS information quality on determinants of continuous use. Korean J. Hosp. Tour. 25, 46–62 (2016)
Image Approximate Copy Copyright Detection Technology and Algorithm for Network Propagation Under Big Data Condition Yan Cui1 and Yinhe Cui2(&) 1
School of Literature and Journalism, Inner Mongolia University for Nationalities, Tongliao 028043, China 2 Dalian University of Technology, Dalian 116029, China [email protected]
Abstract. This paper adopts an image copy detection approach that combines local and global features, giving full play to the matching advantages of different image features. It dynamically adjusts the weights of feature fusion according to the expressive ability of each feature and the content characteristics of different image data sets, calculates the optimal weight allocation, and proposes a dynamically updated weight allocation algorithm. This method further improves the accuracy and comprehensiveness of approximate image copy detection.
Keywords: Image copy detection algorithm · Weight allocation algorithm · Network propagation
According to statistics of the China Internet Network Information Center, as of June 2018 the number of netizens in China had reached 772 million, an Internet penetration rate of about 55.8%. With the popularity of the Internet, people can easily and quickly obtain multimedia information such as images, videos and audio through the network, and the demand for digital media information increases year by year [1]. Internet technology brings convenience to everyone but also a series of information security problems. Because multimedia information is highly portable and rapidly transmitted, multimedia works are easily copied, tampered with, and disseminated illegally. Pirates can easily evade piracy tracing by tampering with copies of digital images without affecting their content, which causes huge economic losses to copyright owners and hinders the development of the digital multimedia industry; by relevant estimates, losses from online video piracy reach 20 billion yuan. How to efficiently detect copyrighted image copies in image data sets and protect the copyright of image owners has therefore become a key issue in image processing. In the era of big data, the complexity and wide sources of multimedia information make it more difficult to prevent the arbitrary dissemination of tampered data; if the network supervision authorities want to supervise online multimedia data efficiently and accurately, they cannot rely only on laws, regulations, manual supervision and user reporting, but must also rely on scientific and technological means.
© The Editor(s) (if applicable) and The Author(s), under exclusive license to Springer Nature Singapore Pte Ltd. 2021 M. Atiquzzaman et al. (Eds.): BDCPS 2020, AISC 1303, pp. 1247–1252, 2021. https://doi.org/10.1007/978-981-33-4572-0_179
At present, watermarking and content-based copy detection (CCD) can be used for copyright protection and piracy tracking of
multimedia works. Watermarking technology has many limitations: the watermark must be embedded in a copyrighted work before publication, and once the watermark is cracked, copyright protection is lost [2]. Content-based copy detection (CCD) extracts unique, compact feature information from the original work, extracts features from a test work with the same algorithm, and judges whether the test work is a copy of the original by comparing the two sets of features. Because no additional information needs to be embedded in the media, CCD avoids the contradiction between invisibility and robustness in digital watermarking and is strongly robust to common copy attacks. With the growing popularity of personal media production and network media distribution, CCD has rapidly become a research hotspot in multimedia processing. Since the late 1990s, CCD technology has been widely applied in fields such as photo collection management, copyright infringement detection, and tampered-image detection. CCD is still developing, and detection accuracy and real-time performance remain research hotspots. CCD comprises two basic steps: feature extraction and feature matching. Generally, feature information can be extracted in two ways, from global information or from local information. Scholars have proposed many detection algorithms for image feature extraction, but most can resist only some kinds of geometric attacks, and detection of large-scale rotation, cropping, and hybrid attacks remains poor.
Building on previous studies, this paper adopts an image copy detection approach that combines local and global features, giving full play to the matching advantages of different image features: the weights of feature fusion are dynamically adjusted according to the expressive ability of each feature and the content characteristics of different image data sets, the optimal weight allocation is calculated, and a dynamically updated weight allocation algorithm is proposed, further improving the accuracy and comprehensiveness of approximate image copy detection [3]. Combining the current situation of digital image copyright protection, and through in-depth exploration of approximate copy detection technology, this research takes into account the characteristics, performance, and implementation cost of digital watermarking, image feature extraction, and other copyright protection technologies, and finally designs and develops an approximate image copy copyright detection model for network transmission [4].
1 Main Research Contents
Image copy detection technology. Image copy detection has been widely used as an effective means of image copyright protection. At present there are two main methods: watermarking technology and content-based copy detection.
Watermarking technology. Watermarking inserts specific copyright-marking information into the protected digital multimedia work (before the
Image Approximate Copy Copyright Detection Technology and Algorithm
1249
copyright works are published) without affecting normal use of the original work. After publication, copyright ownership is verified by extracting the watermark from the work [5].
Content-based copy detection. Content-based copy detection extracts content features from an image and then searches the image data set for approximate copies (including copies with size changes, rotation, cropping, contrast transformation, text insertion, noise interference, etc.) based on the extracted features, so as to achieve copyright protection [6]. There are two kinds of image feature extraction methods: those based on global image features and those based on local image features.
Global features. Global feature extraction analyzes the color attributes of image pixels and the relationships between them to obtain a set of numerical features describing the image content. It is simple to compute, resistant to signal attacks, fast in feature extraction, and efficient in matching. This study adopts a global feature based on the average-brightness order measure: the image is divided into blocks, and the sequence of average gray values of all blocks is taken as the global feature [7].
Local features. Image copy detection based on local features transforms the problem into the detection, description, and retrieval of local regions. SIFT reflects the gradient saliency relationships in local areas of the image; it is invariant to rotation and scale scaling, robust to brightness change, noise, filtering, affine transformation, and perspective transformation, and is recognized as one of the most stable feature matching algorithms at present [8].
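The block-based global feature described above can be sketched as follows. The block count and the use of the rank order of block means (rather than the raw means) are illustrative choices consistent with the average-brightness order measure:

```python
def block_rank_feature(gray, blocks=4):
    """Global feature: rank order of per-block mean brightness.

    `gray` is a 2-D list of pixel intensities. The image is split into
    blocks x blocks cells; each cell is replaced by the rank of its mean
    brightness, which is invariant to uniform brightness changes.
    """
    h, w = len(gray), len(gray[0])
    bh, bw = h // blocks, w // blocks
    means = []
    for by in range(blocks):
        for bx in range(blocks):
            cell = [gray[y][x]
                    for y in range(by * bh, (by + 1) * bh)
                    for x in range(bx * bw, (bx + 1) * bw)]
            means.append(sum(cell) / len(cell))
    order = sorted(range(len(means)), key=means.__getitem__)
    ranks = [0] * len(means)
    for rank, idx in enumerate(order):
        ranks[idx] = rank
    return ranks

# A uniform brightness shift leaves the rank sequence unchanged.
img = [[(x + 2 * y) % 97 for x in range(32)] for y in range(32)]
shifted = [[v + 40 for v in row] for row in img]
```

Because only the ordering of block means is kept, a global brightness change leaves the feature unchanged, which is exactly the robustness to illumination that the text attributes to this measure.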
Key technology. Existing research on image copy detection focuses mainly on feature vector extraction, index construction, and feature matching.
Feature extraction. Using a single global or local feature in image copy detection has limitations: the global feature algorithm is simple, fast, and robust to changes in image illumination, noise, and resolution, but sensitive to rotation and displacement, whereas the local feature algorithm is complex and slow to compute but robust to rotation and displacement, though sensitive to noise. Because local and global features are complementary, this paper proposes an image feature extraction algorithm that fuses them: local features are extracted with SIFT, and global features with the block brightness-sequence method, avoiding the poor robustness of a single feature.
Feature index construction. Network images are numerous, and the feature vectors extracted from them are high-dimensional. To speed up retrieval of the feature library and meet the current demands of approximate copy detection, an index must be built for the image feature library. This paper adopts multiple inverted index technology: a three-dimensional inverted index framework is constructed, each dimension corresponds to one image feature, and the three features are assigned by multiple methods.
Feature matching. In image copy detection, feature
matching (similarity measure) ultimately determines the copy relationship. The chosen matching method affects the accuracy of the detection results and the performance of the whole approximate copy detection system. Because a single image feature reflects only part of the image content, it has limited matching ability and low retrieval accuracy. This paper matches images by fusing multiple image features: the weights of feature fusion are dynamically adjusted according to the expressive ability of each feature and the content characteristics of different image data sets, and a dynamically updated weight allocation algorithm is proposed, which significantly enhances the overall detection performance of the system in feature matching.
Two modules.
Fast image copy detection module. This paper proposes an image copy detection module based on SIFT local features and the average-brightness sequence measure. First, regions whose content remains unchanged are delimited by matching SIFT features; then these regions are divided into blocks, and the sequence of average gray ranks of all blocks is taken as the image feature; finally, this feature quickly determines whether a copy exists [9].
Dynamically updated adaptive weight allocation module. To give full play to the matching advantages of different image features and allocate their fusion weights reasonably, a dynamically updated adaptive weight allocation algorithm is proposed in this paper.
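A minimal sketch of such dynamically updated weight fusion is given below. The update rule (boosting the weights of features that currently separate copies from non-copies best) is an assumption for illustration, not the paper's exact formula:

```python
def fuse_scores(scores, weights):
    """Weighted fusion of per-feature similarity scores (each in [0, 1])."""
    total = sum(weights)
    return sum(s * w for s, w in zip(scores, weights)) / total

def update_weights(weights, separations, rate=0.5):
    """Shift weight toward features that currently discriminate best.

    `separations` is a per-feature measure such as (mean copy score -
    mean non-copy score) on recent queries; this measure and the
    multiplicative update are illustrative assumptions.
    """
    boosted = [w * (1 + rate * max(sep, 0.0))
               for w, sep in zip(weights, separations)]
    norm = sum(boosted)
    return [b / norm for b in boosted]

# Three features (e.g. SIFT, block brightness, one more), equal start.
weights = [1 / 3] * 3
weights = update_weights(weights, [0.6, 0.1, 0.3])  # feature 0 separates best
score = fuse_scores([0.9, 0.4, 0.7], weights)
```

After the update, the most discriminative feature carries the largest weight, and the fused score stays in [0, 1].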
In feature fusion, the algorithm automatically adjusts the weights according to the expressive ability of each feature and the content characteristics of different image data sets, and calculates the optimal weight distribution, further improving the accuracy and comprehensiveness of image copy detection. Building on existing results, this paper focuses on watermarking, image feature extraction, and feature matching in the field of image copy detection, and intends to break through the following key points.
Algorithm improvement. A fast image copy detection algorithm based on SIFT feature points. First, the two-dimensional position information of the SIFT feature points is extracted; by calculating the distance and angle between each feature point and the image center, the number of feature points in each interval is counted by blocks, and a binary hash sequence is generated from these counts to form the first-level robust feature. Second, from the one-dimensional directional distribution of the feature points, the number of feature points in each direction sub-interval is counted, and second-level image features are constructed from these counts. Finally, a cascade filtering framework determines whether the image is a copy. The algorithm greatly reduces the initial search range, improves image processing speed, and reduces the amount of computation.
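The first-level feature can be sketched as follows: keypoint positions are binned by distance ring and angle sector around the image centre, and the bin counts are thresholded into a binary hash. The numbers of rings and sectors and the mean-count threshold are illustrative assumptions, not the paper's exact quantisation:

```python
import math

def position_hash(points, width, height, rings=4, sectors=8):
    """First-level robust feature from keypoint positions.

    Each keypoint (x, y) is assigned to a distance ring and an angle
    sector around the image centre; each bin's count is compared with
    the mean count to produce one hash bit.
    """
    cx, cy = width / 2.0, height / 2.0
    max_r = math.hypot(cx, cy)
    counts = [0] * (rings * sectors)
    for x, y in points:
        r = math.hypot(x - cx, y - cy)
        ring = min(int(r / max_r * rings), rings - 1)
        ang = (math.atan2(y - cy, x - cx) + math.pi) / (2 * math.pi)
        sector = min(int(ang * sectors), sectors - 1)
        counts[ring * sectors + sector] += 1
    mean = sum(counts) / len(counts)
    return [1 if c > mean else 0 for c in counts]

# Hypothetical keypoints in a 100 x 100 image.
pts = [(10, 10), (50, 60), (90, 20), (70, 80), (30, 70)]
bits = position_hash(pts, 100, 100)
```

Comparing two such bit sequences (e.g. by Hamming distance) gives the cheap first-stage filter of the cascade; only candidates that pass it need the more expensive second-stage checks.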
Image Approximate Copy Copyright Detection Technology and Algorithm
1251
A new method for checking spatial relations, Strong Geometry Consistency (SGC), is proposed. It makes full use of the scale, orientation and location information of local feature points to check the spatial relationship of matched feature points. Before detection, the local feature points are grouped; checking several groups simultaneously greatly speeds up spatial-relation detection, so mismatched local feature points can be filtered out more accurately.

Difficulties to solve: image copy detection algorithms are complex. Because images are often subjected to signal-processing attacks (chromaticity change, sharpening, etc.) and geometric attacks (cropping, rotation and scaling) during propagation, the attacked images differ considerably from the originals, and the extracted image features are generally complex [10]. In image copy detection, the complexity of feature matching is relatively high and the amount of computation is large, so detecting approximately copied images quickly and accurately has long been a hot and difficult problem in computer vision and pattern recognition. This research addresses approximate-copy copyright detection for images transmitted over networks. It uses a feature extraction algorithm based on local and global features, builds the feature index with multiple inverted indexes, and implements retrieval based on multiple image features. To exploit the matching advantages of different image features and distribute their fusion weights reasonably, a dynamically updated adaptive weight allocation module and a fast approximate-copy detection module are designed and developed in this study, further improving the accuracy and comprehensiveness of approximate image copy detection.
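In the spirit of the consistency check described above, a hedged Python sketch that filters matches by their dominant orientation difference; this is a simplified stand-in for full SGC, which also uses scale and location, and the 12-bin histogram is an invented parameter:

```python
import math
from collections import Counter

def filter_matches(matches, bins=12):
    """Keep matched keypoint pairs whose orientation difference
    agrees with the dominant difference across all matches.

    matches: list of ((x1, y1, angle1), (x2, y2, angle2)) tuples;
    a true copy rotates all points by roughly the same angle, so
    mismatches fall outside the dominant orientation-difference bin.
    """
    def bin_of(p, q):
        diff = (q[2] - p[2]) % 360.0
        return int(diff / 360.0 * bins) % bins

    votes = Counter(bin_of(p, q) for p, q in matches)
    dominant, _ = votes.most_common(1)[0]
    return [(p, q) for p, q in matches if bin_of(p, q) == dominant]
```

Grouping the matches and voting within each group, as SGC proposes, would parallelize this check across groups.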
At the same time, the key technologies involved in the model are described and summarized in detail in this paper. The research emphasis is determined, the corresponding theoretical analysis and explanation are given, the appropriate algorithm model is then derived from the theoretical analysis results, and practical application is verified. The main line of the research is: raising the problem - theoretical research - model design - application validation.

Research ideas. This paper surveys the current state of content-based image copy detection technology at home and abroad, and analyses in detail the image feature extraction techniques, feature index construction techniques and feature matching methods commonly used in content-based image copy detection. On this basis, this study designs a fast detection module for approximate image copies and a dynamically updated adaptive weight allocation module. These ideas must be verified through research and practice to judge their correctness.
1252
Y. Cui and Y. Cui
References

1. Wang, Z.: Study on Digital Story Telling (New Media Research Frontier). Communication University of China Press, Beijing (2018)
2. Zhou, R., Fang, K.: Research on the innovation model of books and publications under the supermedia narrative. Sci. Technol. Publ. (2015)
3. Wang, H.: Application of augmented reality technology in publishing. Publ. Print. (3) (2017)
4. Wang, Y.: Interactive narrative structure in E-book. Publ. J. (4) (2018)
5. Xu, L., Zeng, L.: Digital storytelling and interactive digital narrative. Publ. J. (3) (2016)
6. Beck, K.: Kommunikationswissenschaft [Communication], vol. 2964. UTB (2016)
7. Chaffee, S.H., Metzger, M.J.: The end of mass communication? Mass Commun. Soc. 4(4), 365–379 (2001)
8. Friesen, N.: Media Transatlantic: Developments in Media and Communication Studies Between North American and German-speaking Europe. Springer, Cham (2016)
9. Wu, J.: Private and Public in Social Network Sites: Digital Diversity and Similarity between Germany and China in a Globalized World. Peter Lang, Frankfurt am Main (2017)
10. Habermas, J.: Communication and the Evolution of Society. Wiley, Hoboken (2015)
Food Safety Traceability Technology Based on Block Chain

Miao Hao(1,2), Heng Tao(1)(&), Wei Huang(1,2), Chengmei Zhang(1,2), and Bing Yang(2)

1 Guizhou Provincial Key Laboratory of Public Big Data, Guiyang 550001, Guizhou, China
[email protected]
2 Guizhou Academy of Sciences, Guiyang 550001, Guizhou, China
Abstract. With the emergence and widespread application of intelligent technologies such as the Internet of Things and big data, many researchers have begun to combine emerging technologies with food traceability systems, innovating in data query and reading platforms or in related encryption technologies. However, in traditional food traceability systems, data management information is still scattered, difficult to share, and difficult to trace. The purpose of this article is to study food safety traceability technology based on blockchain. On the theoretical basis of blockchain technology, this paper proposes a traceability scheme that combines reverse search with a recursive algorithm, and then designs and implements a food safety traceability system. The experimental results show that the proposed scheme is more efficient than traditional traceability models, and the performance tests show that the system meets the demand. In tests of the tracing rates of different schemes, the tracing time of the scheme proposed in this paper remains stable at about 1000 ms.

Keywords: Blockchain technology · Food safety traceability · Data security · Encryption technology
1 Introduction

With the rapid development of the global economy, people's lifestyles have changed, living standards have continuously improved, and consumer demand for green, pollution-free food keeps increasing. Consumers not only pursue the taste and nutritional value of food, but also attach more importance to food safety [1, 2]. In recent years, however, repeated news reports of food safety problems have caused panic among consumers [3]. Although many food traceability methods currently exist, their implementation efficiency is not high, they cannot be widely applied, and their effect is far from ideal [4]. The centralized traceability methods commonly used at present also have many drawbacks, such as information that cannot be shared among enterprises, scarce traceable information, opaque
© The Editor(s) (if applicable) and The Author(s), under exclusive license to Springer Nature Singapore Pte Ltd. 2021 M. Atiquzzaman et al. (Eds.): BDCPS 2020, AISC 1303, pp. 1253–1259, 2021. https://doi.org/10.1007/978-981-33-4572-0_180
data, data held by core enterprises that can easily be tampered with, and a lack of unified management and planning across enterprises' traceability methods [5, 6].

Blockchain has technical characteristics such as decentralized data, multi-party consensus, tamper resistance, traceability and a distributed ledger. It can largely solve the problem of data trust, endorse information from the data source, and effectively reduce manual data-operation errors or tampering [7, 8]. It enables all participants to jointly maintain the validity of the data, and consumers can check the commodity source information recorded on the blockchain in the corresponding form, thus solving the trust problem of traditional traceability approaches [9]. Blockchain data are weakly decentralized (or weakly centralized); each participant can, within the scope of its authorization, check its related information in the supply chain. This breaks down information islands and speeds up information exchange among supply chain members, which, from the perspective of data management and use, helps to improve the efficiency of the whole supply chain [10].

Aiming at the high cost and low efficiency of the traditional tracing mode, and building on the theory of blockchain technology, this paper proposes a tracing scheme combining reverse search with a recursive algorithm, and designs and implements a food safety traceability system. The experimental results show that the proposed scheme is more efficient than the traditional one, and the performance tests show that the system meets the requirements.
2 Design of Food Safety Traceability Scheme Based on Block Chain Technology 2.1
Block Chain Technology
(1) Asymmetric encryption algorithm
In asymmetric encryption, user 1 encrypts the plaintext with user 2's public key; after receiving the ciphertext, user 2 decrypts the information with its private key to obtain the plaintext, which ensures the security of data transmission. The process of signing and verifying data using an asymmetric encryption algorithm can prove the identity of both parties to a transaction and ensure the correctness and security of information transmission.
(2) Consensus algorithm
The Proof of Stake (PoS) algorithm is an improved consensus algorithm that largely avoids the waste of electricity and computing resources seen in PoW. The block-generation difficulty condition of Proof of Stake can be expressed by formula (1):

H(B, t) ≤ Bal × T    (1)
Among them, Bal represents the number of nodes’ tokens, T represents the difficulty target, and H represents the difficulty of block generation.
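A toy Python sketch of this stake-weighted block-generation condition H(B, t) ≤ Bal × T (illustrative only; the hashing scheme, balances and target value are invented stand-ins, not a real chain's parameters):

```python
import hashlib

def block_hash(block_header: bytes, timestamp: int) -> int:
    """H(B, t): hash of the candidate block header and timestamp."""
    digest = hashlib.sha256(
        block_header + timestamp.to_bytes(8, "big")).digest()
    return int.from_bytes(digest, "big")

def may_forge(block_header: bytes, timestamp: int,
              balance: int, target: int) -> bool:
    """Formula (1): a node may generate the block when
    H(B, t) <= Bal * T, so larger stakes meet the target more easily."""
    return block_hash(block_header, timestamp) <= balance * target
```

With a fixed target T, a node holding more tokens satisfies the inequality for more candidate blocks, which is exactly how PoS replaces PoW's hash-power race with a stake-weighted lottery.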
Food Safety Traceability Technology Based on Block Chain
1255
(3) Smart contract
Smart contracts are computer programs that process data either actively or passively, and they are at the heart of blockchain technology. After being signed by the nodes in the system, a smart contract is attached to the blockchain in the form of a program and transmitted to the other nodes through the P2P network. After verification, the nodes record the smart contract in a specific block of the blockchain. 2.2
Traceability System Based on Reverse Recursion
When the system searches the blockchain from near to far and matches the first piece of trace information, it obtains the food traceability information of only a single link, not the complete trace. Therefore, the "previous transaction address" stored with each record is used recursively: the search recurses until no earlier link exists, that is, until the food's flow information is exhausted; the results returned from each layer are then concatenated into the complete traceability information, passed back to the tracing module, and returned by it to the display module. The recursion is as follows:

recursion(address_now) = recursion(address_prev) + " " + toDate(timestamp) + "=" + ascii2native(userAddress)    (2)
Among them, recursion() is the recursive algorithm, whose incoming parameter is a transaction address; address_now is the address of the current transaction; address_prev is the address of the previous transaction; toDate() converts the timestamp to a normal time format; and ascii2native() converts ASCII codes into the local encoding. The result display module then displays the traceability results.
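A hedged Python sketch of this reverse recursion, using a plain dict as a stand-in for the on-chain transaction store (the addresses and the field names "prev", "timestamp" and "user" are invented for illustration):

```python
from datetime import datetime, timezone

# Stand-in for transactions stored on chain, keyed by address;
# each record keeps the address of the previous transaction.
LEDGER = {
    "tx3": {"prev": "tx2", "timestamp": 1600000200, "user": "retailer"},
    "tx2": {"prev": "tx1", "timestamp": 1600000100, "user": "distributor"},
    "tx1": {"prev": None,  "timestamp": 1600000000, "user": "farm"},
}

def to_date(ts: int) -> str:
    """toDate(): timestamp to a readable time format."""
    return datetime.fromtimestamp(ts, tz=timezone.utc).strftime(
        "%Y-%m-%dT%H:%M:%S")

def trace(address_now: str) -> str:
    """Formula (2): recurse over 'previous transaction address' links
    until no earlier link exists, concatenating one entry per layer."""
    tx = LEDGER[address_now]
    entry = f"{to_date(tx['timestamp'])}={tx['user']}"
    if tx["prev"] is None:          # first link of the supply chain
        return entry
    return trace(tx["prev"]) + " " + entry
```

Starting from the most recently matched transaction, the recursion bottoms out at the first supply-chain link, so the assembled string lists the links in production order.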
3 Experimental Design of Food Traceability System 3.1
Data Collection
The experiment uses Geth to simulate and build three blockchains for comparison. Simulated transaction data are added to the blockchain gradually, 200 records per experiment, until 9800 records are reached. In the simulated data, the previous transaction of the same food item is set to be about 200 transactions away from the current one, and the supply chain of any one food item contains no more than 10 links. 3.2
Experimental Environment
The system development environment of this paper is shown in Table 1.
Table 1. System development environment

Development environment: Parameter
Blockchain development client: Geth
Blockchain development kit: Web3
Auxiliary database: MySQL
System framework: Express
Operating system: Windows 10
CPU: Core i7
RAM: 8 GB

3.3
System Architecture
The display layer mainly includes the function selection page, login/registration page, tracing page and link page. The business logic layer mainly includes userRoute, which provides the user management function; traceRoute, which provides the food tracing function; and insertRoute, which provides the data link function. The network layer & data layer mainly includes MySQL, which provides access to the addresses of food transactions recorded on the blockchain, and the Ethereum blockchain network, which provides storage and search of the food transaction data.
4 Traceability System Validation Results Analysis 4.1
Analysis and Discussion of Experimental Results
(1) Analysis of comparison results of tracing rate
Traceability rates are easy to measure on local nodes because the functions that perform these operations are explicit in the code. The traceability processing time is equal to the time required to update the state database. The comparison results of the tracing rate are shown in Table 2 and Fig. 1.

Table 2. Comparison results of tracing rate (tracing time, ms)

Trading node:                                    200    1800   3400   5000   6600   8200   9800
Existing retrospective model:                    2000   5000   6000   10000  12000  17000  20000
Traceability system based on reverse recursion:  1020   1050   1500   980    900    950    930
Fig. 1. Comparison of traceability results (tracing time in ms vs. number of trading nodes for the existing retrospective model and the traceability system based on reverse recursion)
As can be seen from the comparison of tracing rates in Fig. 1, as the number of transactions in the blockchain increases, the tracing time of the existing scheme also increases significantly, so its tracing rate decreases. In contrast, the food safety traceability scheme proposed in this paper keeps the tracing time basically stable at around 1000 ms; the proposed scheme cannot return a result directly once the most recent record is found, since it must also complete the recursive process.
(2) System performance test
This paper uses LoadRunner to test the performance of the system. By creating virtual users, it monitors the system in real time under a high-concurrency, realistic load, and the test report is analysed to find and optimize system performance problems. The test proceeds as follows. The LoadRunner Virtual User Generator is opened to create an automatic performance test script, the URL address value is set to the address of this system, and during recording the operations are divided into corresponding actions. The LoadRunner Controller is then used to create a realistic load environment and to configure the global run parameters: Start Vusers sets the total number of users to be loaded to 20, executed twice every 15 s; the Duration parameter is the length of time the created virtual users continue to run in the system; the Stop Vusers parameter is the number of users to stop within the set time. If a user fails to run or does not run, it is displayed as Down; if all users run successfully, they are displayed as Passed. The system performance test results are shown in Fig. 2. As can be seen from Fig. 2, through the above operations, 200 virtual users were created, the total number of logins within 10 min was 197, the login success rate was 99.92%, the average response time of the system was 2.02 s, and
Fig. 2. System performance test results (actual vs. expected values for response time (s), login success rate (%), business success rate (%), total system logins, CPU usage (%) and memory usage (%))
the utilization rates of CPU and memory were below their target values. It can be seen that the system designed in this paper meets the expected performance goals. 4.2
Suggestions to Promote Optimization of Food Safety Traceability System Based on Block Chain Technology
(1) Improve performance
At present, the TPS of mainstream blockchain technology platforms only reaches the hundreds, which means that applications built on existing blockchain platforms are bounded by the performance of the platform itself. Applying them to production environments with heavy concurrency requires more work on performance. The current traceability system is therefore limited by the blockchain platform, and some obstacles remain in actual use. Only by further optimizing the performance of the blockchain technology platform can traceability systems and other blockchain applications realize their full potential.
(2) Strengthen the authenticity guarantee for data entry
The traceability scheme designed in this paper can only ensure that any modification violating the design principles and requirements after data enters the system is detected by the designed data integrity verification mechanism. No further mechanism has been designed to guard against false data provided at the data source. Therefore, future work should pay more attention to ensuring the complete authenticity of data before and at the point of entry, so as to prevent fraud. The realization of the authenticity of the input data
from the source, together with absolutely true data entry, will bring a higher level of data security to the whole traceability process.
5 Conclusions

Blockchain technology is naturally suited to traceability applications thanks to its distributed, decentralized, trustless and tamper-proof handling of data. At present, many companies and research institutes at home and abroad are carrying out research in this field. Judging by the development routes of earlier new technologies, it is foreseeable that mature food safety traceability systems based on blockchain, along with other applications exploiting blockchain's advantages, will inevitably appear in the near future. The work in this paper was done at an early stage of blockchain development and is subject to the technical characteristics of current blockchain platforms, so there is still much room for improvement in performance. In addition, the combination of blockchain technology and food traceability designed in this paper is not the only way to apply blockchain in this field. Different traceability scenarios and production processes will differ subtly, and different combination methods and schemes can bring different traceability effects.

Acknowledgements. This work was supported by the Major Scientific and Technological Special Project of Guizhou Province (20183002) and the Project of Guizhou Academy of Sciences Zi (201903).
References

1. Hu, Y.: Current status and future development proposal for Chinese agricultural product quality and safety traceability. Strateg. Study Chin. Acad. Eng. 20(2), 57–62 (2018)
2. Zhang, A., Mankad, A., Ariyawardana, A.: Establishing confidence in food safety: is traceability a solution in consumers' eyes? J. Consum. Prot. Food Saf. 15(2), 99–107 (2020)
3. Hao, Z., Mao, D., Zhang, B., et al.: A novel visual analysis method of food safety risk traceability based on blockchain. Int. J. Environ. Res. Public Health 17(7), 2300 (2020)
4. Yinghua, S., Ningzhou, S., Dan, L., et al.: Evolutionary game and intelligent simulation of food safety information disclosure oriented to traceability system. J. Intell. Fuzzy Syst. 35(3), 2657–2665 (2018)
5. Crews, J.: Food safety fingerprinting. Meat Poult. 64(1), 80–88 (2018)
6. Iftekhar, A., Cui, X., Hassan, M., et al.: Application of blockchain and Internet of Things to ensure tamper-proof data availability for food safety. J. Food Qual. 2020(6), 14 (2020)
7. Hao, J.T., Sun, Y., Luo, H.: A safe and efficient storage scheme based on blockchain and IPFS for agricultural products tracking. J. Comput. 29(6), 158–167 (2018)
8. Demetrakakes, P.: Safety blockchain, mark Twain-style. Food Process. 79(12), 12 (2018)
9. Wu, M., Wang, K., Cai, X., et al.: A comprehensive survey of blockchain: from theory to IoT applications and beyond. IEEE Internet Things J. 6(5), 8114–8154 (2019)
10. Huffstutler, K.: Blockchain for the Beef chain. Natl. Provisioner 233(1), 84–87 (2019)
Research on Data Acquisition and Transmission Based on Remote Monitoring System of New Energy Vehicles Yuefeng Lei(&) and Xiufen Li School of Hyundai Auto, Rizhao Polytechnic, Rizhao 276826, Shandong, China [email protected]
Abstract. The electric vehicle is one of the typical products of new renewable energy technology, and it can be predicted that new energy vehicles will eventually replace fuel-powered vehicles. In order to speed up the maturation of new energy vehicles, this paper proposes a new concept of a data acquisition and transmission system based on a remote monitoring system for new energy vehicles. Drawing on outstanding achievements in the electrical field, a complete vehicle-condition data monitoring system is established, and performance optimization of new energy vehicles is realized in combination with automatic control technology. At the same time, a large number of effective parameters can be obtained with the support of the monitoring system. The analysis shows that this research can accelerate the development of new energy vehicle technology and industry.

Keywords: Sustainable development · New energy vehicles · Remote monitoring · Data acquisition and transmission
1 Introduction

In recent years, the burning of traditional fossil fuels (such as oil and coal) has caused serious ecological and environmental pollution [1], and the crisis of non-renewable energy is becoming more and more serious, so the exploration and development of green new energy with high conversion efficiency and low environmental pollution has become a worldwide research hot spot [2]. The research and application of new renewable clean energy can effectively solve the problems of serious energy consumption and environmental pollution. However, its rapid development also faces many problems and challenges. The question, then, is how to use existing mature technology to establish a complete system that supports the research, development and application of new clean energy [3], accelerating the transformation of the technology into mature products so that it can serve human society as soon as possible and enable environmentally friendly economic development [4]. Since the birth of new energy vehicles, the concept of a remote monitoring system has existed. As early as 2009, the Ministry of Industry and Information Technology of China stipulated that a proportion of new energy vehicles in the initial and early stages of development must carry out remote monitoring [5]. © The Editor(s) (if applicable) and The Author(s), under exclusive license to Springer Nature Singapore Pte Ltd. 2021 M. Atiquzzaman et al. (Eds.): BDCPS 2020, AISC 1303, pp. 1260–1265, 2021. https://doi.org/10.1007/978-981-33-4572-0_181
In 2011, four ministries and commissions jointly placed further emphasis on the remote monitoring of new energy vehicles [6]. At the same time, countries have increased research on renewable energy based on the concept of sustainable development [7]. The recent rise of electric vehicles is a typical example of new energy applications, and the application of new technologies must go through a process of continuous improvement from R&D to application [8, 9]. If we take the new energy vehicle as the application object and establish a system that detects parameters and automatically optimizes and adjusts them through feedback, it will greatly help the development of new energy vehicles [10].
2 Characteristics of Remote Monitoring System and Data Fusion Algorithm 2.1
System Characteristics
Reliability is the ability of a system or product to complete the specified functions under specified conditions and within a specified time. It is usually described by quantitative indicators, including reliability, failure rate and mean time between failures (MTBF). For repairable systems, MTBF is usually expressed in hours and depends on the structure of the system and the reliability of its hardware and software. The following formula is commonly used:

MTBF = T(t) / r    (1)

where T(t) is the working time of the system (h) and r is the cumulative number of failures during system operation. For the remote monitoring system of new energy vehicles, the MTBF is required to be no less than 6 × 10³ h. Real-time capability is the ability of the system to respond within a limited time. For this system, it mainly refers to the response time of data transmission, which is the sum of the data acquisition time, the data transmission time, and the data processing times of the sender and receiver. For the new energy vehicle remote monitoring system, the response time of remote control should be less than 5 s, the response time of remote query should be less than 15 s, and the response time under abnormal conditions should be less than 30 s. 2.2
Data Fusion Algorithm
The algorithms used in multi-sensor data fusion mainly include classical inference and statistical methods, Bayesian estimation, cluster analysis, arithmetic-mean and recursive fusion estimation, adaptive weighted fusion estimation, D-S evidential reasoning, wavelet transform, entropy estimation, fuzzy set theory and artificial neural networks. Different fusion algorithms have their own advantages and disadvantages and suit different application backgrounds and fusion levels. The algorithm is described as follows:
1262
Y. Lei and X. Li
Calculate the mean and standard deviation of the n parameters of the same kind:

x̄0 = (1/n) Σ_{i=1..n} x_i    (2)

σ0 = sqrt( (1/(n-1)) Σ_{i=1..n} (x_i - x̄0)² )    (3)

Each measurement is then weighted according to its deviation from the mean, and the weights are normalized:

λ_i = 1 / (1 + |x_i - x̄0|)    (4)

ω_i = λ_i / Σ_{i=1..n} λ_i    (5)

Result of the fusion algorithm:

x̂ = Σ_{i=1..n} x_i ω_i    (6)
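A small Python implementation of formulas (2)–(6), as one plausible reading of the adaptive weighted fusion step (symbol names follow the formulas above; the sample values in the usage line are invented):

```python
import math

def adaptive_weighted_fusion(xs):
    """Fuse n same-kind sensor readings: measurements far from the
    sample mean receive smaller weights, per formulas (2)-(6)."""
    n = len(xs)
    mean = sum(xs) / n                                             # (2)
    sigma = math.sqrt(sum((x - mean) ** 2 for x in xs) / (n - 1))  # (3)
    lambdas = [1.0 / (1.0 + abs(x - mean)) for x in xs]            # (4)
    total = sum(lambdas)
    weights = [lam / total for lam in lambdas]                     # (5)
    fused = sum(x * w for x, w in zip(xs, weights))                # (6)
    return fused, sigma

# An outlier reading (9.0) is down-weighted relative to a plain average.
fused, sigma = adaptive_weighted_fusion([5.0, 5.1, 4.9, 9.0])
```

The down-weighting of outliers is what makes this estimate more robust than a plain arithmetic mean when one sensor channel misbehaves.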
3 Existing Transmission Modes and Their Advantages and Disadvantages

There are many kinds of wireless data transmission systems and methods. The transmission systems were tested using the relevant instructions and data sets, the relevant technical parameter manuals and materials were consulted, and, combined with an investigation and analysis of the corresponding system costs, the data transmission system best suited to the remote monitoring system of new energy vehicles was selected. Table 1 lists the advantages and disadvantages of the four communication transmission modes, obtained from the relevant technical manuals. It can be seen from Table 1 that the transmission distance of the radio station is too short for it to serve as the transmission system, and GSM is not suitable as the transmission system because of its poor reliability.

Table 1. Advantages and disadvantages of four communication modes

Radio station: transmission distance of 100 km, so long-distance transmission cannot be realized
GSM: realizes global communication, but cannot send and receive data in both directions at the same time, so the delay is large
GPRS: realizes global communication with good stability
CDMA: realizes global communication, with communication performance better than GSM/GPRS
4 System Design 4.1
Analysis Result

Fig. 1. Cost of different data transmission systems (radio station: 2000; GSM: 600; GPRS: 600; wireless trunking mobile communication: 6000; CDMA: 1200)
As can be seen from Fig. 1, the cost of the wireless trunking mobile communication system is far higher than that of the other systems; the cost of a GSM/GPRS communication module is relatively low; and the cost of the radio and CDMA systems is about 1–2 times higher than that of GSM/GPRS. On cost grounds, the candidate transmission systems are narrowed to radio, GSM, GPRS and CDMA. The above analysis has already shown that radio and GSM are unsuitable as the transmission system. GPRS and CDMA are therefore compared, as shown in Fig. 2.
Fig. 2. Signal relative coverage of GPRS (55%) and CDMA (45%)
As can be seen from Fig. 2, in terms of signal coverage, GPRS is slightly ahead of CDMA, while the cost of a CDMA module is about twice that of a GPRS module. Therefore, considering cost, transmission distance, stability, signal coverage and other factors, GPRS is selected as the data transmission channel of the system. 4.2
System Workflow
The working process of the system is described as follows: the on-board system is responsible for collecting the running parameters and positioning information of the vehicle in real time, and then carries out two-way data exchange with the data server through GPRS network. The server provides data processing, analysis, storage and other services. Users can access the data server and carry out various operations through Ethernet. 4.3
System Architecture
(1) Physical layer: Taking the actual vehicle as the object, different sensor data collection devices are designed for physical parameters such as battery voltage, line current, battery temperature, tire pressure, vehicle speed and braking. Inside the automotive electrical control system there is a very mature technology for data communication between the various subsystems: CAN bus communication. Through continuous improvement since its introduction, CAN has become the information exchange technology with the best transfer capability between the electronic control systems and sensors of a vehicle. Therefore, CAN bus communication is used to carry the data collected by the sensors to the vehicle terminal for information storage.
(2) Network transport layer: Based on the above analysis, a GPRS module is chosen to realize data communication and exchange between the vehicle terminal and the server. Considering the transmission distance and real-time monitoring requirements between electric vehicles and the monitoring center, wireless communication products, namely GPRS communication modules, are used to connect seamlessly with the monitoring center network over the existing communication network and global transmission protocols such as TCP/IP and UDP. When the data acquisition hardware obtains the vehicle data, the data are sent to the data server through a GPRS module with an embedded TCP/IP stack. Users can obtain the required data by logging in to the Internet through the monitoring software.
(3) Application layer: The data analysis and processing capability of the vehicle terminal is very limited. Following the Internet application concept, it is necessary to establish a server group with powerful computing capability and sufficient capacity as the core
component of the application layer. Relying on the powerful computing capability of the server group, the data uploaded from the vehicle terminal can be further processed and analyzed.
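As an illustrative sketch only (the field names, host and port are invented), the terminal-to-server exchange described above might look like this in Python, with the GPRS link abstracted as an ordinary TCP socket:

```python
import json
import socket

def pack_frame(sensor_readings: dict) -> bytes:
    """Serialize one batch of CAN-collected readings as a
    newline-delimited JSON frame for upload to the data server."""
    return (json.dumps(sensor_readings, sort_keys=True) + "\n").encode("utf-8")

def upload(frame: bytes, host: str, port: int) -> None:
    """Send one frame over the TCP connection carried by GPRS."""
    with socket.create_connection((host, port), timeout=5) as sock:
        sock.sendall(frame)

readings = {"battery_v": 380.5, "line_current_a": 12.3,
            "battery_temp_c": 31.0, "speed_kmh": 64.0}
frame = pack_frame(readings)
# upload(frame, "data.example.com", 9000)  # requires a live data server
```

Newline-delimited frames keep the server-side parsing trivial; in the real system the server group would deserialize each frame, store it, and expose it to the monitoring software over Ethernet.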
5 Conclusions

Against the promising application prospects of new energy technology, this paper proposes a data acquisition and transmission system based on the remote monitoring system of new energy vehicles, which can help new energy vehicle technology mature rapidly and ensure traffic safety. In this system, the vehicle terminal realizes data acquisition, simple processing and automatic optimization of the vehicle condition according to the feedback data from the server. The system exchanges data with the server through the GPRS network, and the server is the complex data processing unit of the whole system.
Computer Network Security Based on GABP Neural Network Algorithm

Haijun Huang

School of Intelligent Systems Science and Engineering, Yunnan Technology and Business University, Kunming 657100, Yunnan, China
[email protected]

Abstract. The wide application of computer networks in various fields has brought opportunities for social development and production technology. At the same time, computer network security has become an important part of the construction of the information society and receives more and more attention from the whole society. Existing computer network security evaluation methods have many limitations, such as poor usability, a limited application scope and susceptibility to interference. Based on the neural network method, this paper studies computer network security evaluation. By analyzing the principle of computer network security assessment and the BP neural network learning algorithm, a GA-BP computer network security evaluation model is established by combining the genetic algorithm with the BP algorithm. In this paper, the number of input nodes of the four-layer neural network is set to 268 and the number of output nodes to 4. The results show that the average error of the GA-BP network is obviously smaller than that of the BP network, and the running time of the GA-BP network is shorter. It overcomes the shortcomings of the slow convergence speed and long training time of the BP network, and effectively improves the accuracy and efficiency of computer network security assessment.

Keywords: GA-BP neural network · Genetic algorithm · Computer network security
1 Introduction

In the Internet age [1, 2], the explosive development of computer technology and the progress of related technologies bring us convenience [3, 4]. At the same time, due to the complex relationships between attacks, traditional security assessment methods can no longer evaluate them effectively, and the accuracy of the final evaluation results is very low. At present, there is no complete and efficient computer network security evaluation system, and the security management of all kinds of information data is not in place. Therefore, in today's very complex and changeable network environment, we must immediately establish an excellent network security evaluation system and, on this basis, complete the analysis and prediction of future network security problems. At present, the evaluation and prediction of network security is just beginning. We should understand security risk assessment, understand the development law of current network security incidents, and formulate effective security assessment management methods

© The Editor(s) (if applicable) and The Author(s), under exclusive license to Springer Nature Singapore Pte Ltd. 2021. M. Atiquzzaman et al. (Eds.): BDCPS 2020, AISC 1303, pp. 1266–1272, 2021. https://doi.org/10.1007/978-981-33-4572-0_182
according to the actual development of the network, and on this basis carry out comprehensive and efficient risk analysis and prediction, do a good job of prevention, and even predict the signs of risk. At present, the most commonly used network security assessment method is the risk assessment method [7]. Special software is used for security evaluation; it can detect the loopholes in the network so as to evaluate and even predict the security of the whole system. Network security is a very complex system problem involving many disciplines and fields. Therefore, the existing safety assessment methods have problems such as poor operability, a small scope of application and interference from human factors. In order to obtain a better method of computer network security evaluation, this paper studies the improvement of the BP neural network by GA [8, 9], designs a GABP neural network model [10] to evaluate network security based on the evaluation principle of computer network security, compares it with the common BP network model, and finds that the improved GABP neural network model is a computer network security evaluation method with obvious advantages.
2 Application of Neural Network in Computer Network Security

2.1 Network Security
Network security is not a simple engineering problem; it is a complex system problem integrating multiple disciplines, which requires research on basic network technology, network management and even law. Traditional network security research mostly focuses on a specific problem. For example, virus suppression basically depends on code analysis: antivirus software is updated only after a virus emerges, and the prevention of unknown viruses is almost blank. Intrusion and attack detection likewise implements strategies for specific systems; this has achieved certain results, but it cannot fundamentally solve the problem and remains a state of passive defense. In view of this, some scholars began to propose a transformation of the network security model.

2.2 BP Neural Network Algorithm
The BP neural network algorithm is a neural network composed of several interconnected neural units. It mimics the biological nervous system's response to things in real life. When using this neural network model, we need to train the network with a large amount of data; after training, the algorithm can be used to solve practical problems. The common training method of the BP neural network algorithm is as follows. First, the parameters of the BP neural network model are set before learning to ensure that it is in its original state. This setting is random. There are different layers in the network, and the connection relations between these layers are represented by corresponding parameters. These parameter values are randomly assigned
1268
H. Huang
in the range of [−1, 1]. After that, some necessary parameters should be set, including the error function, the maximum learning time and the learning accuracy. The k-th input sample and the corresponding expected output are selected randomly:

$$d_o(k) = (d_1(k), d_2(k), \ldots, d_q(k))$$  (1)

$$x(k) = (x_1(k), x_2(k), \ldots, x_n(k))$$  (2)

According to the calculation results of the above formulas, the global error is calculated:

$$E = \frac{1}{2m}\sum_{k=1}^{m}\sum_{o=1}^{q}\left(d_o(k) - y_o(k)\right)^2$$  (3)
Verify the error and judge whether it meets the planned requirements. When the error is reduced to the planned range, or the amount of calculation has reached the specified value, the algorithm can be stopped. Otherwise, another training round is started.
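The stopping test above hinges on the global error of Eq. (3). A minimal Python sketch with toy numbers (not taken from the paper) computes it directly:

```python
def global_error(desired, actual):
    """Global error E of Eq. (3): desired/actual are m x q lists of
    expected and actual output values."""
    m = len(desired)
    total = sum((d - y) ** 2
                for d_row, y_row in zip(desired, actual)
                for d, y in zip(d_row, y_row))
    return total / (2 * m)

desired = [[1.0, 0.0], [0.0, 1.0], [1.0, 1.0]]   # m = 3 samples, q = 2 outputs
actual  = [[0.9, 0.1], [0.2, 0.8], [1.0, 0.5]]
E = global_error(desired, actual)
# Training stops once E falls below the planned accuracy (or the iteration
# budget is exhausted); otherwise another round is started.
```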
3 Experimental Correlation Analysis

3.1 Experimental Background
Network security can be scanned and evaluated by some common software tools. However, for complex computer network applications, it is necessary to find a security evaluation method with strong operability, a wide application range and little human interference, so as to meet the needs of the rapid development of computer network security control. Neural networks have this potential. Neural network technology is now mature, and its application field keeps expanding. Neural networks have the advantages of strong adaptability, fault tolerance, simulation capability and robustness, which makes them more and more widely used in computer network security assessment, with good results. However, the traditional BP algorithm still has some limitations, which lead to insufficient calculation accuracy and excessive calculation time.

3.2 Experimental Design
This paper combines the genetic algorithm (GA) with the BP algorithm to form the GA-BP algorithm, overcoming the above shortcomings of the BP network. The GA-BP algorithm can carry out a global search and avoid premature convergence. The genetic algorithm can converge to the global optimal solution and has strong robustness. The combination of neural network and genetic algorithm gives full play to the nonlinear mapping ability of the neural network, with fast convergence speed and strong learning ability. The computer network security evaluation model adopts a four-layer neural network; the number of input nodes is set to 268, the number of output nodes is set to 4, and the average error and
running time are used as the evaluation criteria, and the effects of the GA-BP network and the BP neural network are compared. The experimental results are shown in Table 1.

Table 1. Experimental results

Evaluation index  GA-BP  BP
Average error     0.008  0.025
Running time      5.6 s  12.1 s
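To make the GA-BP combination concrete, here is a small, self-contained sketch of the genetic-algorithm side: elitist selection, one-point crossover and Gaussian mutation searching for the weights of a tiny 2-3-1 network on toy XOR data. This is illustrative only, not the paper's 268-input four-layer model or its data.

```python
import math
import random

random.seed(1)

DATA = [((0, 0), 0), ((0, 1), 1), ((1, 0), 1), ((1, 1), 0)]
N_W = 3 * 3 + 4          # 3 hidden units * (2 inputs + bias) + output (3 weights + bias)

def forward(w, x):
    """Tiny 2-3-1 network: tanh hidden layer, sigmoid output."""
    h = [math.tanh(w[3*i] * x[0] + w[3*i+1] * x[1] + w[3*i+2]) for i in range(3)]
    s = sum(w[9 + i] * h[i] for i in range(3)) + w[12]
    return 1 / (1 + math.exp(-s))

def error(w):
    """Mean squared error over the toy data set (the GA's fitness)."""
    return sum((forward(w, x) - t) ** 2 for x, t in DATA) / len(DATA)

def evolve(pop_size=40, gens=60):
    pop = [[random.uniform(-1, 1) for _ in range(N_W)] for _ in range(pop_size)]
    history = []
    for _ in range(gens):
        pop.sort(key=error)
        history.append(error(pop[0]))        # best error this generation
        elite = pop[:pop_size // 2]          # elitism: keep the better half
        children = []
        while len(children) < pop_size - len(elite):
            a, b = random.sample(elite, 2)
            cut = random.randrange(1, N_W)   # one-point crossover
            child = a[:cut] + b[cut:]
            if random.random() < 0.3:        # Gaussian mutation
                child[random.randrange(N_W)] += random.gauss(0, 0.5)
            children.append(child)
        pop = elite + children
    pop.sort(key=error)
    return pop[0], history

best, history = evolve()
```

In the full GA-BP scheme, the best individual found this way would serve as the initial weights for subsequent BP gradient training, which is what counteracts BP's slow convergence from a poor random start.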
4 Discussion

4.1 Network Security Evaluation and Overall Prediction
The comprehensive evaluation and prediction of network security is mainly based on the average model of historical increments. The comprehensive situation assessment of network security is shown in Fig. 1. The figure records the overall security trend of the network topology from March 8, 2020 to March 12, 2020; through the prediction of the model, the comprehensive network security situation on March 13, 2020 is also predicted. Studying the information in the figure, it is not difficult to find that the comprehensive situation value of the network remained at a relatively high position throughout this period, and the security of the whole network increased gradually.
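The text does not spell out the "average model of historical increments"; one minimal reading, sketched below with hypothetical situation values (the figure's actual numbers are not given), is to extrapolate the last observation by the mean daily increment:

```python
def predict_next(values):
    """Forecast the next value as the last observation plus the mean
    day-to-day increment of the history."""
    increments = [b - a for a, b in zip(values, values[1:])]
    return values[-1] + sum(increments) / len(increments)

# Hypothetical situation values for 8-12 March 2020.
history = [14.0, 15.5, 16.0, 17.2, 18.0]
forecast_13th = predict_next(history)     # value predicted for 13 March
```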
[Figure: line chart of the overall network security situation value (0–25) against the date, from 8/3/2020 to 13/3/2020]

Fig. 1. Network security comprehensive situation assessment chart
4.2 Comparison Between GABP Algorithm and Other Models
In addition to the neural network method, several commonly used security assessment models are also studied, such as the restricted Boltzmann machine, the deep feedforward network model, the convolutional neural network model and the long short-term memory neural network model. The results show that the evaluation accuracy of these models is lower than that of the evaluation model based on the genetic neural network. The specific results are shown in Fig. 2.
[Figure: bar chart of training-sample and test-sample accuracy for the RBM, DFF, CNN, LSTM and GABP models]

Fig. 2. Comparison of prediction structure between GABP model and other models
For the test samples, the overall prediction accuracy of the GABP model is 16.51% higher than that of the restricted Boltzmann machine, 12.09% higher than that of the deep feedforward network model, 9.92% higher than that of the convolutional neural network model, and 12.7% higher than that of the long short-term memory neural network model, indicating that the GABP model is better than the other four models. Through the above comparison, we can see that the network security evaluation model based on this algorithm is superior to the existing models in sample training, risk prediction, control and so on.

4.3 Review of Network Security Evaluation
Computer network security evaluation, also known as network risk assessment, is oriented to the whole network system, including the information transmitted by the network system itself as well as the various devices carrying the network system. Network risk assessment evaluates these objects so as to detect and eliminate the various factors that affect them adversely, because these factors may endanger the security of the network. The evaluation of network security is
the basis for implementing additional security controls and further reducing security risks. On the whole, the main principles and requirements of information security are strengthened and information security is fully deployed. Among them, information security risk assessment is one of the important foundations of information security work.

4.4 Advantages of GABP Neural Network Algorithm in Network Security Evaluation
The genetic algorithm is a very special optimization algorithm. The reason why the genetic algorithm is chosen to evaluate computer network security is mainly that it has the following advantages: (1) The genetic algorithm operates on codings obtained from the decision variables. This working mode is very different from that of traditional optimization algorithms, which take the values of the decision variables themselves as the object of calculation. Optimizing over codings of the decision variables is a bionic approach: it borrows concepts and principles from genetics in the life sciences, and the algorithm follows the mechanism of evolution, which is why it is called a genetic algorithm. (2) The search method of the genetic algorithm is very direct. It needs neither the derivative of the objective function nor other auxiliary parameters, which is a big difference from traditional optimization algorithms, because the genetic algorithm does not rely on these to determine the search direction. Besides the objective function value itself, the genetic algorithm only uses some transformed objective functions to determine its search direction and range. For many problems it is very difficult or impossible to obtain the derivatives of the objective function; for some optimization problems the derivative does not exist, and the same holds for combinatorial optimization problems. The genetic algorithm avoids this obstacle and makes calculation more convenient. In addition, the search efficiency of the genetic algorithm is very high, in large part because it can select the search range and only search ranges with good fitness. (3) The genetic algorithm can carry out retrieval at multiple points and in multiple directions simultaneously.
Unlike traditional algorithms, whose optimization calculation can only proceed from a single point, searching one by one over the whole search range, the multi-point search of the genetic algorithm is clearly more efficient. Moreover, this multi-point synchronous search sometimes even allows the genetic algorithm to escape local optima. The selection, crossover and mutation operations on the population produce a new generation of groups. These groups contain a great deal of population information, which makes it possible to avoid searching unnecessary points, equivalent to searching more points; this is the special implicit parallelism of the genetic algorithm. (4) The genetic algorithm is based on probability. This is very different from other optimization algorithms, whose modes and paths are fixed. A fixed retrieval mode cannot search all possible individuals, and this limitation restricts the application value of those algorithms. The operations of the genetic algorithm are highly variable: whether selecting, mutating or crossing, these actions are carried out probabilistically, which means its retrieval is not overly constrained. Although this
probability-based algorithm cannot guarantee that every element meets the conditions well, and some poor elements may even appear, after a round of evolution it can always evolve a very good set of elements. Long-term application shows that when the genetic algorithm can be applied to the final probability problem, the best answer to the problem to be solved can be obtained. From another point of view, the genetic algorithm has high stability, which enables it to maintain the reliability of its results under parameter changes, something other algorithms do not achieve. As an ideal tool to improve the network, the genetic algorithm can remedy the tendency of network security evaluation models to fall into local minima and converge slowly.
5 Conclusions

Aiming at the shortcomings of existing computer network security evaluation methods, this paper establishes a network security evaluation model based on the GABP neural network algorithm. Compared with the standard BP model and other neural network models, the evaluation accuracy and efficiency are significantly improved, and the overall security of the network is evaluated and predicted. The results show that the application of the GABP algorithm to computer network security evaluation can obtain good application results.
References

1. Fan, J.L., Wang, J.X., Li, F., et al.: Energy demand and greenhouse gas emissions of urban passenger transport in the Internet era: a case study of Beijing. J. Clean. Prod. 165, 177–189 (2017)
2. Greenhow, S., Hackett, S., Jones, C., et al.: Adoptive family experiences of post-adoption contact in an Internet era. Child Fam. Soc. Work 22(S1), 44–52 (2017)
3. Jacelon, C.S., Gibbs, M.A., Ridgway, J.V.: Computer technology for self-management: a scoping review. J. Clin. Nurs. 25(9–10), 1179 (2016)
4. David, B.: Computer technology and probable job destructions in Japan: an evaluation. J. Jpn. Int. Econ. 43, 77–87 (2017)
5. Li, Y., Hua, N., Song, Y., et al.: Fast lightpath hopping enabled by time synchronization for optical network security. IEEE Commun. Lett. 20(1), 101–104 (2016)
6. Liyanage, M., Abro, A.B., Ylianttila, M., et al.: Opportunities and challenges of software-defined mobile networks in network security. IEEE Secur. Priv. 14(4), 34–44 (2016)
7. Wang, W., Yang, C., Dong, C., et al.: The design and implementation of risk assessment model for hazard installations based on AHP–FCE method: a case study of Nansi Lake Basin. Ecol. Inform. 36, 162–171 (2016)
8. Yoshitomi, Y., Ikenoue, H., Takeba, T., et al.: Genetic algorithm in uncertain environments for solving stochastic programming problem. J. Oper. Res. Soc. Japan 43(2), 266–290 (2017)
9. Ma, L., Wang, B., Yan, S., et al.: Temperature error correction based on BP neural network in meteorological wireless sensor network. Int. J. Sens. Netw. 23(4), 265 (2016)
10. Liang, Y.J., Ren, C., Wang, H.Y., et al.: Research on soil moisture inversion method based on GA-BP neural network model. Int. J. Remote Sens. 40(5–6), 2087–2103 (2019)
An Optimization Method for Blockchain Electronic Transaction Queries Based on Indexing Technology

Liyong Wan1,2

1 College of Artificial Intelligence, Nanchang Institute of Science and Technology, Nanchang, China
[email protected]
2 School of Software, Jiangxi Normal University, Nanchang, China
Abstract. Due to the advantages of non-tampering, non-forgery and anonymity of blockchain, it is gradually being used in electronic trading systems. However, traditional blockchain electronic transaction systems query historical information inefficiently and cannot meet users' basic query needs. To solve this problem, we propose an index-based blockchain electronic transaction query scheme in this paper. We construct the index directory of the BKV (B-Key-Value) tree storage structure by modifying the storage of the B-tree, and based on the BKV tree we store the transaction order number and the corresponding block number in the form of key-value pairs, combining the characteristics of the blockchain system. At the same time, we also design the blockchain structure and query algorithm based on the BKV index directory. Theory and experiments show that the index-based blockchain electronic transaction query scheme can reduce the electronic transaction query time, effectively improving the efficiency of electronic transaction queries and the user's experience with the blockchain electronic trading system.

Keywords: Blockchain · Index technology · Query optimization · Electronic transaction
1 Introduction

The blockchain concept was first introduced by Satoshi Nakamoto in 2008 [1]. Blockchain technology is an encrypted distributed database with the characteristics of tamper resistance and weak centralization. The transaction data between users is stored in the blockchain in the form of ciphertext, which increases security and authenticity. However, current research on electronic transactions in blockchain finds that the chain-table structure of the blockchain reduces the efficiency of querying transactions and has a high time complexity [2, 3]. This makes it difficult to use blockchain technology to develop Internet systems and applications with practical application value. In order to solve the problem of low query efficiency caused by the blockchain structure, some experts have proposed a centralized database transfer solution.

© The Editor(s) (if applicable) and The Author(s), under exclusive license to Springer Nature Singapore Pte Ltd. 2021. M. Atiquzzaman et al. (Eds.): BDCPS 2020, AISC 1303, pp. 1273–1281, 2021. https://doi.org/10.1007/978-981-33-4572-0_183
Xu, Y., Zhao, S., Kong, L., et al. [4] proposed an educational certificate blockchain (ECBC) that supports low latency and high throughput and provides a method for accelerating queries. ECBC builds a tree structure (MPT-Chain) which can not only provide efficient queries for transactions but also support historical transaction queries for accounts; MPT-Chain shortens the update time of the account and can speed up block verification. Zhang, L., Qinwei, L.I., Qiu, L., et al. [5] studied the technical principles and application advantages of blockchain and proposed an MC + NSC model and a Hop-Trace application method for the problem of poverty alleviation. The model divides the data into three blockchains: a user chain, a commodity chain and an action chain. Under this model, two types of data storage are formed: document storage and database storage. A hierarchical storage and collaborative query method for tracing data, called Hop-Trace, is also formed to improve the query efficiency of system applications. Cai Weide [6–8] and others proposed the dual-chain model of an account blockchain (ABC) and a trading blockchain (TBC). Although this method achieves query optimization by improving the structure of the blockchain, the structure is too complicated, which brings difficulties to practical application and increases maintenance costs. From the current research, a number of experts use a centralized database dumping solution: the data in the blockchain is traversed and synchronized in real time to an intermediate database such as Oracle or MySQL [9]. This allows users to query the centralized database to get the desired data stored on the blockchain. However, this raises two problems: the risk of data tampering and low development efficiency [10, 11].
First, because a centralized database dump is used, criminals have an opportunity to tamper with the data by directly modifying the intermediate database, resulting in inconsistencies between the data in the blockchain and the data queried by the customer, which leads to further related security risks. Second, the addition of an intermediate database makes the overall architecture of the blockchain system more complex. In this paper, we mainly focus on the problem of the low efficiency of querying historical transaction information in blockchain electronic trading systems. We propose a BKV tree storage structure as an index by modifying the storage structure of the B-tree, and design the blockchain structure based on the index directory to build a blockchain electronic transaction query optimization scheme.
2 Construction of Index Directory

2.1 Construction Based on BKV Tree Index Directory
From the characteristics of the B-tree, we can see that the traditional B-tree structure can only find the corresponding keywords during a query, which cannot meet the needs of this research. The main work of this subsection is to design an indexing
structure which can find the block number in which a transaction is stored based on the transaction order number in the blockchain. We make some improvements to the keyword storage structure of the B-tree, changing nodes from storing multiple keywords to storing multiple key-value pairs; the result is called a BKV tree. Here, the key of the key-value pair is the transaction number in the blockchain, and the value is the number of the block in which the corresponding transaction is stored; e.g. (0x20, 2) represents a transaction with transaction number 0x20 in block 2. Key comparison in the improved BKV tree is based on the numerical value of the key. When the searched transaction number is found, the corresponding block number is obtained. The specific structure is shown in Fig. 1.
[Figure: BKV tree whose nodes store key-value pairs such as (0x10, 5), (0x20, 2), (0x60, 2), (0x70, 1) and (0x80, 1)]

Fig. 1. BKV tree structure
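To illustrate the contract of the BKV index (transaction number in, block number out), here is a simplified Python stand-in that keeps the key-value pairs of Fig. 1 in a sorted array with binary search instead of actual B-tree nodes; the lookup behaviour is the same even though the node structure is not.

```python
import bisect

class BKVIndex:
    """Simplified stand-in for the BKV tree: key = transaction number,
    value = number of the block in which the transaction is stored."""

    def __init__(self):
        self._keys = []      # transaction numbers, kept sorted numerically
        self._values = []    # block numbers, parallel to _keys

    def insert(self, tx_number, block_number):
        i = bisect.bisect_left(self._keys, tx_number)
        self._keys.insert(i, tx_number)
        self._values.insert(i, block_number)

    def lookup(self, tx_number):
        i = bisect.bisect_left(self._keys, tx_number)
        if i < len(self._keys) and self._keys[i] == tx_number:
            return self._values[i]
        return None          # transaction not indexed

index = BKVIndex()
for tx, block in [(0x10, 5), (0x20, 2), (0x60, 2), (0x70, 1), (0x80, 1)]:
    index.insert(tx, block)
```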
2.2 BuildBKV Storage Algorithm
This subsection gives the method of storing block numbers and transaction numbers in the BKV tree according to the index storage structure designed in this paper. The blockchain system is different from other systems in that not all transactions are reasonable and legal: the accounting node must verify each transaction and generate a block after the consensus mechanism passes. In this way it is confirmed that the transactions in the generated block are successful, and only transactions within a block are reasonable and legal. Therefore, unlike other systems, transactions cannot simply be stored one at a time as they arrive. Instead, the hash value of each newly generated transaction number and the number of the block in which it is stored are written into the BKV tree when the block is generated. The specific storage algorithm BuildBKV is shown in Algorithm 1.
Algorithm 1. BuildBKV ( )
Input: Electronic transaction information info;
Output: Transaction number and BKV index;
(1) BEGIN
(2) error = Check(info);
(3) if (!error)
(4) BuildTransTree( );
(5) else
(6) return info;
(7) m = CurrentBlock( );
(8) Blocknumber = n + 1;
(9) TrnsHash[ ] = QueryTransTrees( );
(10) BKV = CurrentBKV( );
(11) for (i = 0; i

thread processes the data packet in the protocol stack > notify user layer > user layer receives the data packet > network layer > logic layer > business layer. DPDK network-layer packet flow: hardware interrupt > interrupt abandoned > user layer receives the packet through device mapping > user-layer protocol stack > network layer > logic layer > business layer. Compared with traditional kernel-based network data processing, DPDK makes a great breakthrough in moving the network data flow from the kernel layer to the user layer [8]. The compilation, linking and loading methods of DPDK are the same as those of ordinary programs; a DPDK application is an ordinary user-mode process. Advantages: whereas a general-purpose processor achieves high performance through hardware-architecture-specific optimization, DPDK uses a general-purpose processor to achieve high performance through optimized, specialized low-level software. Compared with system optimization and algorithm optimization, implementation optimization is mentioned less often; DPDK embodies the concept of implementation optimization clearly, with developers who deeply understand the processor architecture and design around the cache. DPDK makes full use of the characteristics of the platform's CPU, chipset, PCIe and network card, uses multi-core parallel computing technology, and performs targeted optimization according to the characteristics of the network load, so as to bring out the maximum capability of a general-purpose platform in this special field. Experience shows that the finer-grained the critical sections, the less concurrent threads interfere with each other and the fewer the collisions.
Data are localized and locks avoided as far as possible, pursuing throughput that grows linearly with the number of cores. Disadvantages: the top-level design of DPDK makes full use of the features of the hardware architecture, but the specific and detailed design at the secondary module level is not perfect, and many places in the code implementation are not optimized. The DPDK scheduling models, run-to-completion (RTC) and pipeline, are coarse and need fine tuning in practice. DPDK provides a framework platform and library functions for high-performance message processing rather than the specific business processing of messages. The task of performance tuning in the project implementation of communication equipment is both concrete and arduous: improper handling of arbitrary code in the data plane may lead to performance degradation, so a thorough understanding of the essence of DPDK acceleration technology is a prerequisite. At the same time, DPDK development is difficult and the development cycle is long. At present, the throughput of DPDK is less than 10 Gbps, and the price of the SDK is expensive.
3 Summary of the PF_RING Framework

Cloud computing has four deployment models: public cloud, private cloud, community cloud and hybrid cloud, each with unique functions to meet different user requirements. The public cloud has the lowest cost, the private cloud has the highest security, the high flexibility of the hybrid cloud can meet ever-changing user needs, and the community cloud serves a more specific purpose. For colleges and universities, choosing a deployment method requires a comprehensive evaluation of the school's own development strategy, business needs and other factors [9]. PF_RING is a network socket technology based on the Linux kernel, invented by Luca Deri. It can improve
Application and Analysis of Three Common High-Performance Network
1293
the efficiency of data packet processing and provides auxiliary applications and patches. It can also capture packets quickly and carry out network traffic analysis and packet filtering, significantly improving the packet capture speed. PF_RING takes full advantage of the device polling mechanism to open a new channel that transfers packets from the network card to the user, thereby reducing kernel overhead. In short, it is a high-speed packet capture library through which a personal computer can be turned into a cheap and effective network measurement toolbox that analyzes and manipulates packets and traffic, and it supports a user-level API for building more efficient applications. PF_RING has its own high-speed packet capture library with a complete development interface, similar to libpcap but with better performance. PF_RING's core idea is to reduce the number of copies during transmission; how the data copy is optimized differs between technologies. PF_RING proposes a new packet capture socket model based on the polling mechanism: a new socket type, PF_RING, built on a ring buffer. Creating a PF_RING socket allocates a ring buffer, which is released when the socket is closed. A PF_RING socket is bound to one network card, and the binding lasts until the socket is closed. When a packet reaches the network card, it is put into the ring buffer; if the buffer is full, the packet is discarded. The user can access the data in this ring buffer directly, and when a new packet arrives it can directly overwrite the space of a packet that user space has already read. PF_RING ZC implements DNA (Direct NIC Access), a technique that maps the network card's memory and registers into user mode. The network processing unit of the NIC completes the DMA transfer without any additional packet replication, saving one data copy operation.
CPU cycles are spent only on processing packets, not on moving them off the network card, and no memory needs to be allocated and released, which improves CPU efficiency. PF_RING is a high-performance packet capture mechanism that provides local packet mirroring and analysis and enables network monitoring and auditing. A traditional intermediate network node can only parse packets layer by layer according to the protocol stack: the router is a layer-3 device, the switch a layer-2 device, and firewalls are divided into layer-2 and layer-3 firewalls. With PF_RING, packets can go directly from the NIC chip via DMA into host memory and be processed by the application rather than by the kernel protocol stack; the PF_RING mechanism thus subverts the way intermediate nodes interpret packets [10]. Compared with the serial protocol-stack solution, PF_RING is more efficient and flexible: on a multi-core processor, the information of each layer can be processed in parallel in user mode. Advantages: on Linux kernel 2.6.18 or above, PF_RING version 4.x can be applied directly without patching the kernel. With kernel-based packet capture and sampling, the PF_RING driver can accelerate packet capture and supports hardware packet filtering on 10 Gb commercial network adapters from vendors such as Accolade, Exablaze, Endace, Fiberblaze, Inventech, Mellanox, Myricom/CSPI, Napatech, Netcope and Intel (ZC). A network card supporting NAPI (e.g. an Intel card) is recommended for best performance; the library is independent of the device driver and provides libpcap support for seamless integration with pcap-based applications. Packet-header filters can be specified as BPF for content checking. Only packets that satisfy the payload filter can
1294
G. Zhu and W. Kang
pass through. Working in promiscuous mode, all packets on the network card are captured, and plug-ins can enhance packet parsing and content filtering. With CPU and network card binding, users interact directly with the receive and transmit rings of the network card. Maintaining the network card register data inside the kernel module is more stable, and user mode only needs to call an ioctl function to complete packet transmission and reception, which makes it easy to use. It can decouple the network card driver without hardware modification. Disadvantages: only one application can open a DMA ring at a time, while the network card has multiple RX/TX queues; running one application per queue yields multiple user-mode applications that must communicate with each other to distribute packets. PF_RING needs modified network card drivers, and multiple drivers must be maintained. The performance of the ring may not reach that of DPDK. DPDK discards irrelevant packets, whereas PF_RING transmission requires DNA technology, which carries a separate cost; without DNA, transmitted packets still pass through the protocol stack. The DNA mode saves one memory copy compared with plain PF_RING, and without DNA, PF_RING does not optimize the transmit path.
4 Summary of the Netmap Framework
Netmap is an I/O framework developed by Luigi Rizzo to send and receive raw packets efficiently. It is integrated into FreeBSD, can be compiled and used under Linux, and comprises kernel modules and a user-mode function library [11]. There is a packet pool in the kernel; when data arrives at the network card, a packet buffer is taken directly from the pool. The ring memory managed by the NIC driver is mapped into user space by the mmap technique [12]: the data is placed into the packet buffer and the buffer's descriptor is placed into the receive ring. The receive and transmit queues of the network card live in the memory-mapped area, which avoids a second copy between the kernel and user mode; the addresses in the ring refer to the receive buffers, and packets in the send and receive rings need not be allocated dynamically. The user program keeps packets in pre-allocated fixed buffers and finally, through netmap_if, obtains the receive and send rings (netmap_ring); the same code works for single-queue or multi-queue network cards, processes packets in batches for both reception and transmission, and reduces the system calls and dynamic kernel allocations of the original path [13]. Advantages: a high-performance system can be built with one ring and one process per CPU core to achieve parallelism. It does not modify existing operating system software and needs no special hardware support. Packets bypass the operating system kernel, and the user-space program exchanges packets directly with the network card. The packet transmission rate can reach 14.88 Mpps (10 GbE line rate with minimum-size frames), and the reception rate is similar [14]. It supports multi-queue network cards and realizes high-performance packet transfer between user mode and the network card.
Disadvantages: netmap needs driver support approved by the network card manufacturer, and its scheme amounts to multiple system calls to implement user-space reception and transmission. Its functionality is too primitive, and it still relies on the interrupt notification mechanism.
No mature network development framework has formed around it, and its technical bottlenecks have not been completely solved [15]. CPU affinity support is weak and timing across cores jitters; hardware memory barriers make program execution timing inaccurate; and the CPU frequency must be fixed, with down-clocking and Intel Turbo Boost disabled, because frequency changes introduce inaccuracy.
5 Conclusion and Expectation
Comparing DPDK, PF_RING and netmap, DPDK has the best performance: it needs no multi-process cooperation, achieves zero-copy batch processing entirely in user mode under a multi-queue, multi-core framework, follows the open-source BSD license, has joined the Linux Foundation's projects, has many developers and is the most widely used. PF_RING ranks second in performance and can also batch process under a multi-queue, multi-core network card framework; under its EULA license, a license must be applied for per port or MAC address, and its security is poorer. Netmap's performance ranks third; it can only batch process network card queues, and its security is higher than PF_RING's. How far a device's performance can be improved, how much room for optimization remains, and whether it is worth spending more time and money on further study are questions whose answers are hard to find. We can only approach the optimal solution gradually through spiral verification: analysis, conjecture, prototyping, running measurements, and re-analysis. Practice is the touchstone and the light on the road ahead; only by quantitative calculation grounded in theory can we define and reach the optimization target.
References
1. Pak, J.: A high-performance implementation of an IoT system using DPDK. 8(4) (2018). https://doi.org/10.3390/app8040550
2. Begin, T., Baynat, B., Gallardo, G.A., et al.: An accurate and efficient modeling framework for the performance evaluation of DPDK-based virtual switches. IEEE Trans. Netw. Serv. Manage. 15(4), 1407–1421 (2018)
3. Halfhill, T.R.: Broadwell accelerates the DPDK. Microprocess. Rep. 30(7), 19–22 (2016)
4. Moharir, M., Johar, D., Bhardwaj, D.: An experimental review on Intel DPDK L2 forwarding. Int. J. Appl. Eng. Res. 12(18 Pt. 5), 7833–7837 (2017)
5. Li, G., Zhang, D., Li, Y., et al.: Toward energy-efficiency optimization of Pktgen-DPDK for green network testbeds. China Commun. 15(11), 199–207 (2018)
6. Alizadeh, R., Belanger, N., Savaria, Y., et al.: DPDK and MKL: enabling technologies for near deterministic cloud-based signal processing. In: 2015 IEEE 13th International New Circuits and Systems Conference (NEWCAS), Grenoble, France, 7–10 June 2015, pp. 1–4 (2015)
7. Gallenmuller, S., Emmerich, P., Wohlfart, F., et al.: Comparison of frameworks for high-performance packet IO. In: 2015 ACM/IEEE Symposium on Architectures for Networking and Communications Systems (ANCS), Oakland, CA, USA, 7–8 May 2015, pp. 29–38 (2015)
8. Stoev, S.A., Michailidis, G., Bhattacharya, S., et al.: AMON: an open source architecture for online monitoring, statistical analysis, and forensics of multi-gigabit streams. IEEE J. Sel. Areas Commun. 34(6), 1834–1848 (2016)
9. Du, J., Liu, P.: Design and implementation of efficient one-way isolation system based on PF_RING. In: 2012 Fourth International Conference on Multimedia Information Networking and Security, vol. 1, pp. 105–108 (2012)
10. Zabala, L., Pineda, A., Ferro, A., et al.: Comparing network traffic probes based on commodity hardware. In: The Thirteenth International Conference on Networks (ICN 2014), Nice, France, 23–27 February 2014, pp. 261–267 (2014)
11. Redzovic, H., Vesovic, M., Smiljanic, A., et al.: Energy-efficient network processing based on netmap framework. Electron. Lett. 53(10), 407–409 (2017)
12. Garzarella, S., Lettieri, G., Rizzo, L.: Virtual device passthrough for high speed VM networking. In: 2015 ACM/IEEE Symposium on Architectures for Networking and Communications Systems (ANCS), Oakland, CA, USA, 7–8 May 2015, pp. 99–110 (2015)
13. Mikkelsen, L.M., Thomsen, S.R., Pedersen, M.S., et al.: NetMap - creating a map of application layer QoS metrics of mobile networks using crowd sourcing. In: Internet of Things, Smart Spaces, and Next Generation Networks and Systems: 14th International Conference, NEW2AN 2014, and 7th Conference, ruSMART 2014, Proceedings, St. Petersburg, Russia, 27–29 August 2014, pp. 544–555 (2014)
14. Rizzo, L.: Portable packet processing modules for OS kernels. IEEE Netw. Mag. Comput. Commun. 28(2), 6–11 (2014)
15. Casoni, M., Grazia, C.A., Patriciello, N.: On the performance of Linux container with Netmap/VALE for networks virtualization. In: 2013 19th IEEE International Conference on Networks (ICON), Singapore, 11–13 December 2013, pp. 1–6 (2013)
New Experience of Interaction Between Virtual Reality Technology and Traditional Handicraft
Jiang Pu(&)
Sichuan Fine Arts Institute, Chongqing 401331, China
[email protected]
Abstract. This paper studies the application of virtual reality technology in the field of traditional handicraft, analyzes its value in the protection, inheritance and development of traditional handicraft, discusses the integration of virtual reality technology and traditional handicraft, and explores the experiential value and prospects that virtual reality technology brings to traditional handicraft in the future.
Keywords: Virtual reality technology · Traditional handicraft · Interaction · Experience
1 Introduction
Virtual reality technology is an important practical technology of the 21st century. It is the product of the intersection and integration of many related disciplines, a new technology that integrates the achievements of computer graphics, human-machine interface technology, sensor technology, artificial intelligence and so on. With the characteristics of presence, multi-sensory perception and interactivity, it offers a strong simulation capability and real human-computer interaction. With the development of science and technology, virtual reality technology has made great progress and gradually become a new field of science and technology [1–3].
1.1 The Status and Development of Traditional Handicraft
Traditional handicraft carries the unique cultural value of manual labor and has become an important part of cultural inheritance, cultural creativity and the protection of intangible cultural heritage. However, with each revolutionary technological upgrade, traditional handicraft has been continuously marginalized, and today it has become a "heritage". With the vigorous development of the new era, the protection of traditional crafts and respect for craftsmanship will usher in a new historical period in which traditional crafts develop harmoniously with China's contemporary economy, society and culture. Traditional handicraft will inevitably be combined with science and technology, digital processing, artificial intelligence and so on, which will give birth to more personalized and artistic handicrafts. Following the steps of digital technology and networked information, with the advantages of big data, networks, virtual reality technology, artificial intelligence and other platforms in databases, integration, interactive experience, sharing
1298
J. Pu
management and other functions, it provides a new world for the protection, inheritance and contemporary development of traditional handicrafts [4–6].
1.2 The Relationship Between Virtual Reality Technology and Design Art
In an era of rapid change in science and technology, big data, virtual reality technology and artificial intelligence exert ever more influence on and penetration into real life, and have brought unprecedented prospects and challenges to every field. Among them, virtual reality technology makes full use of three-dimensional graphics generation, multi-sensor interaction, multimedia, artificial intelligence, human-machine interfaces, high-resolution displays and other high technologies to simulate the real world. Art research has also entered a new field, realizing the transformation from text to three-dimensional virtual appearance. In the development from traditional handicraft to contemporary handicraft, science and technology have also begun to think, experiment and explore how to promote new forms and experiences of handicraft that keep pace with the times [7–10].
2 New Experience of Traditional Handicraft in Virtual Reality Technology
The inheritance and development of traditional handicraft is the core source that drives the continuous growth of contemporary handicraft, as well as the guarantee of national arts and crafts and cultural traditions. At present, handicrafts face the challenges and difficulties of the times, and many crafts and skills cannot be continued and retained; yet these are precious heritage resources that record and carry the cultural memory and living conditions of specific historical periods. Beyond the continuation of culture and the re-creation of contemporary design, combining the skills and techniques with science and technology promotes new cognition and new experience, so that the core value of handicraft and of individual life is respected and returned to through the autonomy, openness and simulation of virtual reality technology.
2.1 Culture Representation and Flow Experience
Virtual reality is at once a technology, a carrier and a medium. Among its characteristics, flow and interaction are the most obvious: flow is a natural feeling, while interaction engages a variety of senses and perception systems, forming an interactive experience among individuals, experiencers and the environment (or things) through the interplay between people and their surroundings. The technology can be widely used to reproduce and restore historical relics, cultural settings that no longer exist in reality, or destination scenes that consumers cannot easily reach, such as handicrafts in museums, archaeology, popular science, tourism and heritage resources. Through virtual space and virtual sites, visitors can immerse themselves in the virtual world and experience the perfect combination of traditional handwork and modern technology, and virtual reality technology can deliver a
New Experience of Interaction Between Virtual Reality Technology
1299
new experience in virtual reality by integrating all human perception functions, such as hearing, vision, touch, taste and smell. At present, the application of virtual reality technology in the exhibitions of arts and crafts museums, exhibition halls and handicraft art galleries has been fully valued. Remote display makes it possible for audiences to appreciate the handicrafts of museums and galleries at home: the three-dimensional web pages made by these galleries provide a 360° view of the arts and crafts collections, enabling human-computer interaction with museum collections and cultural heritage and building a strong simulation system. Virtual reality technology can therefore promote craft culture, explore the origin of a craft, sort out its lineage, continue its concepts, and tell the history and cultural stories of its transmission, so as to improve the breadth and depth of traditional handicraft cognition and broaden the spatial and objective understanding of handicraft and culture.
2.2 Reappearance and Deep Understanding of Craft Skills
A virtual laboratory and training site for traditional crafts can be built using virtual reality equipment or interactive control systems and software, with different craft and skill modules set up according to the classification of materials, forming a craft flow experience area, a dynamic demonstration experience area and an interactive practice experience area. For example, the craft flow experience area can provide a display platform for process categories: on the basis of digital display of traditional crafts, geometric modeling, image modeling and image-geometry combinations, together with the relevant tools, materials and equipment of the craft, give an accurate understanding of traditional crafts in terms of process features, appearance, color and modeling. In the dynamic demonstration experience area, the process is demonstrated according to process data and modeling, so that users can experience operating precautions, scenario simulation, step-by-step procedure and so on; at the same time, it can be combined with practical operation to realize interactive teaching, step simulation and skill trials. By giving full play to the advantages of virtual reality technology in the process of virtual and real experience, a three-dimensional visual experience of the appearance, structure, color and technological process of various materials can be achieved through three-dimensional flow virtual display, which not only effectively promotes the importance of inheritance and protection but also helps master deep professional knowledge.
2.3 The Virtual Design and Interactive Experience of Crafts
On the basis of the virtual reality construction of traditional handicraft data, such as modeling data (shape contour, size, structural proportion, etc.) and material data (texture, color, luster, etc.), virtual process design and virtual presentation of works become possible through 3D graphics generation technology, 3D images or virtual environments, various stereoscopic display technologies and the application of sensor technology. Besides, comparison and selection are carried out to determine whether the
process can achieve the desired purpose by means of the final visualized generation effect, and the design cycle and process cost can be greatly reduced by quickly producing virtual versions of multiple design schemes and composition processes. Therefore, it can also play a solid and effective role in the digitalization of traditional handicraft, taking virtual reality as a medium and integrating or grafting reality into current VR systems, so as to explore the innovation and possibilities of contemporary handicraft. With the development of virtual reality technology, more and more attention has been paid to its application in handicraft. For example, jewelry used to be designed, produced and sold in a relatively traditional way, whereas now jewelry design can be experienced fully in 3D virtual reality with interactive operation. AR hardware sensors, human-computer interaction technology and measured 3D body data are used to capture and track body movements and generate a virtual try-on experience. The virtual experience is delivered through the network and VR/AR transmission, and because the image is exactly the same as the real product, the jewelry can be re-edited and modified according to feedback from the virtual try-on.
2.4 The Extension of Education Forms and Integration of Resources
Handicraft teaching is an important part of art education. Virtual reality technologies were first applied mainly in medicine, chemical engineering, physics, nature and other disciplines, where teaching activities take the form of simulation exercises, simulated operations and environmental experience. They subsequently appeared in design fields such as architectural design, industrial design, environmental art design and multimedia digital design, opening up new perspectives and new thinking for these disciplines, and their teaching forms also offer references and professional explorations for the teaching of arts, crafts and handicrafts. With the gradual popularization and technical advantages of virtual reality technology, the characteristics of flow, interactivity and imagination make experimental handicraft teaching more interesting, interactive, intelligent and personalized, replacing the previous single, fixed teaching method and providing more innovative and interactive resource platforms for students. Promoting this new teaching concept shifts the training mode from "teaching oriented" to "learning oriented", so as to realize student-centered education, meet the needs of talent training and improve its quality, build a highly immersive and interactive teaching mode, create a panoramic experience and a conceptual space of "self-regulated learning", and inspire independent, creative thinking, active attempts and exploration.
3 The Virtual Reality Technology and the Future of Traditional Handicraft
Handicraft is an epitome of society, economy and culture, and plays an important role in civilization, technology and production. In the new era it will inevitably merge with science and technology, combining with big data, virtual reality technology, information technology, artificial intelligence and so on to produce more personalized and artistic handicrafts. As a medium for viewing and interaction, virtual reality will also collide and merge with various fields, imitating things through simulation and perceptual interaction so as to reproduce emotions and real experience. Under this trend, handicraft must constantly explore the connection points where the two fit each other, so as to give full play to each other's advantages, make up for shortcomings, and promote the infinite possibilities of human cognition. However, facing the penetration and intervention of virtual reality technology in various fields, a clear understanding is necessary: what are the responsibility and value of handicraft in society and culture, and what are its destination and essential role? These questions are the challenges and reflections that handicraft faces in the development of science and technology. No matter how technology and handicraft integrate, we should keep and stick to the value of non-substitutability.
4 Conclusion
At present, owing to the limitations of human understanding of the world and the complexity and infinite variation of the real environment, many problems remain to be solved even though the technologies of data acquisition, analysis and modeling, rendering and sensing interaction have made great progress. With the popularization of 5G, virtual reality technology will gradually break through its data transmission bottleneck, and some important platforms and key technologies of virtual reality in the field of education have already been improved. Combined innovation in these fields, with virtual reality technology as the starting point, will promote more fields in the future, realize cross-border integration and bring broad prospects.
References
1. Li, L.: Virtual reality technology and its application. China Sci. Technol. (3), 30–31 (2019)
2. Qiu, C., Lin, Y.: New craft, new life, new aesthetics. China Culture Daily, no. 007, 29 December 2019
3. Wang, Z., Wang, Y.: Sidelights of "Renascence of traditional culture: Tsinghua University cultural heritage protection and innovation research achievements exhibition". Art Des., 36–43 (2019)
4. Wang, Z.: Cultural heritage protection in new media technology. Art Educ., 8–9 (2018)
5. Lu, L., Man, S.: Application of virtual reality technology in traditional handicrafts protection. Folk Art, 49–53 (2018)
6. Zhang, Z.: The educational application of VR and AR and the prospect of MR. Mod. Educ. Technol. 27, 21–27 (2017)
7. Shen, Y., Lu, X., Zeng, H.: Virtual reality: a new chapter in the development of educational technology: an interview with Professor Zhao Qinping, Academician of the Chinese Academy of Engineering. E-Educ. Res., 5–9 (2020)
8. Qiu, C.: The status and future trend of handicraft development. Chin. Handicraft, 122–123 (2019)
9. Liu, M., Zhang, J.: The research on future classroom teaching mode in the view of virtual reality. China Educ. Technol., 30–37 (2018)
10. Zhao, Q.: 10 science and technology issues in virtual reality. Sci. China Inf. Sci. 46(6), 800–803 (2017)
Image Highlight Elimination Method Based on the Combination of YCbCr Spatial Conversion and Pixel Filling
Jiawei He1, Xinke Xu1, Daodang Wang1, Tiantai Guo1, Wei Liu1, Lu Liu1, Jun Zhao1, Ming Kong1(&), Bo Zhang2, and Lihua Lei2
1 College of Metrology and Measurement Engineering, China Jiliang University, Hangzhou 310018, China
[email protected]
2 Shanghai Institute of Measurement and Testing Technology, Shanghai 201203, China
Abstract. In optical 3D measurement, highlights caused by specular reflection often affect the measurement result, causing large measurement errors or making 3D reconstruction of the measured object impossible. This paper adopts a highlight elimination method based on color space conversion and pixel filling theory. It first converts the collected image from RGB to YCbCr, so that image brightness and chroma are separated; it then normalizes the brightness and uses pixel filling to attenuate the brightness at highlight positions. Experimental results show that the method can effectively eliminate specular reflection on the object surface within a certain range.
Keywords: Image processing · Machine vision · Highlight elimination · Color space
1 Introduction
In most optical 3D measurements, the measured object is assumed to have a diffusely reflecting surface. When active light hits an object, the direction of the reflected beam depends mainly on the incident direction of the beam and the normal direction of the reflecting surface. At certain angles the reflected light from the measured surface is so strong that the camera is over-saturated; this specular reflection causes large errors in the measurement results. In recent years, highlight elimination on object surfaces has become a hot research issue in machine vision at home and abroad, and research on methods to eliminate highlights caused by specular reflection continues. In 1985, Shafer first proposed the two-color reflection model [1], which was the beginning of methods that suppress highlights based on color information. According to this model, the information at a point on the object surface is determined by both diffuse and specular reflection. Among them, diffuse reflection determines the shape
1304
J. He et al.
information of the object itself, while specular reflection carries the color of the light source [2]. Shafer et al. and Y. Sato et al. found that the highlight information produced by the two reflection components of an object forms a T-shaped distribution in RGB space, through which the highlight area can be effectively recovered [3, 4]. Robby T. Tan et al. proposed a theory of reflection component separation that requires no color segmentation and is highly efficient; on this basis they proposed a method to suppress highlights by comparing the chroma of adjacent pixels, a local operation that is very useful for textured objects in complex multicolor scenes [5]. Shen et al. proposed a highlight elimination method without image segmentation, realized mainly by separating the reflection components on the basis of chroma-error analysis [6]. For the surfaces of metal parts, Wang Zhongren et al. proposed a method of color space conversion and polynomial adjustment, which can to a certain degree suppress the influence of highlights on structured light measurement of metal surfaces [7]. T. Gevers et al. found several color invariants [8]; Benveniste and Unsalan built on these invariants and proposed using them to handle reflected light from the object surface in structured light measurement under different ambient lighting [9, 10]. Based on Shafer's two-color reflection model and Robby T. Tan's reflection component separation theory, Yu Xiaoyang et al. formed a method combining pixel filling and reflection component separation, with remarkable effect [11]. From the above research it can be seen that image processing based on color information, methods based on adjacent-pixel information, and space-conversion methods each have their own advantages and disadvantages.
Though the former theory is more mature, its ability to eliminate highlights on metal objects is limited; color-space conversion has wide applicability and obvious effects, but it causes some loss of surface texture in the image. Combining the two can further suppress the influence of specular reflection on subsequent optical 3D measurement. This paper therefore combines color-space conversion with pixel filling based on adjacent-pixel information. It first converts the object image from RGB to YCbCr, separating brightness from chroma, and then adopts pixel filling to replace the highlight area, so as to eliminate the highlight.
2 Method and Principle

2.1 Principle Basis
The RGB color space is a color coding model formed by superposing red, green and blue in different proportions. The superposition of these three primaries can encode about 16.7 million colors, making RGB the most commonly used image coding model in daily life. However, since R, G and B all contain brightness information, image brightness is spread across these three coupled components, which makes it difficult to adjust the brightness directly. Thus, eliminating image highlights in the RGB color space is quite complex. If image brightness and chroma can be separated, highlights can be eliminated quickly and effectively.
Image Highlight Elimination Method Based on the Combination of YCbCr
The YCbCr color space is a scaled and offset version of the YUV space commonly used in European television systems [12, 13], in which Y represents brightness and U and V represent chroma. In YCbCr, brightness and chroma are separated and independent of each other: Y represents image brightness, while Cb and Cr represent the blue and red chroma components, respectively. Because it separates the brightness and chroma of RGB, the YCbCr color space is widely used. The conversion formulas between the two color spaces are shown in (1) and (2).

$$\begin{bmatrix} Y \\ C_b \\ C_r \end{bmatrix} = \begin{bmatrix} 0.257 & 0.504 & 0.098 \\ -0.148 & -0.291 & 0.439 \\ 0.439 & -0.368 & -0.071 \end{bmatrix} \begin{bmatrix} R \\ G \\ B \end{bmatrix} + \begin{bmatrix} 16 \\ 128 \\ 128 \end{bmatrix} \qquad (1)$$

$$\begin{bmatrix} R \\ G \\ B \end{bmatrix} = \begin{bmatrix} 1.164 & 0 & 1.596 \\ 1.164 & -0.392 & -0.813 \\ 1.164 & 2.017 & 0 \end{bmatrix} \begin{bmatrix} Y - 16 \\ C_b - 128 \\ C_r - 128 \end{bmatrix} \qquad (2)$$
Among them, $Y \in [16, 235]$ and $C_b, C_r \in [16, 240]$. Through the above formulas, the conversion between the RGB and YCbCr spaces can be achieved, so that brightness and chroma are separated, which makes it convenient to operate on the image brightness.

2.2 Method
Firstly, the RGB color space of the image is converted to YCbCr with formula (1). When the brightness value is between 16 and 170, the visual effect of the image is relatively soft and the detailed texture is clear, corresponding to the diffuse-reflection area [7]. Repeated experiments show that when the brightness value is greater than 205, over-saturation occurs during camera collection because of ambient or active light, gradually producing highlights on the object surface and affecting subsequent measurement. Thus, the part of the image with brightness values greater than 205 needs further analysis. By analyzing adjacent pixels, the pixel-filling method can quickly and effectively suppress the highlight influence. The brightness value Y is normalized to map it from [16, 235] to [0, 1]. In this way the brightness matrix can be presented as a grayscale image, which aids the following analysis and simplifies computation. For the normalized brightness matrix, statistical analysis of the highlight and diffuse-reflection areas shows that a pixel whose value exceeds a certain threshold can be regarded as a highlight pixel. It should be noted, however, that after normalization the pixel values of the highlight areas differ between objects of different shapes, so the threshold needs case-by-case analysis. With the brightness grayscale image, the highlight area can be identified and marked. Then the whole matrix is traversed and the marked pixels are filled, namely, qualified pixels near each marked pixel are used to fill it [11]. For the images used in this experiment, when the pixel value difference between the two
pixels is less than 0.16, the processing effect is the best. Thus, when a pixel lies outside the highlight area and its value differs from the marked pixel value by no more than 0.16, it can be set as a qualified replacement point, with the following steps. Supposing that pixel i(x, y) needs to be replaced and filled, first judge the two adjacent pixels i(x − 1, y) and i(x + 1, y) along the X-axis, and then the adjacent pixels i(x, y − 1) and i(x, y + 1) along the Y-axis. If these four pixels do not meet the conditions, the four diagonal pixels i(x + 1, y − 1), i(x + 1, y + 1), i(x − 1, y + 1) and i(x − 1, y − 1) are judged in turn. If all eight pixels still fail to meet the conditions, the scope is expanded to the 12 pixels surrounding that area. If there is still no matching pixel, the search range is expanded further until a matching pixel and its value are found. By setting up a loop, the above steps are repeated for each marked highlight pixel until all highlight pixels are processed. Finally, the Y values are inversely normalized according to formula (3), where g(Y) is the normalized Y value of the pixel and $Y_{\max}$ and $Y_{\min}$ are the maximum and minimum values in the image's Y-value matrix; the color space is then converted back to RGB with formula (2) to analyze the experimental results and improve the experimental means or algorithm.

$$Y = g(Y)\,(Y_{\max} - Y_{\min}) + Y_{\min} \qquad (3)$$
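As a concrete illustration, the pipeline of Sect. 2.2 can be sketched as follows. This is a minimal sketch, not the authors' implementation: the function names, the fixed `max_radius` search limit, and the rule of taking the first qualified candidate are assumptions made here for brevity. The matrices are those of formulas (1) and (2), and the candidate condition follows the 0.16 difference threshold described above.

```python
import numpy as np

def rgb_to_ycbcr(rgb):
    """Formula (1): RGB -> YCbCr (studio-range BT.601 coefficients)."""
    m = np.array([[ 0.257,  0.504,  0.098],
                  [-0.148, -0.291,  0.439],
                  [ 0.439, -0.368, -0.071]])
    return rgb @ m.T + np.array([16.0, 128.0, 128.0])

def ycbcr_to_rgb(ycbcr):
    """Formula (2): YCbCr -> RGB."""
    m = np.array([[1.164,  0.0,    1.596],
                  [1.164, -0.392, -0.813],
                  [1.164,  2.017,  0.0]])
    return (ycbcr - np.array([16.0, 128.0, 128.0])) @ m.T

def fill_highlights(y, thresh=0.85, diff=0.16, max_radius=5):
    """Normalise Y to [0, 1], mark pixels above `thresh` as highlights, and
    replace each with a nearby non-highlight pixel whose normalised value is
    within `diff` of the marked pixel (a 'qualified replacement point')."""
    y_min, y_max = float(y.min()), float(y.max())
    g = (y - y_min) / (y_max - y_min)          # normalisation to [0, 1]
    mask = g > thresh                          # marked highlight pixels
    out = g.copy()
    h, w = g.shape
    for r0, c0 in zip(*np.nonzero(mask)):
        for r in range(1, max_radius + 1):     # expand the search window
            cands = [g[i, j]
                     for i in range(r0 - r, r0 + r + 1)
                     for j in range(c0 - r, c0 + r + 1)
                     if 0 <= i < h and 0 <= j < w and not mask[i, j]
                     and abs(g[i, j] - g[r0, c0]) <= diff]
            if cands:
                out[r0, c0] = cands[0]         # take the first qualified pixel
                break
    return out * (y_max - y_min) + y_min       # formula (3), inverse normalisation
```

As the paper notes, the threshold (0.85 here) would in practice be chosen case by case for each object.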
3 Experimental Results and Analysis

The first group of experiments is shown in Fig. 1. Figure 1(a) is an image of yellow glossy stones collected by a CCD under natural light. Owing to specular reflection, the surface presents striped and speckled highlights, so detailed surface features cannot be extracted. Analysis of the color-space conversion of this image shows that the effect is best when pixels with normalized Y values greater than 0.83 are treated as highlights. After sub-area pixel filling, the processing effect is shown in Fig. 1(b). It can be seen that two highlights and several
(a) Before Highlight Processing
(b) After Highlight Processing
Fig. 1. The first group of experiments: highlight processing effect on glossy stone
obvious highlight areas in the image have been eliminated, but a few faint bright spots remain. Analysis shows that the normalized brightness values of these areas lie between 0.8 and 0.85, so simple thresholding cannot identify them as highlight pixels. One solution is to apply a mask after the first filling and perform a secondary pixel filling on the relatively hidden highlight areas; another is to add highlight-recognition conditions to the algorithm and process areas where the pixel gray value changes rapidly. However, the latter has greater limitations and is less effective for objects with rich surface colors. The second group of experiments is shown in Fig. 2. Figure 2(a) is an image of an orange collected by a CCD under natural light. Because of specular reflection from the orange surface, there is an obvious highlight area in the center, making it impossible to extract surface features. By analysis, pixels with normalized values greater than 0.85 are regarded as highlight pixels, and the effect after highlight processing is shown in Fig. 2(b). It can be seen that the original highlight area now shows a light green color, with chroma discontinuous from the surrounding pixels. Measurement shows that the R value (in RGB) of this area is distributed in [190, 210] and the B value in [30, 90], while the R value of surrounding pixels is mostly distributed in [235, 250] and the B value in [0, 10]. Because of specular reflection, the orange surface produces a strong highlight, resulting in the loss of texture and chroma information in this area. Thus, adjusting only the brightness makes the image chroma inconsistent and causes errors, which is a shortcoming of this method. The solution is to
(a) Before Highlight Processing
(b) After Highlight Processing
(c) After Chroma Processing

Fig. 2. Processing effect of orange highlight surface
reduce the intensity of ambient light or the active light source as much as possible during image collection, so as to retain the surface texture and chroma of the object; alternatively, as shown in Fig. 2(c), after color-space conversion and normalization, pixel filling is applied to the brightness value and the two chroma values of the image. The method is similar: it eliminates the highlight while retaining as much image texture and chroma information as possible.
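The alternative of Fig. 2(c), filling the chroma channels as well as the brightness channel, can be sketched as follows. This is only an illustrative sketch under stated assumptions: `fill_channel` is a hypothetical, simplified stand-in that replaces a marked pixel with the mean of its unmarked 3 × 3 neighbours, rather than the full qualified-pixel search of Sect. 2.2.

```python
import numpy as np

def fill_channel(ch, mask):
    # Replace each marked pixel with the mean of its unmarked 3x3 neighbours
    # (a simplified stand-in for the qualified-pixel search of Sect. 2.2).
    out = ch.copy()
    h, w = ch.shape
    for r, c in zip(*np.nonzero(mask)):
        vals = [ch[i, j]
                for i in range(max(0, r - 1), min(h, r + 2))
                for j in range(max(0, c - 1), min(w, c + 2))
                if not mask[i, j]]
        if vals:
            out[r, c] = sum(vals) / len(vals)
    return out

def fill_ycbcr(y, cb, cr, thresh=0.85):
    # Detect highlights on the normalised Y channel, then fill the same
    # positions in Y, Cb and Cr so chroma stays consistent with brightness.
    g = (y - y.min()) / (y.max() - y.min())
    mask = g > thresh
    return tuple(fill_channel(ch, mask) for ch in (y, cb, cr))
```

Filling all three channels from the same highlight mask is what keeps the repaired region's chroma continuous with its surroundings, avoiding the light-green artifact seen in Fig. 2(b).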
4 Conclusion and Prospect

For object images with highlights, this paper designs an algorithm combining image color-space conversion and pixel filling. Experiments show that the method is simple, fast, practical and effective. However, it also has some shortcomings. Objects of different shapes need to be analyzed and processed individually, sometimes in several passes; the method works well for flat, smooth objects but less well for objects with complex shapes. When acquiring images, the light-source intensity should be strictly controlled to obtain an ideal result; if the intensity is too high, the image chroma also needs to be filled. In future work, the simultaneous processing of image brightness and chroma will be improved, and the theory of reflection-component separation will be introduced so as to expand the method's range of application.
References
1. Shafer, S.: Using color to separate reflection components. Color Res. Appl. 10(4), 1 (1985)
2. Tan, R.T., Nishino, K., Ikeuchi, K.: Separating reflection components based on chromaticity and noise analysis. IEEE Trans. Pattern Anal. Mach. Intell. 26(10), 1373–1379 (2004)
3. Klinker, G.J., Shafer, S.A., Kanade, T.: The measurement of highlights in color images. Int. J. Comput. Vision 2(1), 7–32 (1988)
4. Sato, Y., Ikeuchi, K.: Temporal-color space analysis of reflection. J. Opt. Soc. Am. A 11(11), 2990–3002 (1994)
5. Tan, R.T., Ikeuchi, K.: Separating reflection components of textured surfaces using a single image. IEEE Trans. Pattern Anal. Mach. Intell. 27(2), 178–193 (2005)
6. Shen, H.-L., Zhang, H.-G., Shao, S.-J., et al.: Chromaticity-based separation of reflection components in a single image. Pattern Recogn. 41(8), 2461–2469 (2008)
7. Wang, Z., Quan, Y.: An approach for removing highlight from image of metal parts. Mech. Electron. (10), 7–9 (2008)
8. Gevers, T., Smeulders, A.W.M.: Color-based object recognition. Pattern Recogn. 32(3), 453–464 (1999)
9. Benveniste, R., Unsalan, C.: Single stripe projection based range scanning of shiny objects under ambient light. In: International Symposium on Computer and Information Sciences, pp. 1–6 (2009)
10. Benveniste, R., Unsalan, C.: A color invariant based binary coded structured light range scanner for shiny objects. In: International Conference on Pattern Recognition, pp. 798–801 (2010)
11. Yu, X., Pan, Z., Sun, X., et al.: Highlight suppression method by combining reflection component separation with pixel filling. J. Harbin Univ. Sci. Technol. 22(3), 73–79 (2017)
12. Balodi, A., Anand, R.S., Dewal, M.L.: Comparison of color spaces for the severity analysis of mitral regurgitation. Int. J. Inf. Technol. 11(4), 647–651 (2019) 13. Pandey, M.K., Parmar, G., Gupta, R.: Non-blind Arnold scrambled hybrid image watermarking in YCbCr color space. Microsyst. Technol. 25(8), 3071–3081 (2019)
Application of E-Sports Games in Sports Training Xin Li(&) and Xiu Yu Lu Xun Academy of Fine Arts, 39 Jinshi Road, Development Zone, Dalian, Liaoning, China [email protected]
Abstract. As e-sports games are included in the Asian Games and Chinese e-sports teams have repeatedly achieved good results in world championships, the Chinese public has rethought and repositioned e-sports games. The number of domestic players has risen dramatically in recent years, and the prospects are broad. E-sports games have a positive role in improving team coordination, vision, concentration, spatial awareness and decision-making ability. They are gradually being used in the field of sports training and have achieved good results. Keywords: E-sports games
· Sports training · Professional courses
E-sports games have evolved from the original cartridge and disc formats to networked computer, somatosensory and AR styles, and they continue to expand. High-level competitions are frequent, and some colleges have set up professional courses. The player population is large and game genres are numerous. Behind this explosive development is the finding that e-sports games can promote the improvement of physically related abilities; scientific guidance can stimulate the potential of e-sports games and steer their development in a direction that benefits people.
1 Positive Effects of E-Sports Games on the Body

1.1 Effect on Physical Ability
The settings of somatosensory games require participants to have basic physical capabilities of strength, endurance, agility, reaction speed and coordination. As the difficulty of a game increases, it becomes harder for participants to complete it, which reflects their current physical condition. Graded easy and difficult settings give participants a full training experience, recruit more nerve-impulse stimulation for upcoming high-level training, maximize the training of the body's current functional levels and stimulate physical potential, so that physical quality can be improved later. AR games have richer scene settings and a high degree of realism; participants devote themselves to completing high-load game tasks with an immersive feeling, which places higher requirements on their physical fitness [1]. After networking, they can compete © The Editor(s) (if applicable) and The Author(s), under exclusive license to Springer Nature Singapore Pte Ltd. 2021 M. Atiquzzaman et al. (Eds.): BDCPS 2020, AISC 1303, pp. 1310–1314, 2021. https://doi.org/10.1007/978-981-33-4572-0_188
with global online players, which increases the uncertainty of training and relieves the monotony of long-term repetition of fixed training factors; it shortens participants' response time in real competition and increases the fun of training. Networked play truly restores on-site factors of actual competition, including noise and other distractions, fills in the deficiencies of traditional training, and greatly improves the enthusiasm of participants and the intensity of competition, promoting the development and improvement of all aspects of the body.

1.2 Impact on Psychology and Other Abilities
Whether role-playing games, battle games or strategy games, all have certain difficulty settings. During play, participants need to adjust their strategies to the actual scene, make decisive technical and tactical decisions, meet difficult levels, withstand pressure, face setbacks bravely, reduce mistakes and strive for higher scores to win the final victory. Because game settings are controllable, targeted training modes can be built around an athlete's psychological characteristics, fully simulating the gains and losses of a real match while also configuring factors such as the number of players on the field and the opponent's ups and downs [2]. Such adjustments make training more vivid and closely restore real match situations, with obvious significance and effect for training an athlete's psychological resilience and ability to maintain attention and concentration. In shooting games, participants must eliminate distractions, focus on the game and adjust their field of view at all times to complete the competition well. Team battle games require participants to have a good sense of teamwork and of the overall situation, and the ability to read and direct the game on the spot. Because game scenes change a great deal, participants need a good comprehensive performance of psychological and related abilities, and strong on-the-spot ability, to achieve good results. Excellent athletes with long training careers often reach a bottleneck in skill or physical ability and cannot find a way to break through to improve athletic performance. In games, physical ability is held constant, so training can target the psychological side of performance, which is often not stable enough; this is more direct and efficient, and in line with the biological laws of physical-function development. It is an effective way to achieve breakthroughs in competitive performance on the field. Through a period of e-sports training, participants can improve these abilities.
2 Application of E-Sports Games in Sports Training

Typical team competitions are represented by football, basketball and volleyball, in which on-field situations vary greatly. Good results require collective decisions by the athletes on the court and the coaching team off it: making timely, flexible and reasonable adjustments to the situation, playing to personal advantages and ability characteristics, restricting opponents, choosing technical and tactical routes in a timely manner, and seizing opportunities to launch attacks and organize defense. These events demand both team integrity and individual independence, and require players to have good vision and a high degree of
concentration, organizational ability, pressure resistance, etc. [3]. Selecting games such as “Live Football”, “FIFA Football” and “King of Glory” can help team members increase communication and mutual trust. Through specific game scenarios, pre-match simulation and post-match replay, and collective decision-making, teams can find suitable response plans and improve their game-reading and thinking abilities, so that participants devote themselves to training and competition and athletes change from training participants into training formulators. This diversifies the skills and tactics in the training plan, enhances the ability to respond to sudden changes on the field and the team's fighting capacity, and entertains body and mind while increasing the collective sense of honor [4]. Competitive events such as shooting and Go require athletes to maintain emotional stability throughout the game. Athletes need to control their breathing and heartbeat at all times and remain unaffected by external factors such as the venue and the opponent's performance, focusing inward with a calm state of mind. Although the factors that appear in such games are essentially constant, players with little competition experience often find efficient self-control difficult; it is easy to lose oneself at a critical stage of the game and fail. Without proper improvement over a long period, the athlete falls into negative self-suggestion, making it difficult to break through and achieve good results [5].
“Adventure Island”, “Fruit Ninja” and similar series require participants to maintain a stable state, respond to every challenge in the game at any time, and decisively choose the best strike at the right moment, with no chance to reorganize an attack. The difficulty settings in the game mode closely restore the competition environment and highlight the tense atmosphere; the ability to make optimal decisions under high pressure is the only way for athletes to succeed and mature [6]. Traditional training finds it difficult to replay and simulate game scenes comprehensively and accurately, and the e-sports mode makes up for this shortcoming. Long-term practice can enhance athletes' confidence, shorten the time needed to adapt to a game, and avoid the emotional imbalance caused by in-game mistakes; it is an effective means to improve athletes' special qualities. Badminton, tennis and similar events involve no physical contact during the game, but require athletes to have a good sense of space, anticipate the incoming line, and optimize the path of attack. Athletes in such events therefore usually need corresponding imagery training in addition to extensive skill and tactical training [7]. By imagining the various scenarios encountered in simulated competition, they can actively cope with adverse conditions such as on-field interference, refine the training mode and details, increase training difficulty, and improve their ability to withstand external disturbance and maintain continuous high-level performance. With the help of AR technology, imagery training can be deepened, helping athletes form muscle memory, solve problems in specific scenes, improve spatial awareness, and, when facing difficult game situations, stimulate their potential and achieve excellent results [8–15].
3 Conclusion

Mankind pursues ever higher breakthroughs of physical limits. Traditional training methods are an effective way for athletes to perform basic training; with the development of science and technology, people have made new explorations of, and gained new understanding about, sports training methods and means. This has enriched the training system, expanded athletes' horizons, and promoted coaches' reflection on and improvement of their original knowledge systems. From the original forms of entertainment and leisure and simple intelligent games, e-sports has gradually developed into one of the sports-training methods with great potential, widely used in high-level national-team training systems. Game scenes have a high degree of realism and can be reused; errors can be set arbitrarily; the method is simple, easy to operate, and not prone to producing fatigue; and it brings substantial improvement in mental ability and other respects. Especially when training conditions are affected by weather, a global epidemic or other factors and cannot fully support training, e-sports training methods highlight their unique characteristics. With the continuous advancement of science and technology, the exploration of e-sports games deepens continuously, and e-sports games will become a powerful auxiliary tool for strategic training planning.
References 1. Zhang, B., Liu, S., Miao, S., Huang, S.: Positive effects of action video game experience on visual attention. Chin. J. Clin. Psychol. (2019). (in Chinese) 2. Hou, N.: On the positive effects of video games on middle school students. China New Commun. (2019). (in Chinese) 3. Dowsett, A., Jackson, M.: The effect of violence and competition within video games on aggression. Comput. Hum. Behav. 99, 22–27 (2019) 4. Green, C.S., Bavelier, D.: Action video game modifies visual selective attention. Nature 423 (29), 534–537 (2003) 5. Jin, Z.: Use football video games to promote youth football training. Contemporary Sports Technology (2016). (in Chinese) 6. Toropova, G., Strelnikova, I.: Readiness for action and features of its regulation in e-sports. In: Compilation of Abstracts of the 17th International “Olympic Sports and Mass Sports” Scientific Conference (2013) 7. Lu, S.: Thoughts on video games, e-sports, and modern sports. J. Guangzhou Inst. Phys. Educ. (2020). (in Chinese) 8. Abanazir, C.: Institutionalisation in E-Sports. Sport Ethics Philos. 13(2), 117–131 (2019) 9. Yongming, L., Wang, Y., Haohao, S.: Coordinated development model of E-sports based on three party game. Cluster Comput. 22(2), 4805–4812 (2019) 10. Li, M., Zhou, Y., Xia, L.: Research on the development of e-sports in China from the perspective of the Asian Games. Chinese School Physical Education (Higher Education) (2018). (in Chinese) 11. Shuai, P., Gu, Y., Sun, Z., Wang, Y.: The significance of establishing somatosensory game features on the development of youth sports interest. Contemporary Sports Technology (2020). (in Chinese)
12. Yu, B., Chi, J., Jing, W., Zhao, J.: Enlightenment and reference of sports somatosensory games on the development of physical education curriculum resources. In: Compilation of Abstracts of the 11th National Sports Science Conference (2019). (in Chinese) 13. Kong, X., Guo, J.: Rather than confrontation, it is better to embrace——on the integrated development of sports and e-sports. In: Compilation of Abstracts of the 11th National Sports Science Conference (2019). (in Chinese) 14. Li, L., Zhang, X.: The spread and reconstruction of sports culture by sports electronic games ——taking “NBA 2K19” game as an example. In: Compilation of Abstracts of the 11th National Sports Science Conference (2019). (in Chinese) 15. Guan, S., Fang, Y., Wang, X.: The effect of electronic games on children’s motivation and sports games intervention——taking Manchu traditional sports games as an example. J. Harbin Inst. Phys. Educ. (2017). (in Chinese)
The Research and Governance of Ethical Disorder in Cyberspace Dongyang Chen(&) and Hongyu Wang Wuhan University of Science and Technology, Wuhan 430000, China [email protected]
Abstract. The aim of considering the issue of order from an ethical point of view is to standardize social interaction and refine the network environment. Since cyberspace has no authority, no center, and is virtual in character, people gain greater freedom in acting and expressing themselves, and the problem of ethical disorder in cyberspace has become increasingly serious. Its specific phenomena are the mental anxiety caused by the side effects of internet technology, the dissimilation of ethical relations due to virtual communication, and the ruin of ethical consensus arising from irrational public opinion. Therefore, the basic moves are to attach more importance to moral reflection on internet technology, to strengthen the construction of the cyber-ethics regulatory system, and to improve the self-purification capacity of online opinion. In this way, social capital can grow actively in cyberspace. Keywords: Cyberspace
· Ethical disorder · Network governance
1 Introduction

The cyberspace created by the internet has become a new arena for social life. In this public network environment, with its features of openness, virtuality, independence and multi-directionality, every participant is a free and equal individual. People seem to control everything from behind the screen, but their expressions are only communication signs of various kinds. The resulting problem is that the operators are hidden behind the screen, and the initial ethical situation is gradually broken. The public is concerned that such ethical disorder in the virtual space of the network may affect normal life. Order is a dimension of civilization; society needs it, and so does cyberspace [1]. A benign order is necessary to maintain a healthy network ecology and is the essential foundation of governing cyberspace. As a structured existence of moral principles, the network ethical order has the crucial function of regulating virtual sociality. It is a criterion system for objective relationships based on people's moral practices: it is not only agreed upon by most netizens, but is also accumulated and formed into regulations similar to institutions. At the same time, the ethical concepts it carries stand as spiritual and cultural ingredients; they can be transmitted into people's minds to improve subjects' sense of responsibility, justice and initiative, producing restraining effects. As for the operational mechanism, the ethical order in cyberspace adjusts interpersonal conflicts over interests, so that healthy network social regulations come into being to © The Editor(s) (if applicable) and The Author(s), under exclusive license to Springer Nature Singapore Pte Ltd. 2021 M. Atiquzzaman et al. (Eds.): BDCPS 2020, AISC 1303, pp. 1315–1320, 2021. https://doi.org/10.1007/978-981-33-4572-0_189
meet citizens' behavioral expectations. In other words, the purpose of constructing a network ethical order is to ensure the harmony and stability of virtual ethical relations. An ethical relation is an interpersonal, objective, spiritual relation based on a certain foundation of interests; its purpose is to ensure that the protection and validation of a group's common interests is reasonable [2].
2 The Features of Cyberspace

As humanity's second living space, the network society possesses its own features, which we can understand from technical and social perspectives [3]. Technically, the internet is a synthesis of modern high technologies, integrating current telecommunications, new materials and AI technologies. Each node of the internet is a single computer, and together they form a distributed computer network. Because of this, information is sent at high speed and in complex directions, and can be automatically stored and duplicated. More importantly, owing to the above features, cyberspace has a distinct sociality compared with the actual society we live in now.

2.1 No Authority, No Center
There are all sorts of authorities and centers in real society, and we are often controlled by them; this is also a requirement for living and developing. In cyberspace, however, authority and center no longer exist. Identity is concealed by digital management, so everyone can be an authority, which in effect dissolves authority itself [4]. The disappearance of authority means the disappearance of the center, so there is no obedience or dominance in the network society. It is for this reason that some call it “an anarchy with no governors, no laws and no army”.

2.2 Virtuality
The roles played in cyberspace need not be granted the actual rights they would have in reality, and need not bear the corresponding obligations. As we all know, every individual plays more than one role in reality, and each role has specific rights and obligations [5], which constitute the essence of a real society. However, virtuality makes such obligations unnecessary in cyberspace.

2.3 Greater Independence for Behaviors
In the network society, socializing has broken through the limits of time and space, making it an independently personal action. In this world, people hold the future in their own hands and can change, create or join another, more satisfying society when they feel bad about the current one. They literally possess free will.
2.4 Greater Openness
The network society is an open place. Every space, culture, rule or organization seems equally open to everyone, familiar or not. Information can be delivered quickly from one place to another, and different religious beliefs, values, customs and ways of living can be shared, communicated and understood via the internet [6].
3 Manifestations of Ethical Disorder in Cyberspace 3.1
Ethical Panic Caused by the Side-Effect of Internet Technology
The beginnings of many technologies are often exciting, but they spark heated debates and touch on many ethical problems as they spread. The invention and development of internet technology have changed our lives profoundly, but the construction of network morals has lagged behind, leaving the technology insufficiently constrained. Some users cross the moral bottom line and make illegal profits wantonly under virtually concealed identities: manipulating personal computers through malicious programs, creating chaos with computer viruses, marketing pornographic material, violating privacy and intellectual property through internet exploits, and so on. These behaviors severely disturb ethical norms and increase public insecurity, bringing anxiety, panic and distrust to varying degrees. 3.2
The Alienation of Ethical Relations Due to Virtuality
The virtual space made up of digital signs is a new arena in which netizens connect with others via screens rather than in person. Virtual cyberspace indeed enriches the methods of communication, but it also makes it easier to confuse social identities, play wrong roles and misbehave. The consequence is that unethical phenomena occur more easily when real identities are unknown. Even moral principles are challenged in virtual space. 3.3
Irrational Public Opinion in Cyberspace Has Led to a Tearing of the Ethical Consensus
Network public opinion is the external representation of individual or group interests and emotions, and reflects the realistic demands of the subject. Precisely because of differing interests and emotional appeals, netizens may come into conflict and contradiction even over the same topic. Therefore, a seemingly common public issue on the internet may arouse discussion among different social strata and interest groups, resulting in ethical misconduct [7]. Especially against the background of the "internet age", one-sided expressions of opinion, extreme words and emotional speech spread wantonly. Irrational public opinion tends to lead to the accumulation of resentment. Some netizens go to extremes when refuting others' opinions and expressing their own emotions. They do not abide by ethical norms, violate public order and good customs, and even arbitrarily attach negative labels, which leads to
1318
D. Chen and H. Wang
the tearing of the ethical consensus in the field of online public opinion and the disorder of communication.
4 Governance of Ethical Disorder in Cyberspace 4.1
Attach Great Importance to Moral Reflection on Network Technology
Network technology is not unrelated to ethics; on the contrary, it has a profound impact on the moral ecology. Although the internet is only a platform or intermediary for people to communicate, it has deeply penetrated the daily life of the public and increasingly tests the bottom line of morality. Therefore, it is necessary to reflect morally on the development of network technology, avoiding blind belief in "technological determinism" and attempts to solve all problems by relying on technology [8]. In recent years, problems such as computer viruses, pornography, personal information leakage and frequent network violence in cyberspace stem in large part from people's blind pursuit of internet technology, which leads to the extreme expansion of instrumental rationality and, in turn, the alienation of human nature. Therefore, we should not only promote the development of network technology, but also carefully consider the potential ethical risks arising from its development, management and application, and adopt technological restrictions to guide people toward correct values, so that relationships between people in the virtual society present a harmonious ethical state. 4.2
Strengthen the Construction of Regulatory Mechanisms for Network Ethics
Many moral disorder behaviors in cyberspace are the result of imperfect mechanisms of ethical regulation. Some people take advantage of the anonymity of virtual space and, driven by the pursuit of profit, gain through irrational and illegitimate means, causing the problem of "bad money driving out good money". To promote the governance of cyberspace, we need to establish a reasonable and effective mechanism of ethical rules. First, promote the legal construction of network ethics. Order in network space is a kind of inner consciousness of order, but relying only on subjects' moral cultivation, self-supervision, sense of justice and conscience is not enough; network ethical principles also need to be raised to the legal level. Second, improve the moral incentive mechanism for network behavior, promoting good and suppressing evil through rewards and punishments. Positive incentives mainly reward and commend law-abiding, honest and responsible subjects, encouraging and guiding people to communicate online in a reasonable and proper way. Negative incentives mainly punish and denounce behaviors and phenomena such as moral misconduct and illegal crime, urging people to maintain a good network ethical order through reverse measures. Third, strengthen
the overall linkage mechanism in the network field, and limit the scope of market activities of business entities with “moral stains” [9]. 4.3
Improve the Self-purification Ability of Network Public Opinion
One important reason for the flood of irrational opinions in cyberspace is that the voice of reason is drowned out by the noise. To avoid damaging their own interests or being drawn into the vortex of public opinion, some rational commentators may choose silence in the face of provocation, declining to speak up and no longer responding after being "besieged" by public opinion. In this way, negative opinions occupy the space of public discussion, irrational voices spread, rational speech weakens and loses its space, and discussion falls into a spiral of silence [10]. Therefore, to deal with ethical disorder in the field of network public opinion, it is necessary to give play to the self-balancing role of public opinion, encourage the silent rational forces to speak out actively, publicize the positive energy and main melody of society, and guide the space of network public opinion to follow a benign ethical ecology, gradually forming a pattern of multiple checks, balances and complementarity. In particular, groups and platforms should be used to guide people to promote good and suppress evil through the publication and flow of truthful information. In other words, constructing a new ethical order in cyberspace requires improving the self-balancing ability of the ecological field of network public opinion, encouraging silent ordinary netizens to become the backbone of a clean network, and keeping public opinion in cyberspace in a relatively rational and mild state. 4.4
Actively Foster Social Capital in Cyberspace
The effective governance of virtual network space requires enhancing the cohesion of interpersonal communication; there should be "a set of informal values and norms that all members can share and that foster cooperation", which is the cornerstone of constructing an ethical order in cyberspace. A good network ethical order derives from people consciously abiding by mainstream values and sincerely recognizing basic norms such as security, respect, integrity, fairness, justice and cooperation, so as to build consensus and share in construction [11]. In virtual network space, exchanges between people break through the limits of time and space, and China's traditional ethical relationships gradually expand or shift, forming new ethical relationships. This requires adapting to the times and vigorously developing network social capital, advocating equal participation, honesty and trustworthiness, the spirit of contract, the rule of law, and fairness and justice. We should vigorously promote socialist core values and abide by conventions on cyber civilization, laying the foundation for building an ethical order on the internet.
References
1. Ren, X.: Cyber society also needs the "beauty of order". Guangming Daily, 20 August 2015
2. Zhou, H., Yu, Y.: The rationality of the ethical order. Acad. Forum (06), 56–59 (2003)
3. Wu, J., Zuo, G.: Study on the dual effects of virtual practice on subject development. J. Xinxiang Coll. 37(01), 7–11 (2020)
4. Wang, Y., Lu, Y.: The dilemma of mainstream ideology identification in cyberspace and its path innovation. Theor. Explor. (03), 49–54 (2019)
5. Liu, X.D.: On the double influence of network communication on social ethics and morality. J. Hunan Univ. (Soc. Sci.) 23, 125–130 (2009)
6. Ma, J.: Research on moral hazard caused by network group polarization and its prevention and control. Hangzhou Dianzi University of Science and Technology (2019)
7. Zhang, Y.: On the governance of network ideology in the new era. Mod. Trade Ind. 41(15), 99–100 (2020)
8. Li, Y.: Strengthening the awareness of "network community" boosts the process of network social governance. China Social Science Daily, 06 May 2020
9. Zhao, L.: The ethical order construction of cyberspace governance. Soc. Chin. Charact. (03), 85–89 (2018)
10. Zou, X.: Discursive disorder in China's cyber political space and its causes. Leadersh. Sci. Forum (05), 16–26 (2016)
11. Zhao, L.: The disorder crisis of network discourse in virtual political field and its solution. Theory Trib. (03), 14–16 (2014)
Big Data Helps the Healthy Development of Rural Electronic Commerce Dongyang Chen(&) and Yihe Liu Wuhan University of Science and Technology, Wuhan 430000, China [email protected]
Abstract. With the development of science and technology, the scale and frequency of internet use are growing steadily, and rural electronic commerce is expanding. Like the internet itself, big data has become a crucial resource for the healthy development of rural e-commerce. By studying the current situation and prospects of rural e-commerce development, this paper suggests ways to promote it against the background of big data: enhancing the endogenous motivation of farmers, increasing the strength of government guidance, and building a sustainable talent team. These measures are crucial to the development of rural e-commerce in China. Keywords: Big data
Electronic commerce Healthy development
1 Introduction China is an agricultural nation. The issues of agriculture, rural areas and farmers are not only basic issues that need to be revisited, but also strategic issues that demand serious study. Building a harmonious society in all respects in China cannot be completed without its rural areas, and neither can the difficulties be resolved without them. How to achieve high-quality development of the rural e-commerce economy in China is a major issue in urgent need of research. Recently, with the rapid expansion of the internet, rural e-commerce has flourished and has increasingly become a crucial means for farmers to increase their income and improve their quality of life. Big data is a derivative of the development of the internet, and its value and significance should not be underestimated. Some countries have positioned big data as a strategic resource. Applying big data to e-commerce is an irresistible historical trend and an important means of realizing the value of data. Research on and use of big data point out the right direction; at the same time, effectively interpreting and applying large-scale data to develop new products and discover potential customers is vital to expanding rural e-commerce in China.
© The Editor(s) (if applicable) and The Author(s), under exclusive license to Springer Nature Singapore Pte Ltd. 2021 M. Atiquzzaman et al. (Eds.): BDCPS 2020, AISC 1303, pp. 1321–1326, 2021. https://doi.org/10.1007/978-981-33-4572-0_190
2 Current Situation and Prospect of Rural E-Commerce in China 2.1
Current Situation of Rural E-Commerce in China
The development of rural electronic commerce in China has shown unprecedented explosive growth in both scale and speed. Rural e-commerce has become a vital way to transform the mode of agricultural development and promote the development of the primary, secondary and tertiary industries. It has also become an important vehicle for implementing the rural revitalization strategy and targeted poverty alleviation. There is no unified industry definition of big data; opinions vary widely. However, regarding its typical characteristics, the academic community generally agrees with the view proposed by Brian Hopkins and Boris that big data has four features: volume, variety, velocity and variability. Big data has gradually become a vital factor of fundamental significance [1]. It is no exaggeration to say that the analysis and use of big data will become the key to the healthy, high-speed and sustainable development of rural e-commerce in China. Rural e-commerce in China is developing very rapidly; in terms of coverage, e-commerce has penetrated nearly every field and corner of people's lives. In terms of number and scale, with the popularization of the internet, the number of rural users engaged in e-commerce has increased sharply, accounting for more than half of rural internet users, and will continue to rise. With more and more people participating in and using e-commerce, it has become an important way to improve quality and efficiency, improve the rural appearance, optimize the environment, increase farmers' income and improve their quality [2]. In terms of regional distribution, rural e-commerce in central and western China is growing rapidly and with strong momentum.
Because the economy of the eastern region is relatively developed, its vast rural areas were exposed to the internet and e-commerce relatively early, and the application of e-commerce also started relatively early. With the accelerating pace of digitalization and informatization, farmers in the western region are gradually accepting e-commerce and exploring a beneficial way to realize agricultural transformation and development through it. Agricultural products from the vast rural western region have distinct characteristics and corresponding comparative advantages. Using e-commerce to promote local agricultural products has received not only the support of relevant national policies, but also the general attention and promotion of local governments. Therefore, rural areas in the central and western regions not only keep pace with the eastern regions in applying e-commerce; some rural areas with distinctive agricultural products have even gained latecomer advantages in e-commerce development. From the perspective of product categories, at the beginning of e-commerce development the products traded through the internet were mainly industrial products, and many of them flowed from cities to rural areas through e-commerce channels. However, with the popularization and
development of the internet, agricultural products have increasingly become bulk commodities traded through e-commerce. Characteristic fresh agricultural products from the vast rural areas now flow to urban households through internet channels. The primary, secondary and tertiary industries in the vast rural areas have thus achieved deep integration through the development of e-commerce. Not only has agricultural transformation and development received an unprecedented boost, but the vast number of farmers have also gained opportunities to start businesses and find jobs at home through e-commerce. Their incomes have risen constantly, and their quality and abilities have been continuously improved. 2.2
The Prospect of Rural E-Commerce in China
The rapid development of electronic commerce has injected new vitality into the quality improvement, transformation and upgrading of China's rural economy, and has become a powerful driving force for realizing the rural revitalization strategy and for innovative solutions to the problems of "agriculture, rural areas and farmers". With the state's policy guidance for e-commerce development and farmers' recognition of its prospects, we can say without doubt that e-commerce will become an indispensable path for the healthy and continuous development of the rural economy in the future. The role of e-commerce in the transformation and upgrading of agriculture and in the integrated development of the primary, secondary and tertiary industries will be gradually enhanced with the continuous use of big data [3]. With the improvement of rural infrastructure and the vigorous layout of e-commerce in rural areas, the role of e-commerce in building a beautiful countryside will gradually grow. The effect of e-commerce on improving farmers' income and quality will be enhanced with the expansion of online trading volume. The popularization, development and healthy use of e-commerce are not only an inevitable trend of worldwide scientific and technological development and reform, but also China's inevitable choice in promoting the rural revitalization strategy and shoring up the weaknesses in the development of "agriculture, rural areas and farmers". The application of big data will make the development momentum of rural e-commerce stronger, its effects more evident and its prospects broader.
3 The Difficulties Faced by the Development of Rural E-Commerce in China Under the Background of Big Data At present, rural e-commerce continues to grow in scale. E-commerce enterprises and ordinary e-commerce users, while continuously obtaining economic benefits, also look forward to the convenience of effective and efficient analysis and use of big data, so as to expand the room for e-commerce development and further clarify its direction and sustainability. However, big data is not easy to obtain, nor can it be applied through simple analysis. While it provides
convenience for the development of e-commerce, it also brings some potential or substantive challenges [4]. 3.1
Data Security and Confidentiality Hinder the Development of E-Commerce
Data security and confidentiality are the first obstacles to the development of rural e-commerce. Traditional business transactions take place between two parties in a real-world setting [5]. There are few intermediate links in the bills, documents or fund transactions, and the probability of theft and disclosure is relatively small. In the virtual scenarios on which e-commerce transactions are based, however, data security and confidentiality must receive sufficient attention. The practitioners and users of rural e-commerce in China generally lack the relevant professional knowledge. Therefore, professional training on data security and confidentiality must be conducted, and sufficient attention must be paid to prevention. 3.2
Data Mining and Analysis Limit the Development of E-Commerce
The ability to mine and analyze massive data is the second key to the healthy development of rural e-commerce achieving a breakthrough in substance and quality. The value of data lies in careful interpretation and efficient use after selection. If the data obtained cannot be analyzed and then applied efficiently on the basis of that analysis, its value will be difficult to realize. Practitioners and users of rural e-commerce in China include some small and medium-sized e-commerce enterprises that lack the ability and knowledge to analyze and process big data. The limited data they obtain are fragmented and lack the fine-grained value that could guide rural e-commerce toward leapfrog, forward-looking and strategic development [6]. Enhancing the ability to analyze and apply large-scale data can not only support targeted marketing plans, but also provide a scientific basis and guidance for the healthy and sustainable development of rural e-commerce in China. Therefore, cooperation with well-known e-commerce companies and guidance from local governments are extremely important. 3.3
The Lack of Experienced Practitioners is Detrimental to the Development of Rural E-Commerce
The shortage of qualified personnel with technical ability and experience in risk prevention is a talent weakness for the healthy development of rural e-commerce. As the saying goes, "every trade has its specialists": the sound and continuous development of e-commerce requires corresponding talent guarantees and support [7]. Without the training and reserve of professional talents, leapfrog, high-quality development of e-commerce is unthinkable. Most practitioners and users of rural e-commerce in China lack the corresponding professional knowledge, which limits the effective development of e-commerce in
some places and the effective advancement of the rural revitalization strategy. Therefore, it is imperative to build a talent model that combines the introduction of outside talents with the training of local talents.
4 Construct the Healthy Development Path of Rural E-Commerce Under the Guidance of Big Data The sound development of rural e-commerce in China is not only an essential choice for promoting agricultural transformation and upgrading, but also a necessary starting point for effectively improving the appearance of villages, increasing farmers' income, enhancing rural residents' sense of gain and advancing the rural revitalization strategy. The key to the healthy development of e-commerce under the guidance of big data is to enhance the internal motivation of farmers, increase the government's guidance, and build a sustainable talent team.
Improve the Endogenous Driving Force of Farmers
Promoting the internal driving force of farmers is the basis and guarantee of the healthy development of rural e-commerce. Against the background of big data, the majority of farmers expect to achieve leap-forward development of agricultural production and sales through e-commerce, obtain corresponding economic benefits to improve their quality of life, and are willing to invest their own efforts and actions; these are their internal driving forces. Without the internal motivation of expecting development and making efforts, it is difficult to sustain e-commerce development by relying solely on external impetus [8]. Therefore, it is indispensable to take multiple measures to enhance the endogenous driving force of farmers, so that they combine their own development with local and national development and give full play to their enthusiasm and initiative. 4.2
Increase Government Policy Guidance and Business Guidance
The sound and continuous development of rural e-commerce is inseparable from the government's policy guidance and business guidance. Local governments need to adhere to big-data thinking, fully integrate the high-quality development of agriculture with the rural revitalization strategy, and fully combine it with enhancing farmers' sense of gain [9]. The government must not only give rural e-commerce its due share of policy guidance, but also provide effective guidance in the process management and outcome evaluation of rural e-commerce development. Such guidance through effective government participation is the solid backing that ensures the sound and sustainable development of rural e-commerce in China.
4.3
Reserve a Sustainable Team of Professionals
The shortage of e-commerce talents is the bottleneck and weakness of the efficient development of rural e-commerce [10]. The building of the talent team bears on the actual effect of the sound and continuous development of rural e-commerce and on the long-term interests of farmers. Only a reserve of high-level professionals can provide strong intellectual support for the healthy development of e-commerce. Therefore, constructing an efficient talent-team mechanism that organically combines the introduction of outside talents with the cultivation of local talents, in order to support the sound and continuous development of rural e-commerce in China, is a long-term basic task to which we must pay sufficient attention.
References
1. Liu, Q.: Research on the development of rural e-commerce in Guizhou province under the background of big data. Electron. Commer. (07), 11–12 (2018)
2. Zhou, Y.: Research on e-commerce platform model of agricultural products in the era of big data. Sci. Technol. Inf. (25), 15–17 (2016)
3. Zhao, L., Jiangchong, C.: Rural e-commerce development model and operation system construction. Agric. Econ. (08), 117–119 (2017)
4. Zhang, Z.: The cultivation strategy of compound logistics talents under the background of e-commerce. Mod. Trade Ind. 41(18), 20–21 (2020)
5. Yuan, L.: Research on the development strategy of rural e-commerce from the perspective of big data. Comput. Prod. Circ. (07), 142 (2020)
6. Yang, S.: The development path of e-commerce of featured agricultural products under the background of big data. Electron. Commer. (05), 26–27 (2020)
7. Zhang, L.: The combination of e-commerce and big data. New Econ. (02), 147–149 (2020)
8. Shi, L.: The application of big data technology in the field of e-commerce. Electron. World (08), 171–172 (2020)
9. Deng, W.: Countermeasure research on e-commerce service model in the age of big data. Business (12), 148–149 (2020)
10. Hon, C.: Research on the development of e-commerce industry from the perspective of big data. Value Eng. 39(11), 231–232 (2020)
Design of Optical Measurement Simulation Training System in Shooting Range Based on DoDAF Xianyu Ma1(&), Tao Wang2, Xin Guan2, Lu Zhou3, and Yan Wang1
1 91913 Troops, Dalian 116041, Liaoning, China [email protected]
2 Naval Aviation University, Yantai 264001, Shandong, China
3 92199 Troops, Qingdao 266000, Shandong, China
Abstract. Aiming at the problems of complex organizational structure, the large number of participants and the wide distribution of equipment involved in shooting range optical measurement, a structural framework for the simulation training system is proposed based on the DoDAF architecture design concept and a requirement analysis of the simulation training system. The TD-CAP software is used to describe the operational view, system view model and system structure model of the system, and a comprehensive, intuitive top-level conceptual framework of the shooting range optical measurement simulation training system is given. Through architecture verification and evaluation, the integrity and consistency of the architecture are verified. This can provide ideas and references for the construction of simulation training systems and the organization of training. Keywords: DoDAF · Shooting range · Optical measurement · Simulation training · System design
1 Introduction Optical measurement has always been an important high-precision measurement method in range weapon experiments. Its main measurement equipment includes large vehicle-mounted photoelectric theodolites, shipborne/airborne optical measurement equipment and camera UAVs. At the same time, optical measurement has the advantage of being unaffected by the "black barrier" and ground clutter interference [1–5]. In practice, the shooting range usually uses two or more theodolites and other optical measuring devices arranged at different stations to track and measure the target, and then obtains the target's external ballistic and attitude parameters through multi-station rendezvous (intersection of lines of sight). In recent years, with the continuous improvement of the informatization level of shooting range equipment and the growing number and variety of test equipment, test personnel and organizations, higher requirements have been placed on optical measurement. Meanwhile, training methods and means remain single and old-fashioned, making it difficult for the personnel and equipment of the whole optical measurement system to receive training close to actual combat conditions [6–8]. © The Editor(s) (if applicable) and The Author(s), under exclusive license to Springer Nature Singapore Pte Ltd. 2021 M. Atiquzzaman et al. (Eds.): BDCPS 2020, AISC 1303, pp. 1327–1333, 2021. https://doi.org/10.1007/978-981-33-4572-0_191
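The multi-station rendezvous mentioned above can be illustrated with a minimal sketch, not the range's actual algorithm. Assuming each theodolite station reports an azimuth (measured from north toward east) and an elevation to the target, the target position can be estimated as the midpoint of the shortest segment between the two measurement rays; the station coordinates and angle values below are purely illustrative:

```python
import numpy as np

def los_vector(azimuth_deg, elevation_deg):
    """Unit line-of-sight vector in a local East-North-Up frame."""
    az, el = np.radians(azimuth_deg), np.radians(elevation_deg)
    return np.array([np.sin(az) * np.cos(el),   # East
                     np.cos(az) * np.cos(el),   # North
                     np.sin(el)])               # Up

def intersect(p1, d1, p2, d2):
    """Least-squares 'rendezvous': midpoint of the shortest segment
    between two (generally skew) measurement rays p_i + t_i * d_i."""
    # Normal equations for t1, t2 minimizing |(p1 + t1*d1) - (p2 + t2*d2)|
    A = np.array([[d1 @ d1, -(d1 @ d2)],
                  [d1 @ d2, -(d2 @ d2)]])
    b = np.array([(p2 - p1) @ d1, (p2 - p1) @ d2])
    t1, t2 = np.linalg.solve(A, b)
    return ((p1 + t1 * d1) + (p2 + t2 * d2)) / 2.0

if __name__ == "__main__":
    # Two hypothetical stations 1000 m apart along the east axis
    p1, p2 = np.array([0.0, 0.0, 0.0]), np.array([1000.0, 0.0, 0.0])
    est = intersect(p1, los_vector(26.565, 15.02),
                    p2, los_vector(-26.565, 15.02))
    print(est)  # approximately [500, 1000, 300]
```

With real measurements the two rays rarely intersect exactly, so the residual distance between them also serves as a rough consistency check on the angle readings.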
Based on the Department of Defense Architecture Framework (DoDAF) Version 2.0, a simulation training system for optical measurement in the shooting range is designed using the TD-CAP architecture design software. The DoDAF structure description models of the simulation training system give a top-level, comprehensive description of the requirements for the optical measurement equipment in the shooting range, which has certain guiding significance for the construction of the training system.
2 The Logical Architecture of the Intelligent Equipment Support System

The construction of the shooting range optical measurement simulation training system first requires analyzing and building the measurement task requirements, description models and mapping correlation matrices from the different perspectives of the optical measurement equipment system; the resulting architecture is then verified, evaluated and optimized, and this cycle is repeated and gradually refined until the overall construction of the architecture is complete [9–13] (Fig. 1).

Step 1 Task requirements analysis. By analyzing the mission and tasks of the shooting range optical measurement simulation training system, the implementation environment of the training system and the main problems to be solved are defined, which lays the foundation for the design of the system.

Step 2 Full view and capability view architecture description. Build the AV-1 and AV-2 models of the full view of the system to define the overall scope and background information of the architecture description; build the CV-1, CV-2, CV-3, CV-4 and other capability perspective models, which help architecture builders reduce the risk of system construction and provide a visualization means for system capability building.

Step 3 Operational perspective architecture description. Based on the analysis of the mission requirements and the related full view and capability view models, the operational view description models OV-1, OV-2, OV-4, OV-5 and OV-6 are constructed. The capability view definitions and the various capabilities of the system are analyzed, and the interaction information of the test elements and the resource flows of the system are described, providing a reference and technical support for system construction.

Step 4 System perspective architecture description. Build the SV-5 test activity to system function tracking matrix to determine the mapping between the OV-5 test activity model and the system functions; build the SV-2 and SV-4 models to accurately describe the connections between the subsystems of the simulation training system; build the SV-1 model to analyze the operational and system resource structure flows from the top level.

Step 5 Build the association analysis model. Through the OV-3 model, the needline ("demand line") is used to connect all participating organizations and personnel with the test resource flows. Through the CV-5 capability-organization mapping model and the CV-6 capability-activity mapping model, the dynamic relationships between the relevant OV and CV perspective models are established, providing a dynamic mapping environment for implementing the executable model of the architecture.

Step 6 Comprehensive evaluation and optimization of the architecture. The integrity, consistency and rationality of the model are verified by methods such as the verification and evaluation functions of the architecture development software and analysis of the executable model. Evaluation methods such as the value-centered method, system effectiveness evaluation, simulation models and architecture trade-off analysis are used to comprehensively evaluate the architecture and its effectiveness and to continuously improve and optimize them to meet user needs.

Fig. 1. Construction steps of the simulation training system structure model

Design of Optical Measurement Simulation Training System
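As an illustration of the SV-5 activity-to-function mapping produced in the system perspective step, the tracking matrix can be represented as a simple data structure and checked for coverage. This is a minimal sketch, not part of the paper's TD-CAP toolchain; all activity and function names are hypothetical.

```python
# Illustrative SV-5 tracking matrix: OV-5 test activities mapped to SV-4
# system functions, with simple coverage checks. Names are assumptions.

# OV-5: test activities of the optical measurement simulation training system
activities = ["task_planning", "detection_tracking", "optical_measurement", "effect_evaluation"]

# SV-4: system functions
functions = ["mission_management", "target_tracking", "image_measurement", "scoring"]

# SV-5: activity-to-function tracking matrix (activity -> supporting functions)
sv5 = {
    "task_planning": ["mission_management"],
    "detection_tracking": ["target_tracking"],
    "optical_measurement": ["target_tracking", "image_measurement"],
    "effect_evaluation": ["scoring"],
}

def uncovered_activities(activities, sv5):
    """Return OV-5 activities that no system function supports."""
    return [a for a in activities if not sv5.get(a)]

def unused_functions(functions, sv5):
    """Return SV-4 functions never referenced by the SV-5 matrix."""
    used = {f for funcs in sv5.values() for f in funcs}
    return [f for f in functions if f not in used]

print(uncovered_activities(activities, sv5))  # []
print(unused_functions(functions, sv5))       # []
```

A check of this kind is one concrete way the mapping correlation matrices mentioned above can support consistency analysis between perspectives.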
3 Modeling of the Equipment Architecture for the Shooting Range Optical Measurement Simulation Training System

3.1 Full View Model Construction

Build the profile information model AV-1, which describes the basic profile information of the structural design of the shooting range optical measurement simulation training system; the model is presented in tabular form, as shown in Table 1.

Table 1. General information of the shooting range optical measurement simulation training system (AV-1)

Number | Outline | Concrete content
1 | Background | For a test missile launched in the simulated sea direction, the test director group commands all optical measurement units, which cooperate with each other through the communication network to carry out optical measurement and observation
2 | Objective | The main observation object of the system is the test missile launched at sea; the simulated flight trajectory, attitude, speed and other parameters of the missile are obtained, and the test personnel and equipment are trained
3 | Restrictions | The construction process must comply with the relevant test rules, regulations and operating procedures, respect the limitations of the natural environment of the shooting range, and take into account the existing personnel and equipment conditions
4 | Model selection | The total number of architecture perspective models is 8
5 | Conclusion | The design scheme is basically feasible, as confirmed by self-assessment and expert review

3.2 Model Construction of the Operational Perspective
1) Advanced test concept diagram model (OV-1). Based on the requirement analysis of the test tasks of the shooting range optical measurement simulation training system, the high-level test (operational) concept model OV-1 is defined [14]. It describes the main modes of action by which the system completes test tasks such as detection, tracking and observation of the test targets, including the test units and test targets in the system and the information and data interactions between them, as shown in Fig. 2.
Fig. 2. Advanced test concept diagram of range optical measurement system (OV-1)
2) Test activity model (OV-5b). The test activity model OV-5b of the shooting range optical measurement simulation training system is a test activity sub-diagram constructed from the OV-1 model, as shown in Fig. 3. The OV-5b model describes the input/output flows between test activities and is complementary to the OV-2 model.

3) Test resource flow model (OV-2). The main purpose of the OV-2 model is to determine the information flows of the shooting range optical measurement simulation training test. It can also be used to define the concept of the optical measurement test, describe the required optical measurement capabilities in detail, draw up the test plan and allocate test activities to resources.

4) Organizational relationship model (OV-4). The organizational relationship model OV-4 mainly describes the organizational structure and command relationships in terms of job positions. Its main purpose is to analyze the organizational structure of the simulation training system, define the positions of the relevant test personnel and analyze the organizational process of the optical measurement test activities.

5) Test event tracking model (OV-6c)
[Fig. 3 depicts the test activity flow: the test director group issues detection and tracking instructions and optical measurement task planning; radar detection and tracking of the simulated test missile target yields sea-based and shore-based joint detection and tracking information; each platform performs optical measurement and reports its operation status and natural conditions; the measurement results of each platform are scored, the optical measurement effect is evaluated, and feedback is returned to the director group.]

Fig. 3. Activity test model of range optical measurement system - subgraph (OV-5b)
OV-6c further refines the OV-5 model and provides an event analysis of the test process. The OV-6c model divides the tasks of the test activities into scenario-level contents [15] and constructs a model for each scenario; it completely describes, with timing and time attributes, the interactions of the optical test activities between scenarios, thereby capturing the dynamic behavior of the system when "successfully executing an optical measurement task".

3.3 System Perspective Model Construction
1) System interface model (SV-1). The SV-1 model is based on the OV-1 model and is constructed after analyzing the composition and interfaces of the shooting range optical measurement simulation training system. It is mainly used to identify and mark the interaction relationships and resource flows across the interfaces between each system and its subsystems, such as detection and tracking, command and control, the optical measurement platforms and the test resources in the simulation training system.

2) System function decomposition model (SV-4a). SV-4a corresponds to the OV-5b model. It mainly describes the system functions of the shooting range optical measurement simulation training system, using hierarchical functional classification to decompose each function to the appropriate
granularity [16]. The functions of the shooting range optical measurement simulation training system are divided into five functions, and each function is further decomposed into sub-functions according to the granularity requirements of the different test models.

3.4 Architecture Verification and Evaluation Analysis
The verification and evaluation of the shooting range optical measurement simulation training system architecture starts from the overall description models OV-1 and SV-1, selects the physical architecture model OV-5b and the logical architecture model OV-2, transforms the model data and generates an executable model for verification [17]. Finally, the architecture development software is run to generate test result instances, which show that the test data information of the simulation training system architecture is consistent and complete with respect to the system data information.
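One kind of consistency check that an executable-model run can automate is verifying that a recorded test event trace respects the activity order defined in the architecture model. The following sketch is illustrative only; the event names are hypothetical and not drawn from the actual model data.

```python
# Hedged sketch: check that an executable-model event trace follows the
# activity order defined in the architecture. Event names are assumptions.

defined_order = ["task_planning", "detection_tracking", "optical_measurement", "effect_evaluation"]

def trace_is_consistent(trace, order):
    """True if the events in `trace` appear in non-decreasing model order."""
    rank = {name: i for i, name in enumerate(order)}
    indices = [rank[e] for e in trace if e in rank]
    return all(a <= b for a, b in zip(indices, indices[1:]))

ok_trace = ["task_planning", "detection_tracking", "optical_measurement", "effect_evaluation"]
bad_trace = ["optical_measurement", "task_planning"]
print(trace_is_consistent(ok_trace, defined_order))   # True
print(trace_is_consistent(bad_trace, defined_order))  # False
```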
4 Conclusion

In this paper, starting from the test task requirements of the shooting range optical measurement simulation training system and based on the DoDAF 2.0 architecture framework standard, the architecture model of the system is constructed with the TD-CAP architecture modeling tool. According to the actual requirements, selected OV and SV description models are used to describe the architecture and the relevant perspective description models are completed; the architecture development software and executable-model verification and evaluation methods are applied to verify the rationality, integrity and consistency of the architecture and to clarify the logical mappings and information interactions among the perspective models. This work can provide ideas for the construction of future shooting range optical measurement simulation training systems and support the development and construction of future shooting range optical measurement systems.
References 1. Zhou, J.: Study on Optical Measurement Techniques Based on Unstable & Moving Platforms in Shooting Range. National University of Defense Technology, Changsha (2012). (in Chinese) 2. Wang, C.L., Mi, Y., Yu, X.B.: Design of emulation system for optical measuring equipment in shooting range. Measur. Control Technol. 27(4), 76–78 (2008). (in Chinese) 3. Zhou, H., Zhao, M.Q.: Study on correction of atmospheric refraction error of optical measurement data in navy range. Comput. Simul. 29(9), 6–9 (2012) 4. Fu, X.: Optical Design and Realization of Distributed Laser Gas Measurement and Analysis System. College of Optoelectronics Engineering of Chongqing University, Chongqing (2017). (in Chinese)
5. Gao, X., Wang, J.L., Zhao, J.Y., et al.: Application of phase diversity technology in range optics equipment. J. Spacecraft TT&C Technol. 31(3), 27–30 (2012)
6. Xiong, Z.H.: Application of high-speed pickup in measurement system for shooting range. Chin. Measur. Test 38(1), 82–89 (2012). (in Chinese)
7. Zhao, Z.X.: Research on Technology for Motion Parameters Measurement Using Linear Array Optical Image and Its Applications. National University of Defense Technology, Changsha (2012). (in Chinese)
8. Zhang, J., Li, J.C., Zhang, Y.F., et al.: Application of camera calibration using LSSVM in range measurement. Opto-Electron. Eng. 38(10), 20–26 (2011). (in Chinese)
9. Zhang, M.M., Chen, H.H., Mao, Y., et al.: An approach to measuring business-IT alignment maturity via DoDAF 2.0. J. Syst. Eng. Electron. 31(1), 95–108 (2020)
10. Yang, W.J.: Research on weapon and equipment requirement analysis method based on DoDAF. In: Jilin Province Science and Technology, pp. 102–104 (2019)
11. Mye, S., Sung, J., Taehoon, K., et al.: Development supporting framework of architectural descriptions using heavy-weight ontologies with fuzzy-semantic similarity. Soft Comput. 21(20), 6105–6119 (2017)
12. Ronald, E.G.: Evaluation of the DoDAF meta-model's support of systems engineering. Procedia Comput. Sci. 61, 254–260 (2015)
13. Matthew, H., Lars, K.: All for the want of a horseshoe nail: an examination of causality in DoDAF/MODAF. INCOSE Int. Symp. 24(1), 535–550 (2014)
14. Andre, B., Luz, T.C., Dario, J.: Executable architecture based on system dynamics: an integrated methodology composed by standard system dynamics modelling and DoDAF operational view models. Procedia Comput. Sci. 36, 87–92 (2014)
15. Thorisdottir, A.S., Julia, E., et al.: Factor structure and measurement invariance of the alcohol use disorders identification test (AUDIT) in a sample of military veterans with and without PTSD. Subst. Use Misuse 55(8), 1370–1377 (2020)
16. Callina, K.S., Burkhard, B., Schaefer, H.S., et al.: Character in context: character structure among United States Military Academy cadets. J. Moral Educ. 48(4), 439–464 (2019)
17. George, M., Athos, A., Kyriacors, T., et al.: Field spectroscopy for the detection of underground military structures. Taylor Francis 52(1), 385–399 (2019)
Equipment Data Integration Architecture Based on Data Middle Platform

Qi Jia1,2(&), Jian Chen3, and Tie-ning Wang1

1 Army Armored Academy, Beijing 100072, China
[email protected]
2 Security Department of Northern War Zone Army, Jinan 250000, China
3 78090 Troops of PLA, Chengdu 610031, China
Abstract. This paper proposes a solution for real-time data integration of complex equipment in the cloud computing environment, covering full life-cycle management of data collection, preprocessing, integration, storage and intelligent analysis services. It first analyzes the development status of existing equipment data integration and the serious challenges it faces, and puts forward the development direction that data comes from business and should in turn serve business. It then analyzes the advantages of the data middle platform for data processing in cloud computing, the development prospects of edge computing integrated with intelligent processing, and the key technologies for combining edge computing with the data middle platform. Finally, according to the application requirements of equipment management and the characteristics of business-data fusion, a design scheme for a data middle platform based on edge computing is proposed, and an equipment data integration architecture based on the data middle platform is designed.

Keywords: Data center · Edge computing · Data integration
1 Introduction

The military transformation construction puts forward higher requirements for the reliability and effectiveness of equipment and materiel support, and building an advanced, applicable equipment data integration processing architecture is an effective basic means of improving military materiel support capability [1]. In the process of data integration, how to establish a reasonable data dictionary system, build a reasonable and effective data integration architecture, and make full use of the professional advantages of existing information processing is an urgent problem to be solved. This paper analyzes existing data middle platform technology in cloud computing and edge computing technology, and then, combining the characteristics of equipment data application requirements, builds an equipment data integration architecture based on the data middle platform.
© The Editor(s) (if applicable) and The Author(s), under exclusive license to Springer Nature Singapore Pte Ltd. 2021 M. Atiquzzaman et al. (Eds.): BDCPS 2020, AISC 1303, pp. 1334–1338, 2021. https://doi.org/10.1007/978-981-33-4572-0_192
2 Data Middle Platform

As data and business become ever more tightly combined, upper-layer applications place increasingly high requirements on data accuracy, security and real-time performance, which in turn demands more accurate, faster and more economical data management capabilities [2]. Data comes from business and should be fed back to provide more effective data forms for business processing, so data and business are increasingly closely linked [3]. Against this background, a new idea of data processing has been put forward: take business applications as the service goal of data governance, take data capitalization as the direction of data processing, provide data sharing services through unified, standardized access interfaces, complete the integration of data and business through the construction of an intermediate processing platform, and achieve rapid response to user needs [4]. The core of this idea is the intermediate data processing platform, namely the data middle platform. The data middle platform is the bridge between background data and foreground business application scenarios. Specifically, in the equipment support cloud platform, equipment data is processed and encapsulated into public data products according to preset business data models, and a unified access interface is provided to realize data-driven business innovation.
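The idea of encapsulating processed data as a public "data product" behind one uniform access interface can be sketched minimally as follows. The class, field and record names are illustrative assumptions, not the paper's actual platform API.

```python
# Minimal sketch of a data middle platform "data product": raw business data
# is processed once into a shared product exposed through a uniform interface.

class DataProduct:
    """A packaged, reusable data set exposed through a uniform interface."""

    def __init__(self, name, records):
        self.name = name
        self._records = records  # assumed already cleaned by the platform

    def query(self, **filters):
        """Uniform access interface: filter records by exact field match."""
        return [r for r in self._records
                if all(r.get(k) == v for k, v in filters.items())]

# The platform encapsulates raw equipment records into one shared product,
# so each foreground application does not repeat the processing work.
raw = [
    {"equipment": "theodolite-1", "status": "ok"},
    {"equipment": "theodolite-2", "status": "fault"},
]
product = DataProduct("equipment_status", raw)
print(product.query(status="fault"))  # [{'equipment': 'theodolite-2', 'status': 'fault'}]
```

The design point is that every consumer goes through the same `query` interface rather than touching the underlying storage, which is what enables the rapid response to user needs described above.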
3 Edge Computing

Edge computing is a new computing mode that is essentially a form of distributed computing. By deploying data processing mechanisms at the network edge close to the data source, edge computing can store, compute on and apply data according to the application scenario, so as to meet users' real-time response requirements for data [5]. Cloud computing integrates and shares physical and virtual resources to provide elastic services for upper-layer applications; its focus is non-real-time, long-period data processing and analysis. As data volume and complexity grow, the network load increases, which seriously affects the operation of the cloud computing platform and makes it difficult to meet users' real-time response requirements. Edge computing focuses on real-time, short-period data analysis and scenario-based decision-making applications. By sinking the core functions of data processing to the edge near the data source, it can effectively improve the timeliness and accuracy of data processing while ensuring security [6]. Therefore, introducing edge computing into the cloud computing platform, as an extension and expansion of cloud computing's data processing means, can meet the decision analysis and data requirements of a variety of application scenarios. The implementation of edge computing focuses on two aspects: the means of edge node collaboration, and the deployment and implementation of edge nodes [7]. At present, the most important means of edge node collaboration is cloud-edge collaboration. Edge nodes are deployed on the real-time data acquisition network side, such as at various sensors, and are responsible for the real-time collection, collation, analysis and encapsulation of local data. The cloud is responsible for model training,
data aggregation, data analysis, algorithm updates, etc. The edge node and the cloud communicate bidirectionally according to a predetermined network protocol [8–10]. Generally, the cloud sends customized data models down to the edge nodes. Depending on the application scenario, an edge node either analyzes and processes data locally and provides decision information to the user directly, or encapsulates all or part of the locally processed data and transmits it to the cloud, providing data resources for the data center.
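The cloud-edge division of labour described above can be sketched as a small two-function pipeline: the edge cleans and aggregates raw sensor samples locally and uploads only a compact summary, while the cloud side (stubbed here as a simple sink) would handle aggregation and model training. The payload format and function names are assumptions for illustration.

```python
# Hedged sketch of cloud-edge collaboration: local preprocessing at the edge,
# upload of the compact summary to the cloud. Payload format is an assumption.

def edge_preprocess(samples):
    """Run at the edge: drop invalid readings and aggregate the rest."""
    valid = [s for s in samples if s is not None and s >= 0]
    if not valid:
        return None
    return {
        "count": len(valid),
        "mean": sum(valid) / len(valid),
        "max": max(valid),
    }

def upload_to_cloud(payload, sink):
    """Stand-in for the edge-to-cloud link: append the summary to `sink`."""
    if payload is not None:
        sink.append(payload)

cloud_store = []
upload_to_cloud(edge_preprocess([3.0, -1.0, None, 5.0]), cloud_store)
print(cloud_store)  # [{'count': 2, 'mean': 4.0, 'max': 5.0}]
```

Only the three-field summary crosses the network, which is the mechanism by which edge computing reduces the network load on the cloud platform.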
4 Data Integration Architecture

The equipment data integration architecture based on the data middle platform is shown in Fig. 1. The data middle platform sits between the cloud bottom layer and the data application layer. As the channel through which data is uploaded and released, it realizes intelligent processing and unified encapsulation of data for business needs. Equipment data is collected and preprocessed at the bottom layer and then gathered into the data center. After being highly abstracted through conversion, aggregation, sharing and exchange in the data center, it is encapsulated into new data processing forms closely related to the equipment support business, which greatly reduces the data processing workload in business processing and enables rapid response to user needs. The equipment data processing process based on the data center is as follows:

Data Aggregation. Data aggregation is the entrance for data access in the data center. Data is generated in business systems, logs, files, the network, etc. These data are scattered across different storage platforms in the network environment, are poorly utilized, and struggle to generate business value. The function of data aggregation is to collect and store data from heterogeneous networks and heterogeneous data sources conveniently and accurately, by means such as database synchronization and web crawlers, so as to build the material basis for subsequent data processing.

Data Development. Data aggregation implements data preprocessing and initial integration, but the logical relationships between data have not yet been mined. Data development consists of a set of data processing and control modules that can quickly process data into forms with actual business value.
The data development stage provides offline, real-time and algorithm development tools, as well as a series of integrated tools for task management, code release, operation and maintenance, monitoring, alarms, etc., which are convenient to use and improve efficiency.

Build the Data System. The data system is the flesh and blood of the data center: development, management and use all revolve around data. With the advent of the big data era, business and data are becoming ever more closely linked. A complete, standardized and unified global data system must be built from the strategic top level, generally in accordance with the layers of source data, unified warehouse, label data and application data.

Data Asset Management. Data asset management presents data in a way that business personnel can better understand. Subject to access-rights and security-control requirements, data asset management mainly covers the data asset directory, metadata,
data quality, data lineage, data life cycle, etc., managing and displaying them so as to present data assets in a more intuitive and concise way and improve the data awareness of business personnel.

Data Service System. The data service system turns data into a service capability. Through data services, data can participate in business and activate the whole data middle platform; the data service system is where the value of the data middle platform lies, and it needs to be customized according to the business. The service module mainly provides rapid service generation, service management and control, authentication, metering and other functions.
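The staged flow described above (aggregation, development, data system, asset management, service) can be reduced to a compact three-step sketch: gather scattered records, process them into a business-ready form, and answer queries from the result. All function names, fields and thresholds are illustrative assumptions, not the paper's actual design.

```python
# Illustrative three-stage reduction of the data center processing flow.
# Field names and the wear threshold are assumptions for the example.

def aggregate(sources):
    """Data aggregation: pull scattered records into one store."""
    return [rec for src in sources for rec in src]

def develop(records):
    """Data development: process raw records into a business-ready form."""
    return [{"id": r["id"], "level": "high" if r["wear"] > 0.7 else "normal"}
            for r in records]

def serve(products, id_):
    """Data service: answer a business query from the processed products."""
    return next((p for p in products if p["id"] == id_), None)

# Two heterogeneous sources feed the pipeline once; the service layer then
# answers queries without each application reprocessing the raw data.
src_a = [{"id": "eq-1", "wear": 0.9}]
src_b = [{"id": "eq-2", "wear": 0.2}]
products = develop(aggregate([src_a, src_b]))
print(serve(products, "eq-1"))  # {'id': 'eq-1', 'level': 'high'}
```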
Fig. 1. Equipment data integration architecture based on the data middle platform
5 Conclusion

Starting from the background of equipment data application, this paper constructs an equipment data integration architecture based on the data middle platform, providing a reference for equipment data integration processing in the cloud computing environment. Future work will focus on optimizing the data processing flow and the personalized design of data applications.
References
1. Li, W.J., Yang, X.Q.: A review of equipment support information system integration. Mil. Oper. Res. Syst. Eng. 32(2), 55–61 (2018)
2. Tan, H.: Explain on ODPS of Ali. Chin. Inform. Wkly. 014, 10–28 (2019)
3. Wang, T., Chen, M.J., Zhao, H.Y., et al.: Estimating a sparse reduction for general regression in high dimensions. Stat. Comput. 28(1), 33–46 (2018)
4. Stergiou, C., Psannis, K.E., Kim, B.G., et al.: Secure integration of IoT and cloud computing. Future Gener. Comput. Syst. 78, 964–975 (2018)
5. Zhang, J.-L., Zhao, Y.-C., Chen, B., Hu, F., Zhu, K.: Survey on data security and privacy-preserving for the research of edge computing. J. Commun. 39(3), 1–21 (2018)
6. He, X.-L., Ren, Z.-Y., Shi, C.-H., Cong, L.: A cloud and fog network architecture for medical big data and its distributed computing scheme. J. Xi'an Jiaotong Univ. 50(10), 71–77 (2016)
7. Xu, E.-Q., Dong, E.-R.: Analysis of nine application scenarios of cloud-edge collaboration. Commun. World 21, 42–43 (2019)
8. Zhang, C., Fan, X.-Y., Liu, X.-T., Pang, H.-T., Sun, L.-F., Liu, J.-C.: Edge computing enabled smart grid. Big Data Res. 5(2), 64–78 (2019)
9. Wang, Y., Chen, Q.-X., Zhang, N., Feng, C., Teng, F., Sun, M.-Y., et al.: Fusion of the 5G communication and the ubiquitous electric internet of things: application analysis and research prospects. Power Syst. Technol. 43(5), 1575–1585 (2019)
10. Zhang, X.-Z., Lu, S.-D., Shi, W.-S.: Research on collaborative computing technology in edge intelligence. AI-View 5, 55–67 (2019)
Analysis of PBL Teaching Design for Deep Learning

Yue Sun1, Zhihong Li1, and Yong Wei2(&)

1 Longgang School, Shanghai Foreign Studies University, Shenzhen 518116, People's Republic of China
2 Software School, Shenzhen Institute of Information Technology, Shenzhen 518172, People's Republic of China
[email protected]
Abstract. This paper discusses deep learning oriented PBL teaching design from the aspects of innovating the PBL teaching method, promoting characteristic courses, integrating interdisciplinary content to form course resources, and exploiting the advantages of information technology applications.

Keywords: Deep learning · PBL teaching design · Effective thinking
1 Introduction

There is a certain tendency toward shallow learning in the implementation of the new curriculum reform. In order to break free of the shackles of pure knowledge transfer, cultivate depth of thinking, and effectively overcome the shortcomings of mechanized, fragmented, shallow learning [1], many teachers have carried out in-depth research. The concept of deep learning originates from research on artificial neural networks. Different from traditional shallow learning, it emphasizes the depth of the model structure, clarifies the importance of feature learning, optimizes classroom teaching design, and points to the core competencies of the subject, so that students learn to think actively and become masters of the classroom [2]. To improve the effect of PBL teaching design based on deep learning, the author consulted the relevant literature on deep learning from junior to senior school, as well as the status of overseas PBL research, for guidance. Designing a school teaching environment for deep learning changes students' view of learning: complex, deep-seated, multi-dimensional meaning construction activities can provide a reference method for developing students' subject core literacy [3]. The flipped classroom gives full play to the advantages of information technology: teachers push the learning task list to students before class, and deep learning is realized through the perception, experience, inquiry, demonstration, application and expansion of knowledge, thereby promoting students' deep learning and improving both the quality of teachers' teaching and the efficiency of students' learning [4, 5].
© The Editor(s) (if applicable) and The Author(s), under exclusive license to Springer Nature Singapore Pte Ltd. 2021 M. Atiquzzaman et al. (Eds.): BDCPS 2020, AISC 1303, pp. 1339–1344, 2021. https://doi.org/10.1007/978-981-33-4572-0_193
Senior school biology courses have likewise drawn on deep learning theory in the practical design of review teaching, so as to achieve better review results and improve students' higher-order thinking ability and subject core literacy [6]. The PBL teaching mode stimulates students' enthusiasm and helps them build their own knowledge. PBL is not an easy teaching method: achieving the project goal depends to a great extent on the guidance of the instructor [7]. Based on previous research results, and taking the project-based learning activity "campus farm" in the author's school as an example, this paper summarizes research on PBL teaching and deep learning, points out how to strengthen the efficiency of deep learning oriented PBL teaching design and the matters that require attention when designing projects, and looks forward to future research directions. In the implementation of the new curriculum reform there is a certain tendency toward shallow learning, that is, superficial learning styles, superficial communication between teachers and students, and hollow teaching objectives. The critical learning mode of deep learning is concerned not only with the state of students' autonomous learning and learning for practical use, but also with how effectively students solve complex problems, thereby supplementing the shortcomings of shallow learning. The teaching design of deep learning should be carried out comprehensively around target orientation, content selection and teaching evaluation. The following sections give the author's analysis of and suggestions on how to strengthen the efficiency of deep learning oriented PBL teaching design.
2 Deep Learning and the PBL Teaching Method

2.1 Deep Learning
So-called deep learning aims to build a valuable learning system: to understand, analyze and apply on the basis of memorization, to absorb new knowledge critically against one's existing cognitive level, to create connections between knowledge points, and to work out measures for dealing with problems through deep inquiry and thinking. Deep learning can break through the current trend of mechanization and fragmentation in subject teaching, which is conducive to improving teaching quality.

2.2 PBL Teaching Method
PBL is a project-based teaching mode. The term "project" embodies a strong institutional nature and derives from Dewey's educational philosophy. From the beginning, PBL emphasized the central position of students, that is, students participate fully in learning activities, which has promoted the further development of the project-based teaching mode. PBL has since been supported by several theoretical strands, including humanistic (human-centered) theory and constructivism. From the humanistic perspective, PBL holds that non-intellectual factors with emotion at their core play a dynamic role, and it emphasizes the formation and cultivation of creativity.
The PBL mode encourages students to think in multiple directions, to recognize natural things from all angles while integrating them into a coherent whole, to handle new situations and problems flexibly, and thereby to cultivate creativity. According to constructivist theory, students, as the main processors of information, are also the active constructors of knowledge. Knowledge is not indoctrinated by teachers; it is the result of students' communication and discussion in particular situations. The teacher is required to be both a good transmitter of knowledge and a guide for students' knowledge construction. Under the PBL teaching mode, teachers should set up good learning situations for students, expand the knowledge to be learned, guide students to discover and innovate their own learning methods, and fully highlight students' central position through novel teaching concepts, thereby achieving the goals of the teaching design.
3 Basic Thinking of PBL Teaching Design for Deep Learning

PBL is a learning method that emphasizes experience. In line with the goals of deep learning oriented PBL teaching design, and building on investigation and problem-solving activities designed around partially ill-structured problems from real life, the designer is required to situate learning in complex problem scenarios, guide students to think about the problems cooperatively, and help them absorb the scientific knowledge hidden behind the problems. The purpose of such activities is to cultivate students' awareness and ability of autonomous learning, involving cooperation with others, access to information, independent decision-making, critical thinking and problem-solving. PBL teaching focuses on introduction, discussion and evaluation: the introduction mainly covers learning objectives and learning suggestions; the discussion centers on interaction around the relevant issues, with the teacher responding to and inspired by students' questions; the evaluation addresses the project research process and results of the PBL learning activities. In implementing project tasks, students should be encouraged to cooperate and explore independently. While students construct basic knowledge during task processing, the difficulty lies in calibrating the task load for heterogeneous students: problems that are too hard intimidate students, while problems that are too small offer no challenge. In PBL teaching design under the deep learning mode, attention should therefore be paid to how tasks are arranged for heterogeneous students. In addition, deep learning oriented PBL teaching design should include three parts: task design, task teaching and effect evaluation.
Among them, task design is divided into general tasks, promotion tasks and creation tasks; task teaching means answering questions, coaching students and teaching effectively during the project research; effect evaluation means assessing students' learning after they submit the completed project results and summarize their experience of the research process. The evaluation of PBL should be process-oriented, multi-level and multi-angle. It should differ from the "one test decides everything" concept: rather than judging only by the project results, it should also emphasize students' practical operation ability, introduce assessment items for students' own abilities, and handle the assessment indicators carefully, so as to evaluate students' learning process and learning effect scientifically.
1342
Y. Sun et al.
The assessment under the PBL teaching mode should combine formative evaluation and process evaluation to ensure that the curriculum evaluation is sufficiently comprehensive. Students can complete general project research tasks on their own and selectively feed back what they have learned to teachers in the form of project reports; for more complicated project research tasks, students can be encouraged to use modern information technology to summarize the research process and results and submit them to teachers for inspection, and teachers can share excellent work on the network to stimulate students' desire for knowledge and independent thinking, guide students to accumulate learning experience in interaction, and improve their learning ability and comprehensive quality. For example, in the teaching of "campus farm", PBL under the deep learning mode can encourage students to think in groups, practice and research the campus farm project, collect data independently, and give full play to their imagination and life experience; students tell others what they have learned in public presentations, and, based on statistical summaries of the data, they can create new farm-vegetable application software or donate the vegetables they cultivate to poor areas. Through the PBL teaching method, students build a spirit of team cooperation, divide the work independently, clarify the tasks each individual should complete in the project research, and fully experience the research process while acquiring the corresponding knowledge and experience. In this way, students are immersed in deep learning and research and fully participate in the practical activities of the "campus farm", achieving the teaching design goal of improving comprehensive quality.
4 Thinking of PBL Teaching Design for Deep Learning
4.1 Innovative PBL Teaching Method
The PBL teaching method for deep learning requires teachers to rebuild their teaching plans, complete interdisciplinary learning activities and meet the challenges of the times. The previous teaching mode should be reformed and innovated, emphasizing the improvement of students' practical and interpersonal communication abilities, training students' creative and critical thinking, ensuring that students maintain a positive attitude and full enthusiasm under the PBL teaching mode, and demonstrating the attraction and influence of the classroom on students.
4.2 Promoting Characteristic Courses
The education department can promote characteristic PBL courses in every region, take students' existing learning level as the basis, start with teaching and research, develop a characteristic PBL course system, provide demonstration and learning bases for schools, train all students, and promote PBL teaching content better suited to cultivating students' comprehensive quality, so as to attract students' attention and promote the deep development of education.
Analysis of PBL Teaching Design for Deep Learning
1343
4.3 Integration of Interdisciplinary Content to Form a Curriculum Resource Library
In designing the PBL teaching mode for deep learning, it is necessary to design research projects in combination with the actual situation, organize students to sum up the experience they gain in dealing with problems, promote the improvement of students' comprehensive quality through the PBL teaching method, identify the factors affecting students' deep learning, improve the existing teaching system, and thereby form a high-quality curriculum resource library. The school should construct a characteristic curriculum structure, introduce innovative PBL teaching practice ideas, adhere to a people-centered concept of education, commit to forming students' comprehensive quality with the help of new and high technology, meet the needs of the development of the times, ensure that students can grow into the backbone of social construction, and better break through the limitations of talent development in the 21st century.
4.4 Give Full Play to the Advantages of Information Technology Application
With the development of network technology, teaching design in the network environment has gradually become a focus of teaching reform. However, owing to objective constraints in actual teaching, including the authenticity of the teaching environment and the lack of infrastructure, the PBL teaching concept has not been widely promoted in schools, and teaching design has been hindered. Compared with the earlier task-based and case teaching methods, the PBL teaching method can better demonstrate its value in the Internet era: Internet technology can broaden students' learning horizons, enrich teaching resources, shorten students' learning cycle, and achieve twice the teaching effect with half the effort. The PBL teaching method can thus be fully liberated from traditional teaching ideas and give more vigor and vitality to classroom teaching.
5 Conclusion
It is of great significance and practical value to carry out the project of PBL teaching design and analysis for deep learning. The concept of deep learning is expected to break through the traditional indoctrination teaching mode, enrich the teaching content and improve the teaching quality. The PBL teaching method is a new type of teaching method that stands out at the "task" level: it makes the learning objectives clear to students, changes students' thinking through project research, arouses students' enthusiasm through projects they are interested in, and taps students' potential, strengthening the effectiveness of PBL teaching design while achieving the goal of deep learning, and thereby cultivating more excellent talents.
Acknowledgements. This paper was supported by the Shenzhen Science and Technology Plan Basic Research Project "Research on distributed parallel algorithm of group intelligent deep learning in big data environment" (project number: JCYJ20190808100203577).
Oscillation of Half-linear Neutral Delay Differential Equations Ping Cui(&) School of Mathematics and Statistics, Qujing Normal University, Qujing, Yunnan, China [email protected]
Abstract. In this article, by using the generalized Riccati transformation and the integral averaging technique, a class of half-linear neutral delay differential equations is studied. New oscillation criteria are obtained, which generalize and improve results in the literature.
Keywords: Half-linear neutral delay differential equation · Oscillation criterion · Distributed delay
1 Introduction
In this article, we consider the oscillation of the half-linear neutral delay differential equation

$\big(u(i)\,|z'(i)|^{s-1} z'(i)\big)' + \int_{r}^{s} p(i,\zeta)\,|m(x(i,\zeta))|^{\kappa-1}\, m(x(i,\zeta))\,\mathrm{d}\mu(\zeta) = 0, \quad i \ge i_0, \qquad (1)$

where $s, \kappa$ are constants and $z(i) = q(i)\,m(\delta(i)) + m(i)$. We assume that the following conditions hold:

(H1) $q(i) \in C(D,\mathbb{R})$, $D = [i_0,\infty)$, $0 \le q(i) \le 1$, $p(i,\zeta) \in C(D\times[r,s],\mathbb{R}^+)$;
(H2) $u(i) \in C^1(D,\mathbb{R})$, $u(i) > 0$, $u'(i) \ge 0$, $J(i) = \int_{i_0}^{i} u^{-1/s}(\sigma)\,\mathrm{d}\sigma$, $\lim_{i\to\infty} J(i) = \infty$;
(H3) $x(i,\zeta) \in C(D\times[r,s],\mathbb{R}^+)$ is nondecreasing with respect to $\zeta$ and $i$, $x(i,\zeta) \le i$, $\lim_{i\to\infty}\min_{\zeta\in[r,s]}\{x(i,\zeta)\} = \infty$, and $x(i,r) = x(i) \in C^1(D,\mathbb{R}^+)$;
(H4) $\delta(i) \in C^1(D,\mathbb{R})$, $\delta(i) \le i$, $\lim_{i\to\infty}\delta(i) = \infty$; $\mu(\zeta) \in ([r,s],\mathbb{R})$ is nondecreasing.

The integral in Eq. (1) is a Stieltjes integral. Neutral differential equations appear in mathematical models of lossless high-speed computer transmission networks, and have been widely used in studying the vibration of elastic rods, inertia in neuromechanical systems, and automatic control theory; they have therefore received widespread attention. At present, many papers study the oscillation of half-linear neutral delay differential equations [1–10]. Kamenev- and Philos-type oscillation criteria for even-order half-linear neutral delay differential equations are given in Ref. [5].
© The Editor(s) (if applicable) and The Author(s), under exclusive license to Springer Nature Singapore Pte Ltd. 2021 M. Atiquzzaman et al. (Eds.): BDCPS 2020, AISC 1303, pp. 1345–1351, 2021. https://doi.org/10.1007/978-981-33-4572-0_194
1346
P. Cui
Correspondingly, in this article we give new oscillation criteria for the half-linear neutral delay differential equation (1).
2 The Basic Lemma
Lemma 1. If $m(i)$ is an eventually positive solution of Eq. (1), then the following conclusions hold:
(1) $z'(i) > 0$; (2) $z''(i) \le 0$; (3) $(1-q(i))\,z(i) \le m(i)$; (4) $(u(i)(z'(i))^{s})' + \Psi(i)\,z^{\kappa}(x(i)) \le 0$, where $\Psi(i) = \int_{r}^{s} p(i,\zeta)\,(1-q(x(i,\zeta)))^{\kappa}\,\mathrm{d}\mu(\zeta)$.

Proof. The proof of (1)–(3) is given in [10, Lemma 1]. For (4), Eq. (1) can be rewritten as

$(u(i)(z'(i))^{s})' + \int_{r}^{s} p(i,\zeta)\,m^{\kappa}(x(i,\zeta))\,\mathrm{d}\mu(\zeta) = 0. \qquad (2)$

From conclusion (3), we get

$m^{\kappa}(i) \ge (1-q(i))^{\kappa}\,z^{\kappa}(i). \qquad (3)$

Substituting this into (2), we obtain

$(u(i)(z'(i))^{s})' + \int_{r}^{s} p(i,\zeta)\,(1-q(x(i,\zeta)))^{\kappa}\,z^{\kappa}(x(i,\zeta))\,\mathrm{d}\mu(\zeta) \le 0, \qquad (4)$

and, since $z$ is increasing and $x(i,\zeta)$ is nondecreasing in $\zeta$,

$(u(i)(z'(i))^{s})' + z^{\kappa}(x(i,r))\int_{r}^{s} p(i,\zeta)\,(1-q(x(i,\zeta)))^{\kappa}\,\mathrm{d}\mu(\zeta) \le 0. \qquad (5)$

Let $x(i) = x(i,r)$ and $\Psi(i) = \int_{r}^{s} p(i,\zeta)\,(1-q(x(i,\zeta)))^{\kappa}\,\mathrm{d}\mu(\zeta)$; then

$(u(i)(z'(i))^{s})' + \Psi(i)\,z^{\kappa}(x(i)) \le 0. \qquad (6)$
Lemma 2. If $m(i)$ is an eventually positive solution of Eq. (1), and

$\lim_{i\to\infty}\int_{i_0}^{i}\Big[\frac{1}{u(\nu)}\int_{\nu}^{\infty}\Psi(\vartheta)\,\mathrm{d}\vartheta\Big]^{1/s}\mathrm{d}\nu = \infty, \qquad (7)$

then $z(i)\to\infty$.

Lemma 3. Let $h > 0$, $A > 0$, $B \ge 0$; then

$Bu - Au^{\frac{h+1}{h}} \le \frac{h^{h}}{(h+1)^{h+1}}\,\frac{B^{h+1}}{A^{h}}.$

Lemma 4. Let $m(i)$ be a positive solution of Eq. (1), and let $\varphi(i)\in C^1(D,\mathbb{R}^+)$ with $\varphi'(i)\ge 0$, where $D = (i_0,\infty)$ and $\mathbb{R}^+ = (0,\infty)$. Set

$X(i) = \varphi(i)\,\frac{u(i)\,(z'(i))^{s}}{z^{\kappa}(x(i))}.$

Then, when $\kappa \ge s$, we have

$X'(i) \le \frac{\varphi'(i)}{\varphi(i)}\,X(i) - \varphi(i)\Psi(i) - L(i)\,X^{1+\frac{1}{s}}(i), \qquad (8)$

where $L(i) = s\,x'(i)\,(\varphi(i)u(i))^{-1/s}$.
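Lemma 3 is a standard Young-type inequality; for completeness, here is a short verification (our own, not part of the original text). The function $F(u) = Bu - Au^{(h+1)/h}$, $u \ge 0$, is concave and maximized where $F'(u) = 0$:

```latex
F'(u) = B - \frac{h+1}{h}\,A\,u^{1/h} = 0
\;\Longrightarrow\;
u_{*} = \Big(\frac{hB}{(h+1)A}\Big)^{h},
\qquad
F(u_{*}) = u_{*}\big(B - A\,u_{*}^{1/h}\big)
         = \Big(\frac{hB}{(h+1)A}\Big)^{h}\frac{B}{h+1}
         = \frac{h^{h}}{(h+1)^{h+1}}\,\frac{B^{h+1}}{A^{h}},
```

which is exactly the stated bound.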
3 Main Results
Let $M_0 = \{(i,t): i_0 \le t < i\}$ and $M = \{(i,t): i_0 \le t \le i\}$, and let $N \in C(M,\mathbb{R})$ satisfy the following conditions: (1) $N(i,i) = 0$ for $i \ge i_0$, and $N(i,t) > 0$ for $(i,t)\in M_0$; (2) $\frac{\partial N(i,t)}{\partial t} \le 0$, and $\frac{\partial N}{\partial t}$ is continuous on $M_0$. We say that such a function $N$ has property P, and denote this by $N(i,t)\in\mathcal{X}$.

Theorem 1. Let $\kappa \ge s$, let $N(i,t)\in\mathcal{X}$, and let $g \in C(M_0,\mathbb{R})$ be such that

$\frac{\partial N(i,t)}{\partial t} + \frac{\varphi'(t)}{\varphi(t)}\,N(i,t) = g(i,t)\,N^{\frac{s}{s+1}}(i,t), \qquad (9)$

and suppose that, with $W_s(i) = (\varphi(i)u(i))^{-1/s}\,x'(i)$ and $w_+(t) = \max\{w(t),0\}$:

(T1) $0 < \inf_{t \ge i_0}\big[\liminf_{i\to\infty} N(i,t)/N(i,i_0)\big]$;
(T2) $\limsup_{i\to\infty}\frac{1}{N(i,i_0)}\int_{i_0}^{i} \frac{|g(i,t)|^{s+1}}{W_s^{s}(t)}\,\mathrm{d}t < \infty$;
(T3) there exists $w(i)\in C(D,\mathbb{R})$ such that, for every $T \ge i_0$, $\limsup_{i\to\infty}\frac{1}{N(i,T)}\int_{T}^{i}\Big[N(i,t)\varphi(t)\Psi(t) - \frac{|g(i,t)|^{s+1}}{(s+1)^{s+1} W_s^{s}(t)}\Big]\mathrm{d}t \ge w(T)$;
(T4) $\limsup_{i\to\infty}\int_{i_0}^{i} w_+^{1+\frac{1}{s}}(t)\,W_s(t)\,\mathrm{d}t = \infty$.

Then Eq. (1) is oscillatory.

Proof. Suppose $m(t)$ is a nonoscillatory solution of Eq. (1); without loss of generality, assume $m(t) > 0$ eventually. From Lemma 4 we have

$X'(i) \le \frac{\varphi'(i)}{\varphi(i)}\,X(i) - \varphi(i)\Psi(i) - L(i)\,X^{1+\frac{1}{s}}(i), \qquad (10)$

so

$\varphi(i)\Psi(i) \le -X'(i) + \frac{\varphi'(i)}{\varphi(i)}\,X(i) - L(i)\,X^{1+\frac{1}{s}}(i). \qquad (11)$

Multiplying inequality (11) by $N(i,t)$ and integrating from $i_0$ to $i$ (using (9), integration by parts and $N(i,i) = 0$), we get
$\int_{i_0}^{i} N(i,t)\,\varphi(t)\Psi(t)\,\mathrm{d}t \le \int_{i_0}^{i} N(i,t)\Big[-X'(t) + \frac{\varphi'(t)}{\varphi(t)}X(t) - L(t)X^{1+\frac{1}{s}}(t)\Big]\mathrm{d}t$
$\le N(i,i_0)X(i_0) + \int_{i_0}^{i} |g(i,t)|\,N^{\frac{s}{s+1}}(i,t)\,X(t)\,\mathrm{d}t - \int_{i_0}^{i} s\,W_s(t)\,N(i,t)\,X^{1+\frac{1}{s}}(t)\,\mathrm{d}t. \qquad (12)$

Thus

$\frac{1}{N(i,i_0)}\int_{i_0}^{i} \varphi(t)N(i,t)\Psi(t)\,\mathrm{d}t \le X(i_0) + \frac{1}{N(i,i_0)}\Big[\int_{i_0}^{i} |g(i,t)|\,N^{\frac{s}{s+1}}(i,t)\,X(t)\,\mathrm{d}t - \int_{i_0}^{i} s\,W_s(t)\,N(i,t)\,X^{1+\frac{1}{s}}(t)\,\mathrm{d}t\Big]. \qquad (13)$

From Lemma 3, applied pointwise with $h = s$, $B = |g(i,t)|\,N^{\frac{s}{s+1}}(i,t)$ and $A = s\,W_s(t)\,N(i,t)$, we get

$\int_{i_0}^{i} \varphi(t)N(i,t)\Psi(t)\,\mathrm{d}t \le N(i,i_0)X(i_0) + \int_{i_0}^{i} \frac{|g(i,t)|^{s+1}}{(s+1)^{s+1}\,W_s^{s}(t)}\,\mathrm{d}t. \qquad (14)$

So

$\frac{1}{N(i,i_0)}\int_{i_0}^{i}\Big[N(i,t)\varphi(t)\Psi(t) - \frac{|g(i,t)|^{s+1}}{(s+1)^{s+1}\,W_s^{s}(t)}\Big]\mathrm{d}t \le X(i_0). \qquad (15)$

Set

$A(i) = \frac{1}{N(i,i_0)}\int_{i_0}^{i} |g(i,t)|\,X(t)\,N^{\frac{s}{s+1}}(i,t)\,\mathrm{d}t, \qquad (16)$

$B(i) = \frac{1}{N(i,i_0)}\int_{i_0}^{i} s\,W_s(t)\,N(i,t)\,X^{1+\frac{1}{s}}(t)\,\mathrm{d}t. \qquad (17)$

Then (13) gives

$B(i) - A(i) \le X(i_0) - \frac{1}{N(i,i_0)}\int_{i_0}^{i} N(i,t)\varphi(t)\Psi(t)\,\mathrm{d}t. \qquad (18)$

Using condition (T3) (and (15) with $i_0$ replaced by an arbitrary $T \ge i_0$) we have: (1) $X(i) \ge w(i)$; (2) $w(i_0) \le \limsup_{i\to\infty}\frac{1}{N(i,i_0)}\int_{i_0}^{i} N(i,t)\varphi(t)\Psi(t)\,\mathrm{d}t$. Thus we have

$\liminf_{i\to\infty}\,[B(i)-A(i)] \le X(i_0) - \limsup_{i\to\infty}\frac{1}{N(i,i_0)}\int_{i_0}^{i} N(i,t)\varphi(t)\Psi(t)\,\mathrm{d}t \le X(i_0) - w(i_0) < \infty. \qquad (19)$

If $\int_{i_0}^{\infty} X^{1+\frac{1}{s}}(t)\,W_s(t)\,\mathrm{d}t < \infty$, then $\int_{i_0}^{\infty} w_+^{1+\frac{1}{s}}(t)\,W_s(t)\,\mathrm{d}t < \infty$, which contradicts condition (T4).

If $\int_{i_0}^{\infty} X^{1+\frac{1}{s}}(t)\,W_s(t)\,\mathrm{d}t = \infty$, then by (T1) there exists $\eta > 0$ such that $\inf_{t\ge i_0}\big[\liminf_{i\to\infty} N(i,t)/N(i,i_0)\big] > \eta > 0$; thus for every $l > 0$ there exists $i_1 \ge i_0$ with $\int_{i_0}^{t} X^{1+\frac{1}{s}}(u)\,W_s(u)\,\mathrm{d}u \ge \frac{l}{s\eta}$ for $t \ge i_1$. So

$B(i) = \frac{s}{N(i,i_0)}\int_{i_0}^{i} N(i,t)\,\mathrm{d}\Big[\int_{i_0}^{t} X^{1+\frac{1}{s}}(u)\,W_s(u)\,\mathrm{d}u\Big] = -\frac{s}{N(i,i_0)}\int_{i_0}^{i} \frac{\partial N(i,t)}{\partial t}\Big[\int_{i_0}^{t} X^{1+\frac{1}{s}}(u)\,W_s(u)\,\mathrm{d}u\Big]\mathrm{d}t$
$\ge -\frac{s}{N(i,i_0)}\cdot\frac{l}{s\eta}\int_{i_1}^{i} \frac{\partial N(i,t)}{\partial t}\,\mathrm{d}t = \frac{l\,N(i,i_1)}{\eta\,N(i,i_0)} > l \qquad (20)$

for $i$ sufficiently large, using $N(i,i) = 0$. Since $l$ is arbitrary, $\lim_{i\to\infty} B(i) = \infty$, and as $\liminf_{i\to\infty}[B(i)-A(i)] < \infty$, also $A(i)\to\infty$ along a suitable sequence.

Consider a sequence $\{i_n\}_{n=1}^{\infty}\subset(i_0,\infty)$ with $i_n\to\infty$ and

$\lim_{n\to\infty}\,[B(i_n) - A(i_n)] = \liminf_{i\to\infty}\,[B(i)-A(i)] < \infty. \qquad (21)$

There exists $M > 0$ such that, for $n$ sufficiently large, $B(i_n) - A(i_n) \le M$, i.e. $A(i_n) \ge B(i_n) - M$, so $A(i_n)\to\infty$ and $\frac{A(i_n)}{B(i_n)} > \frac{1}{2}$, hence

$\frac{A^{s+1}(i_n)}{B^{s}(i_n)} > \frac{A(i_n)}{2^{s}} \to \infty.$

From Hölder's inequality

$\int_{c}^{d} |f_1(\lambda)f_2(\lambda)|\,\mathrm{d}\lambda \le \Big(\int_{c}^{d} |f_1(\lambda)|^{r}\,\mathrm{d}\lambda\Big)^{\frac{1}{r}}\Big(\int_{c}^{d} |f_2(\lambda)|^{r'}\,\mathrm{d}\lambda\Big)^{\frac{1}{r'}}, \qquad \frac{1}{r}+\frac{1}{r'} = 1, \qquad (22)$
we have

$A(i_n) = \frac{1}{N(i_n,i_0)}\int_{i_0}^{i_n} |g(i_n,t)|\,N^{\frac{s}{s+1}}(i_n,t)\,X(t)\,\mathrm{d}t = \frac{1}{N(i_n,i_0)}\int_{i_0}^{i_n} \frac{|g(i_n,t)|}{W_s^{\frac{s}{s+1}}(t)}\cdot\Big(W_s^{\frac{s}{s+1}}(t)\,N^{\frac{s}{s+1}}(i_n,t)\,X(t)\Big)\,\mathrm{d}t$
$\le \Big(\frac{1}{N(i_n,i_0)}\int_{i_0}^{i_n} \frac{|g(i_n,t)|^{s+1}}{W_s^{s}(t)}\,\mathrm{d}t\Big)^{\frac{1}{s+1}}\Big(\frac{1}{N(i_n,i_0)}\int_{i_0}^{i_n} W_s(t)\,N(i_n,t)\,X^{\frac{s+1}{s}}(t)\,\mathrm{d}t\Big)^{\frac{s}{s+1}}. \qquad (23)$

So

$A^{\frac{s+1}{s}}(i_n) \le \Big(\frac{1}{N(i_n,i_0)}\int_{i_0}^{i_n} \frac{|g(i_n,t)|^{s+1}}{W_s^{s}(t)}\,\mathrm{d}t\Big)^{\frac{1}{s}}\cdot\frac{1}{N(i_n,i_0)}\int_{i_0}^{i_n} W_s(t)\,N(i_n,t)\,X^{\frac{s+1}{s}}(t)\,\mathrm{d}t. \qquad (24)$

Thus, recalling (17),

$\frac{A^{\frac{s+1}{s}}(i_n)}{B(i_n)} \le \frac{1}{s}\Big(\frac{1}{N(i_n,i_0)}\int_{i_0}^{i_n} \frac{|g(i_n,t)|^{s+1}}{W_s^{s}(t)}\,\mathrm{d}t\Big)^{\frac{1}{s}}, \qquad (25)$

which implies that

$\frac{A^{s+1}(i_n)}{B^{s}(i_n)} \le \frac{1}{s^{s}}\cdot\frac{1}{N(i_n,i_0)}\int_{i_0}^{i_n} \frac{|g(i_n,t)|^{s+1}}{W_s^{s}(t)}\,\mathrm{d}t \to \infty. \qquad (26)$

So

$\lim_{n\to\infty}\frac{1}{N(i_n,i_0)}\int_{i_0}^{i_n} \frac{|g(i_n,t)|^{s+1}}{W_s^{s}(t)}\,\mathrm{d}t = \infty. \qquad (27)$

Letting $i\to\infty$ and taking the upper limit, we get

$\limsup_{i\to\infty}\frac{1}{N(i,i_0)}\int_{i_0}^{i} \frac{|g(i,t)|^{s+1}}{W_s^{s}(t)}\,\mathrm{d}t = \infty, \qquad (28)$
which contradicts condition (T2). Thus $m(i)$ is not an eventually positive solution of Eq. (1). Similarly, $m(i)$ cannot be an eventually negative solution. Therefore, Eq. (1) is oscillatory.
Note: We discussed the case $\kappa \ge s$ and obtained the main conclusion, Theorem 1. By the method of Sect. 3, the case $s \ge \kappa$ can be discussed analogously and a corresponding conclusion drawn.
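As a purely illustrative numerical sanity check (our own construction, not part of the paper), consider the special case of Eq. (1) with $u\equiv 1$, $s=\kappa=1$, $q\equiv 0$ and a unit point-mass measure $\mu$, which reduces it to $m''(t) + p\,m(t-\tau) = 0$. A fixed-step scheme with a history buffer approximates the delayed term; counting sign changes of the computed trajectory illustrates the oscillatory behavior (all parameter names below are our own):

```python
# Special case of Eq. (1): m''(t) = -p * m(t - tau), constant history m = 1.
# Semi-implicit Euler with a ring-like history list for the delayed value.

def simulate(p=1.0, tau=1.0, dt=0.001, t_end=40.0):
    n_delay = int(tau / dt)               # steps spanning the delay tau
    hist = [1.0] * (n_delay + 1)          # initial history m = 1 on [-tau, 0]
    m, v = 1.0, 0.0                       # m(0) and m'(0)
    traj = []
    for _ in range(int(t_end / dt)):
        m_delayed = hist[-(n_delay + 1)]  # value of m at time t - tau
        v += dt * (-p * m_delayed)        # m'' = -p * m(t - tau)
        m += dt * v
        hist.append(m)
        traj.append(m)
    return traj

def sign_changes(xs):
    # number of strict sign changes along the trajectory
    return sum(1 for a, b in zip(xs, xs[1:]) if a * b < 0)

print(sign_changes(simulate()))  # several sign changes: the solution oscillates
```

The characteristic equation of this special case has no real roots, so all solutions are oscillatory; the simulation simply makes that visible.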
Acknowledgements. This research has been supported by the Joint Special Fund for Fundamental Research of Local Undergraduate Universities (Partial) in Yunnan Province (Grant No. 2019FH001(-083)), the Science Research Fund of the Education Department of Yunnan Province (Grant No. 2020J0630) and the Yue-Qi Scholar Fund of the China University of Mining and Technology (Grant No. 102504180004).
References
1. Hsu, H.B., Yeh, C.C.: Oscillation theorems for second-order half-linear differential equations. Appl. Math. Lett. 9(6), 71–77 (1996)
2. Wang, Q.R.: Oscillation and asymptotics for second-order half-linear differential equations. Appl. Math. Comput. 122(2), 253–266 (2001)
3. Xu, R., Meng, F.: Some new oscillation criteria for second order quasi-linear neutral delay differential equations. Appl. Math. Comput. 182(1), 797–803 (2006)
4. Yang, X.: Oscillation criterion for a class of quasilinear differential equations. Appl. Math. Comput. 153(1), 225–229 (2004)
5. Zhang, Q.X., Gao, L., Yu, Y.H.: Oscillation of even order half-linear neutral differential equations with distributed delay. Acta Math. Appl. Sin. 34(5), 895–904 (2011)
6. Zhang, Z.Y., Wang, X.X., Yu, Y.H.: Oscillation of third-order half-linear neutral differential equations with distributed delay. Chin. J. Appl. Math. 38(3), 450–459 (2015)
7. Gao, L., Zhang, Q.X., Yu, Y.H.: Oscillation of even-order half-linear neutral differential equations with distributed delay and damping term. J. Syst. Sci. Math. Sci. 33(5), 568–578 (2013)
8. Lin, D.L.: Oscillation criteria for second order half-linear neutral delay differential equations. J. Anhui Univ. Nat. Sci. Ed. 39(1), 15–20 (2015)
9. Wang, J.Y., Xiang, S.X., Yu, S.H.: Oscillation criteria for fractional neutral differential equations. J. Southwest Univ. Nat. Sci. Ed. 36(11), 106–111 (2014)
10. Cui, P.: On oscillation of new generalized Emden–Fowler equation. J. Southwest Chin. Normal Univ. Nat. Sci. Ed. 41(1), 1–10 (2016)
Reconstruction Research of Ancient Chinese Machinery Based on Virtual Reality Technology
Hongjun Zhang¹ and Kehui Deng²
¹ Textile Institute, Donghua University, Shanghai 201620, China
² College of Humanities, Donghua University, Shanghai 201620, China
[email protected]
Abstract. Ancient Chinese machinery is an important part of ancient Chinese civilization. Through reconstruction research, we can better understand its past glory and enhance national self-confidence and pride. However, traditional restoration research lacks funds and research talent and has an unreasonable personnel structure. Moreover, restoration results presented as physical objects are not easy to preserve and display, so they cannot deliver their due benefits. Analysis and practical verification show that virtual reconstruction research, with the help of virtual reality and related technologies, is an effective way to solve these problems and make reconstruction research sustainable.
Ancient machinery Virtual reality
1 Introduction China is one of the first countries in the world to use and develop machinery, and it has also maintained the world’s leading position for a long time, especially in agriculture, textile, astronomy and other fields, making a series of remarkable achievements, which is an important part of ancient Chinese civilization. However, due to its long history, few ancient machines can be preserved, and now only be seen in ancient books. On the one hand, with the improvement of scientific and cultural literacy, the Chinese government and people are paying more and more attention to ancient Chinese civilization and excellent Chinese tradition, on the other hand, in order to better understand the past glory, enhance national self-confidence and pride, learn from the past and the present, further promote the progress of science and technology in China, and create a more brilliant future, so it is necessary to study ancient Chinese machinery and carry out rehabilitation research. Reconstruction research is a research on the real purpose of restoring ancient cultural relics and an important method of the research on the history of science and technology. It is different from the design of imitation, reproduction and innovation significance [1, 2], which is divided into theoretical research and practical research. At present, the theoretical research has developed well, but the actual reconstruction research is in a dilemma. This paper will take the reconstruction of ancient machinery as the main line, on the basis of expounding the general process of traditional reconstruction research and reviewing the current situation of traditional reconstruction research, try to put forward the method of virtual reconstruction of ancient machinery by using information technology represented by virtual reality, and © The Editor(s) (if applicable) and The Author(s), under exclusive license to Springer Nature Singapore Pte Ltd. 2021 M. Atiquzzaman et al. 
(Eds.): BDCPS 2020, AISC 1303, pp. 1352–1357, 2021. https://doi.org/10.1007/978-981-33-4572-0_195
Reconstruction Research of Ancient Chinese Machinery
1353
help the reconstruction research to break through the current difficulties, so as to achieve sustainable development in a more suitable track for the needs of the times.
2 The Present Situation of Research on the Reconstruction of Ancient Machines
With the development of society and growing international and public attention to ancient civilization, more and more ancient mechanical reconstruction projects have sprung up, yet many influential ancient machines and their achievements have not been fully studied, and many historical questions of science and technology remain pending and need to be answered. It is gratifying that the state and the scientific research community have recognized this situation and have provided some policy support and material investment, but several constraints still hinder the development of reconstruction research. First, funding for this research is insufficient. Research on the reconstruction of ancient machinery usually has a long cycle, and in the process many problems arise that theoretical research does not encounter but that need to be solved urgently. It is therefore very difficult to achieve significant results in a short time, which makes funding institutions hesitant to invest sufficient support. Second, there is a lack of personnel, and the personnel structure is unreasonable. Reconstruction research requires researchers who love traditional Chinese culture, have a solid cultural background and broad experience, and possess a strong sense of responsibility and mission. There have never been many people with all these qualities and, given the current situation, many researchers originally engaged in reconstruction research have changed careers to more lucrative industries in order to make a living.
In addition, because reconstruction research involves many disciplines while current researchers are concentrated in the field of the history of science and technology, the personnel structure is unbalanced; in particular, there is a lack of skilled model makers, a gap currently filled by temporarily hired technicians. Because most of these technicians are temporary and no lasting mechanism can be formed, reconstruction research cannot be carried out sustainably. Finally, the results of reconstruction research cannot be housed, displayed and effectively maintained. This is the biggest problem encountered at present, and it directly or indirectly makes large-scale reconstruction research difficult; it is one of the important causes of the current situation. The restored objects built through painstaking research should be displayed and kept in a proper place, so that more people can appreciate and experience them, and they should be reasonably maintained. But at present there are no special museums or exhibition halls to house and display these restored machines. At the same time, because people of the new era, especially young people who grew up in the information age, do not understand the physical restorations, and because the display methods are backward, people cannot experience them,
1354
H. Zhang and K. Deng
and the comprehensive results of reconstruction research cannot achieve their publicity effect of passing on the scientific and cultural knowledge of the ancient Chinese people, and naturally cannot generate the funds needed for sustainable development. In short, the state attaches great importance to exploring and publicizing ancient Chinese science and technology culture to strengthen national self-confidence. In this environment, reconstruction research has obtained a rare opportunity: scientific research funds and talent policies are being addressed, and some progress has been made on exhibition halls, so these peripheral problems are gradually being solved. More importantly, reconstruction researchers should realize that only when the results are viewed, followed, participated in and accepted by the public can reconstruction research really play its role, obtain corresponding economic benefits, and achieve its own sustainable development. Therefore, while inheriting the essence of the original research methods, reconstruction research should give up its highbrow stance and make more use of high technology. The research results need not obstinately be presented as physical objects; they can exist in many forms, especially digital and virtual ones, so that they can be widely spread and displayed through the Internet and cloud platforms, which better suits the appreciation habits of contemporary people. The author therefore believes that virtual reconstruction research is a more reasonable and effective way to break the dilemma of current reconstruction research.
3 Experiment of Reconstruction Research Based on Virtual Reality Technology
3.1 Introduction and Characteristics of Virtual Reality Technology
Virtual reality (VR) is a synthesis of many technologies, including real-time three-dimensional computer graphics, wide-angle stereoscopic display, touch and force feedback, stereo sound, network transmission, and voice input and output. It is a computer simulation system that can create and let users experience a virtual world: the computer generates a simulated environment, an interactive three-dimensional dynamic scene with entity behavior simulation based on multi-source information fusion, in which users can immerse themselves [3]. Using virtual reality technology to faithfully represent ancient machinery and cultural relics has become a research method commonly used around the world. Virtual reality technology has four main characteristics, namely immersion, imagination, multi-sensory perception and interactivity [4].
3.2 Definition of Virtual Reconstruction Research
Virtual reconstruction research uses advanced information technology such as virtual reality to achieve a virtual reappearance of the true form of an object that has no physical remains. Specifically, for the object to be restored, one first collects and sorts out historical materials and documents, then judges and discriminates among them to obtain scientific research data and conclusions on the object's shape and function. On this basis, three-dimensional modeling technology is used to build models of the components; the reconstruction of the object is repeatedly carried out in a virtual scene, and the theoretical data and conclusions are tested, modified and perfected. After a virtual 3D model consistent with the historical facts has been constructed, it is imported into virtual reality software, real parameters and attributes are assigned to each part of the object, and its functions are verified and adjusted. Finally, virtual wearable devices can be used to operate and experience the object, achieving an immersive effect. Figure 1 shows the general process of virtual reconstruction research on ancient machinery.
Fig. 1. The general process of virtual reconstruction research of ancient machinery
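The staged workflow shown in Fig. 1 can be sketched as an ordered, checkable sequence of steps. The stage names below paraphrase the text, and all function and variable names are our own illustrative assumptions:

```python
# Minimal sketch of the staged virtual-reconstruction workflow as an
# ordered list of stages; run_pipeline returns a per-artifact log.

STAGES = [
    "collect and sort out historical materials",
    "derive shape and function data",
    "build 3D part models",
    "assemble and test in a virtual scene",
    "assign real parameters and verify functions",
    "publish for interactive experience",
]

def run_pipeline(artifact):
    """Return an ordered log of completed stages for one artifact."""
    return ["{}: {}".format(artifact, stage) for stage in STAGES]

for entry in run_pipeline("bottomless cotton gin"):
    print(entry)
```

The point of the ordering is that each stage consumes the outputs of the previous one, exactly as in the traditional physical workflow.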
The general process of virtual reconstruction research differs from the traditional process only in that the physical construction of the restored object is replaced by construction in a virtual reality scene. Through this change, the problems encountered by traditional reconstruction research can be completely avoided or solved. First, physical construction requires skilled carpenters; with the development of society, fewer and fewer young people are engaged in traditional woodworking, while more and more well-trained people master modeling and virtual reality technology. Building an accurate virtual, digital mechanical prototype is faster, and modification and adjustment are more flexible. The mechanical application scenes constructed with virtual reality suit modern people's appreciation habits: people can enjoy and participate in them anytime and anywhere via the Internet and cloud platforms and obtain an immersive experience, which better reflects the effect of the research results. In addition, because the reconstruction results exist in digital form, their preservation and maintenance are much more convenient than in traditional reconstruction research.
3.3 The Practice of Virtual Reconstruction Research of Ancient Machinery: Ancient Hand Ginning Tools as an Example
Cotton was an important economic crop in ancient China. From seed cotton to cotton cloth there are at least five main technological processes, i.e. ginning (seed removal), cotton fluffing, spinning, dyeing and weaving [5]. Ginning is the first process of cotton production and also the main link that distinguishes cotton processing from traditional hemp and silk spinning; it directly affects and restricts the production efficiency and quality of the whole cotton textile process, so it is very important.
Fig. 2. Bottomless cotton gin
Fig. 3. Quadruped cotton gin
Fig. 4. Separate gin
Therefore, it is of great significance to study the development history of ginning tools and to reconstruct the important ones, which belong to the category of historical objects without physical remains. By sorting out the historical documents on ginning tools, it is found that they basically experienced four generations of development. The first generation is ginning with an iron stick. The second generation is the bottomless cotton gin, as shown in Fig. 2 [6]. The third generation consists of the four-footed and three-footed cotton gins, as shown in Fig. 3 [7]. The fourth generation is the separate gin, as shown in Fig. 4 [8]. Following the general process of virtual reconstruction research, the theoretical research results and the dimensions of the related components are obtained; then, in three-dimensional modeling software such as 3ds Max [9], the 3D models of the ginning tools are built, as shown in Figs. 5, 6 and 7. This process is the same as the reconstruction operation with actual materials. After that, the built models and baked animations are exported from the 3D modeling software in .fbx format and imported into virtual reality software such as Unity3D [10]. The driving relationships and constraint mechanisms between the parts are re-established as required; with different cameras, we can observe the ginning tools from different perspectives and, with the aid of augmented reality equipment, experience the wisdom of the ancient working people more truly [11].
Fig. 5. Bottomless Cotton Jin
Fig. 6. Tripod Cotton Jin
Fig. 7. Separate Jin
4 Conclusion
Ancient Chinese machinery embodies the infinite wisdom of the ancient Chinese people and is a bright pearl in the treasure house of Chinese culture. In order to increase national self-confidence and carry forward national history and culture, it is more necessary than ever to explore these pearls of Chinese traditional culture and civilization. The
traditional reconstruction research has made a great contribution to this project. However, because its research results are physical objects, they are not easy to preserve, disseminate, or experience; for various reasons they cannot adapt to the development of the times and people's appreciation habits, and cannot deliver their original effects and benefits, which leaves traditional reconstruction research in an awkward situation. Through analysis and practice, it is found that virtual reconstruction research based on high technology, represented by virtual reality, can avoid and solve the problems encountered by traditional reconstruction research, and should be one of the ways to make reconstruction research sustainable. We hope this attracts more researchers of the history of science and technology to this method.
Acknowledgments. This research was supported by the Textile Culture Research Base for Fundamental Research Funds for the Central Universities in China (20d111015).
References
1. Jing-yan, L., Hong-gen, Y.: Several theoretical problems about reconstruction research of ancient machinery. J. Tongji Univ. 29(6), 677–680 (2010). (in Chinese)
2. Jingyan, L.: Research on the Reconstruction of Ancient Chinese Machinery. Shanghai Science and Technology Press, Shanghai (2019). (in Chinese)
3. Jin, H., Dongqi, H., et al.: A survey on human-computer interaction in mixed reality. J. Comput.-Aided Des. Comput. Graph. 6, 869–879 (2016). (in Chinese)
4. Zhong, Z., Yi, Z., et al.: Survey on augmented virtual environment and augmented reality. Sci. Chin. Inform. Sci. 45(2), 157–180 (2015). (in Chinese)
5. Zhang, H., Deng, K.: Textual research on the historical position of cotton textile industry in Shanghai area in Yuan dynasty. Asian Soc. Sci. 16(6), 27–33 (2020)
6. Zhen, W.: Agricultural Book. Zhejiang People's Art Press, Hangzhou (2015). (in Chinese)
7. Guangqi, X.: Complete Book of Agricultural Administration. Shanghai Ancient Books Publishing House, Shanghai (1979). (in Chinese)
8. Yingxing, S.: Heavenly Creations. Shanghai Ancient Books Press, Shanghai (2013). (in Chinese)
9. Kurin, R.: Safeguarding intangible cultural heritage in the 2003 UNESCO Convention: a critical appraisal. Mus. Int. 56(1–2), 66–77 (2010)
10. Zhang, H., Deng, K.: Research on the application of Unity3D in the protection and inheritance of intangible cultural heritage. Asian Soc. Sci. 15(11), 89–92 (2019)
11. Lianen, J., Fengjun, Z., et al.: 3D interaction techniques based on semantics in virtual environments. J. Softw. 17(7), 1535–1543 (2006). (in Chinese)
A Query Optimization Method of Blockchain Electronic Transaction Based on Group Account
Liyong Wan1,2(&)
1 College of Artificial Intelligence, Nanchang Institute of Science and Technology, Nanchang, China
[email protected]
2 School of Software, Jiangxi Normal University, Nanchang, China
Abstract. When an electronic transaction occurs, the blockchain system needs to find out whether the accounts of both parties to the transaction exist. However, in existing blockchain account storage solutions, account information query efficiency is low, which seriously affects query performance and user experience. To this end, we propose a blockchain electronic transaction lookup scheme based on grouped accounts. Drawing on the characteristics of the Merkle Patricia Tree, we construct the Merkle Patricia Tree account storage structure GMPT (Group Merkle Patricia Tree) to reduce the time needed to query the accounts of both parties during electronic transactions. At the same time, in order to cluster accounts efficiently, we improve the RFM model and the K-means clustering algorithm, and present an RFM model for blockchain accounts and an improved account grouping algorithm, AccountGroup. Experiments show that the proposed account-grouping-based blockchain electronic transaction query scheme effectively reduces the time needed to query the accounts of both parties in electronic transactions, optimizes query efficiency, and improves the transaction speed of the blockchain system.

Keywords: Blockchain · Group account · Query optimization · Electronic transaction · Clustering
© The Editor(s) (if applicable) and The Author(s), under exclusive license to Springer Nature Singapore Pte Ltd. 2021. M. Atiquzzaman et al. (Eds.): BDCPS 2020, AISC 1303, pp. 1358–1364, 2021. https://doi.org/10.1007/978-981-33-4572-0_196

1 Introduction
In recent years, researchers at home and abroad have studied query optimization for blockchain electronic transactions and have produced corresponding research results. Current work mainly focuses on the optimization of native queries, semantic queries, intermediate-database queries, blockchain-based structures, etc. Xu et al. [1] proposed an educational certificate blockchain (ECBC) that can support low latency and high throughput and provides a method for accelerating queries. ECBC builds a tree structure (MPT-Chain) which can not only provide efficient queries for transactions but also support historical transaction queries for accounts. MPT-Chain also shortens the update time of an account and can speed
up the efficiency of block verification. Li et al. [2] developed EtherQL, an efficient query layer for Ethereum [3] (a representative open-source blockchain system). EtherQL provides efficient query primitives for analyzing blockchain data, including range queries and top-k queries, and can be flexibly integrated with other applications. Zhang et al. [4] studied the technical principles and application advantages of blockchain, and proposed the MC + NSC model for poverty alleviation and the Hop-Trace application method for poverty alleviation. The model divides the data into three blockchains and two types of data storage, and the method can improve the query efficiency of system applications. Morishima and Matsutani [5] implemented query optimization from the perspective of GPU acceleration; their paper introduces an array-based Patricia tree structure suitable for GPU processing, so that the characteristics of the blockchain can be used effectively to optimize query performance. Cai et al. [6, 7] proposed a dual-chain model with an account blockchain (ABC) and a transaction blockchain (TBC). Although this method achieves query optimization by improving the structure of the blockchain, the structure is too complicated, which brings difficulties to practical application and increases maintenance costs.
In this paper, we construct a Merkle Patricia Tree account storage structure, GMPT (Group Merkle Patricia Tree), from the perspective of grouping the accounts in a blockchain electronic trading system. It is used to reduce the time needed to query the accounts of both parties during electronic transactions. At the same time, we cluster accounts into groups by improving the RFM model and the K-means clustering algorithm, and optimize the storage structure of blockchain accounts. This optimization can improve the overall efficiency of querying the accounts of both parties to a transaction and thereby increase the transaction speed of the blockchain system.
2 Design of Blockchain Group Account Storage Structure
2.1 Account Grouping Improvement Ideas
We use the improved RFM model [8, 9] and the K-means [10, 11] clustering algorithm to classify users: blockchain accounts are characterized by their features and grouped accordingly. RFM is currently a popular approach to account relationship management because the RFM model is an important tool for measuring account value and potential account value, and it is the most effective and common approach adopted by most banks today. Using the indicator variables of the RFM model, the K-means clustering algorithm classifies the blockchain accounts and provides data for the subsequent account storage. Although K-means can classify blockchain accounts well, the algorithm must choose the value of K and the initial central data of each cluster [12]. For the initial selection of cluster centers, combined with the actual situation of the blockchain, we improve the commonly used Max-Min-distance method and design an initial center selection algorithm for the RFM model: Edge-Max-Min-distance.
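As a rough illustration, the following NumPy sketch implements this kind of max-min-distance initialisation with the "edge" modification. The function name, the choice of the origin corner, and the assumption of min-max-normalised RFM vectors are ours, not the paper's:

```python
import numpy as np

def edge_max_min_distance_init(points, k):
    """Pick k initial cluster centers, max-min-distance style.

    points: (n, d) array of RFM feature vectors, assumed min-max normalised.
    Returns the row indices of the chosen centers.
    """
    points = np.asarray(points, dtype=float)
    # "Edge" modification: the first center is the point closest to a
    # corner of the feature space (here the origin corner).
    corner = np.zeros(points.shape[1])
    centers = [int(np.argmin(np.linalg.norm(points - corner, axis=1)))]
    while len(centers) < k:
        # Distance of every point to its nearest already-chosen center...
        to_centers = np.linalg.norm(
            points[:, None, :] - points[centers][None, :, :], axis=2
        )
        nearest = to_centers.min(axis=1)
        # ...and the next center is the point farthest from all of them.
        centers.append(int(np.argmax(nearest)))
    return centers
```

The returned indices can then seed a standard K-means run, which is the role the paper assigns to Edge-Max-Min-distance.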
According to the RFM model, the point closest to a corner of the feature space is selected as the center of the first initial cluster, which can effectively reduce the number of K-means iterations, shorten the clustering time, and improve the accuracy of initial point screening. Next, the point farthest from the first initial cluster center is selected as the center of the second initial cluster. Then, for each remaining point, the distance to its nearest chosen center is computed, and the point with the largest such distance becomes the next initial cluster center, and so on, until k initial cluster centers have been selected.
2.2 The Storage Structure and Algorithms of Grouped Accounts
First, all blockchain accounts are divided into k classes, where the value of k is determined according to the above improvement idea. The root node is given k branches, and each branch represents one class of accounts. Blockchain accounts of the same group are stored in leaf nodes under the same second-level node. This storage structure is named GMPT (Group Merkle Patricia Tree). With this method, the number of queries needed to find the accounts of both parties during a transaction is smaller than the number of queries in Ethereum. After the transaction is completed and the account status is modified, the number of hash calculations required is also smaller than in Ethereum. The storage structure based on grouped accounts is shown in Fig. 1.
Fig. 1. The storage structure based on grouped accounts: root hash Hash1-4 covers the group hashes Hash1-2 and Hash3-4; their children Hash1…Hash4 are the hashes of the information of accounts 1–4 stored in the leaves
As shown in Fig. 1, the leaf nodes under hash 1–2 and hash 3–4 represent two groups of accounts, respectively. The accounts of the two parties to the transaction hold the information of account 1 and account 2 and belong to the same account group, hash 1–2. After the transaction, four hash values need to be modified: hash 1, hash 2, hash 1–2, and hash 1–4. It can be seen that the taller the tree, the more obvious the difference in query efficiency between the two blockchain account storage structures. Therefore, the account storage structure based on group
accounts proposed in this section is more conducive to querying the accounts of both parties during a transaction, and improves query efficiency.
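To make the hash-update argument concrete, the following generic sketch (our illustration, not the paper's GMPT code; SHA-256 and a balanced binary tree are assumptions) shows that changing one leaf of an n-leaf Merkle tree only forces recomputation of the hashes on its root path, i.e. 1 + log2(n) hashes rather than all of them:

```python
import hashlib

def h(data: bytes) -> bytes:
    """SHA-256 digest used for both leaves and internal nodes."""
    return hashlib.sha256(data).digest()

def merkle_root(leaves):
    """Root hash of a balanced binary Merkle tree (leaf count a power of two)."""
    level = [h(leaf) for leaf in leaves]
    while len(level) > 1:
        level = [h(level[i] + level[i + 1]) for i in range(0, len(level), 2)]
    return level[0]

def hashes_recomputed_on_update(n_leaves: int) -> int:
    """Hashes to recompute after one leaf changes: its leaf hash plus one
    internal hash per tree level up to the root."""
    count, width = 1, n_leaves
    while width > 1:
        width //= 2
        count += 1
    return count
```

With four accounts, as in Fig. 1, a change to one account touches 3 hashes, and a transaction touching two accounts in the same group touches 4, matching the hash 1, hash 2, hash 1–2, hash 1–4 count in the text.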
3 Query Account Algorithm Based on Group Account
Next, we design an algorithm for querying the accounts of both parties during a transaction based on account grouping. The account names designed in this study add a suffix to the hash of the original account name to represent the group to which the account belongs; e.g. 0x54325.1 represents an account named 0x54325 in the first group. By querying the hash values of the account names of the two parties to the transaction, the set R to which each account was added can be found, and whether the accounts of both parties exist can be determined through the set R. The algorithm is shown in Algorithm 1.
Algorithm 1. QueryAccount()
Input: The hash values H1, H2 of accounts s1, s2 of the two parties to the transaction;
Output: Array r of information for storing accounts s1, s2;
(1) BEGIN
(2) if (compare(H1, H2)) {
(3)   n = SearchHash(H1);
(4)   m = Pre(H1, H2);
(5)   if (lenstr(m) != 0) {
(6)     f = Select(m, n);
(7)     k[1] = Query(f, Del(H1, m));
(8)     if (len(k[1]) != 0)
(9)       c[1] = k[1];
(10)    k[2] = Query(f, Del(H2, m));
(11)    if (len(k[2]) != 0)
(12)      c[2] = k[2];
(13)  }
(14)  else {
(15)    k[1] = Select(H1, n);
(16)    if (len(k[1]) != 0)
(17)      c[1] = k[1];
(18)    k[2] = Select(H2, n);
(19)    if (len(k[2]) != 0)
(20)      c[2] = k[2];
(21)  }
(22) }
(23) else {
(24)   k[1] = SQuery(H1);
(25)   k[2] = SQuery(H2);
(26)   if (len(k[1]) != 0)
(27)     c[1] = k[1];
(28)   if (len(k[2]) != 0)
(29)     c[2] = k[2];
(30) }
(31) return r;
(32) END
Since the account status information of the blockchain system is constantly updated and new accounts are registered, the accounts need to be regrouped regularly. The key to the grouping-based account query algorithm is whether the accounts of both parties to a transaction are clustered into the same account group: if both accounts are in the same group, the query is more efficient than when they are not.
4 Experiment and Analysis
4.1 Experiment Environment
In this experiment, we use Go as the programming language. The hardware configuration is as follows: an Intel i5-6500 CPU at 3.2 GHz and 8 GB of memory. The operating system is Windows 10 Professional Edition. The experimental data are taken from ECOBALL's public blockchain electronic transaction data set and account data set.
4.2 Experiment Results and Analysis
4.2.1 Performance Testing of the Group Account Algorithm
We conducted an experiment on the account grouping algorithm AccountGroup and compared the number of iterations needed to group the blockchain accounts, to verify whether the clustering effect is improved. The experimental results are shown in Table 1.

Table 1. The comparison of clustering algorithms
Algorithm | Average number of iterations
K-means clustering algorithm | 10
AccountGroup clustering algorithm | 7
The original K-means selects the K value randomly; in this paper, we use the improved RFM account model and the edge-based maximum-minimum distance method. It can be seen that the number of iterations of the AccountGroup clustering algorithm is significantly reduced, so the clustering is optimized.
4.2.2 Performance Testing of the Query Account Algorithm Based on Group Accounts
This section experiments with the grouping-based SelectAccount query algorithm designed in this paper to verify whether query efficiency is improved. First, the account grouping algorithm AccountGroup groups the accounts; the accounts are then stored in groups according to the SaveAccounts algorithm, and queried according to the SelectAccount query algorithm. Below, we compare the traditional account storage structure and query algorithm with the storage scheme and query algorithm proposed in this paper. The results of the experiment are shown in Fig. 2. The ETH and ECO curves are the transaction-time curves for transactions through the traditional account storage structures of Ethereum and the super ledger, respectively, and the SA curve is the transaction-time curve for the account-grouping-based blockchain electronic transaction lookup scheme proposed in this paper. It can be seen from the figure that the transaction times of ETH and ECO for electronic transactions are relatively high, while the transaction time of the scheme proposed in this paper is relatively low.
Fig. 2. The comparison of transaction times
5 Conclusions
In this paper, we propose the Merkle Patricia Tree account storage structure GMPT (Group Merkle Patricia Tree), based on the features of the Merkle Patricia Tree, together with its corresponding account storage algorithm SaveAccounts, which reduces the time needed to query the accounts of both parties in electronic transactions. We also improve the RFM model and the K-means clustering algorithm to design an improved account grouping algorithm, AccountGroup, which enables fast and efficient clustering of accounts. The experiments verify the efficiency of the proposed grouping-based blockchain electronic transaction query scheme, which achieves the goal of improving query efficiency and has good application value.
Acknowledgements. This work was supported by the Scientific Research Project Fund of Jiangxi Province under Grant no. GJJ191100 and the Jiangxi Province Education Science Thirteenth Five-Year Plan Project under Grant no. 20YB243.
References
1. Xu, Y., Zhao, S., Kong, L., et al.: ECBC: a high performance educational certificate blockchain with efficient query. Int. Colloquium Theor. Aspects Comput. 16(1), 290–295 (2017)
2. Yang, L., Kai, Z., et al.: EtherQL: a query layer for blockchain system. Int. Conf. Database Syst. Adv. Appl. 3(2), 559–567 (2017)
3. Wood, G.: Ethereum yellow paper: a secure decentralised generalised transaction ledger (2014). https://github.com/ethereum/wiki/wiki
4. Zhang, L., Qinwei, L.I., Qiu, L., et al.: Research on application development methodology based on blockchain. J. Softw. 28(6), 1474–1487 (2017). (in Chinese)
5. Morishima, S., Matsutani, H.: Accelerating blockchain search of full nodes using GPUs. Euromicro Int. Conf. Parallel Process. 22(04), 244–248 (2018)
6. Cai, W.D., Yu, L., Wan, R., et al.: Research on application development methodology based on blockchain. J. Softw. 28(6), 1474–1487 (2017). (in Chinese)
7. Cai, W.D., Yu, L., Wan, R., et al.: Application of big data oriented blockchain in liquidation system. J. Bigdata 5(12), 25–30 (2018)
8. Liu, J., Du, H.: Study on airline customer value evaluation based on RFM model. Int. Conf. Comput. Des. Appl. 16(4), 287–292 (2010)
9. Kraft, D.: Difficulty control for blockchain-based consensus systems. Peer-to-Peer Networking Appl. 9(2), 397–413 (2016)
10. Peng, J., Guo, F., Liang, K., et al.: Searchain: blockchain-based private keyword search in decentralized storage. Future Gener. Comput. Syst. 15(4), 34–38 (2017)
11. Nakamoto, S.: Bitcoin: a peer-to-peer electronic cash system (2018). https://www.coindesk.com/bitcoin-peer-to-peer-electronic-cash-system
12. Swan, M.: Blockchain thinking: the brain as a decentralized autonomous corporation. IEEE Technol. Soc. Mag. 34(4), 41–52 (2015)
Face Recognition Based on Multi-scale and Double-Layer MB-LBP Feature Fusion
Kui Lu1, Yang Liu2(&), and Jiesheng Wu1
1 College of Computer Science and Engineering, Anhui University of Science and Technology, Huainan 232001, China
2 College of Electrical and Information Engineering, Anhui University of Science and Technology, Huainan 232001, China
[email protected]
Abstract. Because traditional face recognition algorithms are strongly affected by factors such as illumination and noise, a feature extraction method based on a multi-scale, double-layer MB-LBP operator is proposed. First, three MB-LBP features with scales of 1*1, 2*2, and 3*3 are extracted. Based on the three feature pictures at different scales, the MB-LBP features of the second layer are extracted; the respective histogram features are then counted and all features are combined into one higher-dimensional feature. Principal component analysis (PCA) is used to reduce the dimensionality of the fused feature data. Finally, a support vector machine (SVM) is used for classification. Tested on the ORL and AR face databases, this method significantly improves the accuracy of face recognition compared with traditional methods, reaching 99.5% and 99.2%, respectively. In addition, to verify the adaptability of the algorithm to light and noise, the face data sets were tested again after brightness adjustment and noise addition: while the recognition accuracy of the traditional methods dropped significantly, the recognition accuracy of this algorithm remains above 98%.
Texture feature Feature fusion Support
1 Introduction In recent years, with the proposal of deep learning [1, 2], another upsurge of face recognition research has been set off, which greatly improves the accuracy of face recognition results. But, it has the disadvantages of large demand for training samples and long training cycle. Therefore, the idea of using feature extraction and redesign classifier still has a very broad research prospect. In recent years, quite a few efficient and novel algorithms have been proposed. Literature [3] uses FRR-CNN network to recognize faces, which not only improves the recognition accuracy, but also greatly reduces the network parameters. Literature [4] combined CNN network with RNN to carry out recognition research on facial expressions. Literature [5] proposed a method to realize single-sample face recognition by fusing LBP and HOG features. Literature [6] realized face recognition by combining LBP features and improved Fisher © The Editor(s) (if applicable) and The Author(s), under exclusive license to Springer Nature Singapore Pte Ltd. 2021 M. Atiquzzaman et al. (Eds.): BDCPS 2020, AISC 1303, pp. 1365–1371, 2021. https://doi.org/10.1007/978-981-33-4572-0_197
algorithm. Literature [7] proposed a face recognition method combining PCA, LDA and SVM and achieved a very good recognition effect. Literature [8] proposes a recognition method combining LBP, circular-neighborhood LBP and MB-LBP. This paper proposes a feature extraction method based on multi-scale and double-layer MB-LBP feature fusion, which improves the accuracy of face recognition and strengthens its adaptability to factors such as lighting and noise.
2 LBP
2.1 Basic LBP Operator
The basic principle of the LBP operator is as follows: take a pixel in the image as the center, with its pixel value as the comparison object. Each of the 8 pixels nearest the center is compared with it; a neighbor whose pixel value is larger than the center's is set to 1, otherwise 0. Thus an 8-bit binary value is obtained: the eight 0/1 values are combined in order into a binary number, and the decimal number corresponding to that binary number is the LBP eigenvalue of the central pixel, which represents the texture feature at that pixel. Repeating the above process for each pixel gives the LBP eigenvalues of the whole image. The implementation process of the LBP operator is shown in Fig. 1.
Fig. 1. Principle of LBP operator
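The procedure above can be sketched in a few lines of NumPy (our illustration; the clockwise neighbour order starting at the top-left pixel is one common convention, and, following the text, only neighbours strictly larger than the center produce a 1-bit):

```python
import numpy as np

def lbp_value(patch):
    """LBP code of the 3x3 patch's center pixel.

    Neighbours larger than the center contribute a 1-bit; bits are read
    clockwise from the top-left neighbour (one common convention).
    """
    center = patch[1, 1]
    # Clockwise neighbour order starting at the top-left pixel.
    order = [(0, 0), (0, 1), (0, 2), (1, 2), (2, 2), (2, 1), (2, 0), (1, 0)]
    bits = [1 if patch[r, c] > center else 0 for r, c in order]
    return sum(b << (7 - i) for i, b in enumerate(bits))

def lbp_image(img):
    """LBP feature image of a grayscale image (one-pixel border skipped)."""
    img = np.asarray(img)
    out = np.zeros((img.shape[0] - 2, img.shape[1] - 2), dtype=np.uint8)
    for r in range(out.shape[0]):
        for c in range(out.shape[1]):
            out[r, c] = lbp_value(img[r:r + 3, c:c + 3])
    return out
```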
2.2 MB-LBP Operator
The MB-LBP operator is an improvement on the basic LBP operator. First, the whole image is divided into several n*n blocks. One block of n*n scale is taken out and the mean of the pixels it contains is calculated; this mean serves as the comparison object for the region. The same operation is then repeated for the 8 blocks nearest that block to obtain their respective means. Comparing these with the central block's mean yields an 8-bit binary value, which is the MB-LBP feature value of the block. The implementation process of the MB-LBP operator is shown in Fig. 2.
2.3 Multi-layer LBP Feature
A multi-layer LBP feature picture is obtained as follows: the feature picture produced by computing the LBP feature values on the original image is called the first-layer LBP feature picture. Based on this feature picture, the LBP feature values are
Fig. 2. Principle of MB-LBP operator
calculated again; the resulting feature picture is called the second-layer LBP feature picture, and deeper LBP feature pictures are obtained by analogy. A deeper LBP feature picture extracts deeper LBP feature information, but with more layers the amount of extractable feature information decreases. Therefore, when extracting deep LBP features, the number of fused LBP layers affects the final recognition effect. In this paper, an MB-LBP operator that fuses the first two layers is selected as the image feature. Figure 3 shows the extraction process of multi-layer MB-LBP features.
Fig. 3. Multi-layer MB-LBP feature extraction process
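A minimal sketch of this two-layer extraction (function names are ours; the strict greater-than comparison follows Sect. 2.1, and the image is cropped so its sides are multiples of n):

```python
import numpy as np

def mb_lbp_image(img, n):
    """MB-LBP feature image: each output value is the 8-bit code obtained by
    comparing the mean of an n*n block with the means of its 8 neighbouring
    blocks (n = 1 reduces to the basic LBP operator)."""
    img = np.asarray(img, dtype=float)
    h, w = img.shape[0] // n, img.shape[1] // n
    # Mean of every n*n block (the image is cropped to whole blocks).
    means = img[:h * n, :w * n].reshape(h, n, w, n).mean(axis=(1, 3))
    out = np.zeros((h - 2, w - 2), dtype=np.uint8)
    order = [(0, 0), (0, 1), (0, 2), (1, 2), (2, 2), (2, 1), (2, 0), (1, 0)]
    for r in range(h - 2):
        for c in range(w - 2):
            center = means[r + 1, c + 1]
            bits = [1 if means[r + dr, c + dc] > center else 0
                    for dr, dc in order]
            out[r, c] = sum(b << (7 - i) for i, b in enumerate(bits))
    return out

def two_layer_mb_lbp(img, n):
    """Second-layer feature: apply the MB-LBP operator to the first-layer
    MB-LBP feature picture, as described above."""
    return mb_lbp_image(mb_lbp_image(img, n), n)
```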
2.4 LBP Histogram
If only the histogram of a single LBP feature picture is counted, the local information of the picture is lost. So, whatever type of LBP feature is extracted, the image feature is computed by partitioning the image; each single area after division is called a cell. The implementation process is as follows:
(1) Extract the LBP feature picture of a single picture;
(2) Divide the LBP feature picture into multiple cells, and count the LBP histogram of each cell separately;
(3) Connect the histograms of all cells as the LBP histogram features of the picture (Fig. 4).
2.5 Feature Fusion
According to the above principles, multi-scale and double-layer MB-LBP histogram features are extracted. Various scales and different levels of MB-LBP histogram features are fused as the final extracted image features. In this paper, the fusion method is used to connect multiple MB-LBP histogram features into higher-dimensional composite features. Figure 5 shows the fusion process of different histogram features.
Fig. 4. MB-LBP feature images and histograms of different scales
Fig. 5. The fusion process of different histogram features
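Steps (1)-(3) of Sect. 2.4 and the concatenation-style fusion of Sect. 2.5 can be sketched as follows (our illustration; the 8*6 grid and 256 histogram bins follow the paper, and `np.array_split` tolerates cell sizes that do not divide evenly):

```python
import numpy as np

def lbp_histogram_feature(lbp_img, grid=(8, 6)):
    """Concatenate the 256-bin histograms of every cell of an LBP feature image."""
    rows = np.array_split(np.asarray(lbp_img), grid[0], axis=0)
    feats = []
    for row in rows:
        for cell in np.array_split(row, grid[1], axis=1):
            hist, _ = np.histogram(cell, bins=256, range=(0, 256))
            feats.append(hist)
    return np.concatenate(feats)

def fuse_features(feature_pictures, grid=(8, 6)):
    """Fuse several LBP/MB-LBP feature pictures (different scales and layers)
    into one higher-dimensional vector by concatenating their histograms."""
    return np.concatenate(
        [lbp_histogram_feature(p, grid) for p in feature_pictures]
    )
```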
3 Principal Component Analysis
The LBP feature dimension obtained by the above method is high, so the dimensionality of the data must be reduced. PCA (principal component analysis) is one of the most widely used dimensionality reduction algorithms; it extracts the useful information in the data based on the data's variance. Data processed by PCA are reduced from the original dimension to k dimensions, and the new k-dimensional data retain most of the information in the original data.
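A minimal SVD-based PCA sketch (our own helper, not a specific library's API; in practice a library implementation would be used):

```python
import numpy as np

def pca_reduce(X, k):
    """Project n-dimensional samples onto their top-k principal components.

    X: (m, n) data matrix, one sample per row.
    Returns (X_reduced, components).
    """
    Xc = X - X.mean(axis=0)                      # center the data
    # SVD of the centered data gives the principal directions in Vt,
    # ordered by decreasing variance.
    _, _, Vt = np.linalg.svd(Xc, full_matrices=False)
    components = Vt[:k]                          # top-k directions
    return Xc @ components.T, components
```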
4 Support Vector Machine
The data after PCA dimensionality reduction are classified by SVM. SVM (Support Vector Machine) is a widely used supervised learning classification model, originally proposed for binary, linear classification problems. In the linearly separable case, an optimal separating surface is sought that separates the two classes of data as well as possible; its function expression is w · x + b = 0. To maximize the classification margin between the two classes, the following quantity is minimized:

J(w) = (1/2) * ||w||^2    (1)

subject to the constraints y_i (w · x_i + b) >= 1, i ∈ {1, 2, 3, …, n}.
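For illustration only, a linear SVM minimising formula (1) together with an averaged hinge penalty can be trained by subgradient descent (a didactic sketch with assumed hyper-parameters; real experiments would use a library solver):

```python
import numpy as np

def train_linear_svm(X, y, lr=0.01, lam=0.01, epochs=200):
    """Minimal linear SVM trained by subgradient descent on the regularized
    hinge loss (lam/2)*||w||^2 + mean(max(0, 1 - y*(w.x + b))).

    y must be in {-1, +1}. Returns (w, b).
    """
    m, n = X.shape
    w, b = np.zeros(n), 0.0
    for _ in range(epochs):
        margins = y * (X @ w + b)
        mask = margins < 1                       # samples violating the margin
        grad_w = lam * w - (y[mask, None] * X[mask]).sum(axis=0) / m
        grad_b = -y[mask].sum() / m
        w -= lr * grad_w
        b -= lr * grad_b
    return w, b
```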
5 Experiment and Result Analysis
5.1 Experimental Environment
The experiment uses the ORL face database, the AR face database, and the ORL database with added noise. The ORL database contains 40 categories, each with 10 pictures of the same face (400 face pictures in total); the picture size is 112*92. The AR database contains 100 categories, each with 26 pictures of the same face (2600 face pictures in total); the picture size is 165*120. In some AR pictures the facial features are masked by sunglasses or scarves; since this paper mainly studies the effects of light, expression, angle and noise on recognition, the 12 pictures per category with sunglasses or scarves are removed and 14 pictures are kept for the experiments. The third database is the ORL database with Gaussian noise added; it is used to test the adaptability of the algorithm to noise.
5.2 Experimental Procedure
Step 1. Read the picture, convert it to grayscale, and resize it to 168*92, 336*213 and 504*324.
Step 2. Apply MB-LBP operators of the corresponding scales to the three picture sizes for feature extraction.
Step 3. Perform feature extraction on the three MB-LBP feature pictures again, obtaining a total of six MB-LBP feature pictures of different scales and levels.
Step 4. Divide each of the six feature pictures into 8*6 cells and count the MB-LBP histogram features.
Step 5. First fuse the MB-LBP histogram features of different levels, then fuse those of different scales.
Step 6. Perform PCA dimensionality reduction on the fused feature data.
Step 7. Divide the reduced data into a training set and a test set, and use SVM for classification and testing.
5.3 Experimental Results and Analysis
For the ORL database, 5 of the 10 images in each category were used as the training set and the rest as the test set to perform two sets of experiments. The experimental results on the ORL database are shown in Table 1.
Table 1. Recognition accuracy of different algorithms on the ORL database
Methods | Number of samples | Accuracy
PCA-SVM | 5 | 93.70%
LBP | 5 | 94.70%
PCA-LDA-SVM [7] | 5 | 97.40%
LBP+MB-LBP [8] | 5 | 98.90%
This paper | 5 | 99.50%
For the AR database, 5 of the 14 images in each category were used as the training set and the rest as the test set to conduct two sets of experiments. The experimental results are shown in Table 2.

Table 2. Recognition accuracy of different algorithms on the AR database
Methods | Number of samples | Accuracy
LBP+LDRC-Fisher [6] | 5 | 75.20%
LBP+MB-LBP [8] | 5 | 94.10%
This paper | 5 | 98.60%
For the noise-added ORL database, 5 of the 10 pictures in each category are used as the training set and the rest as the test set. The experimental results are shown in Table 3.

Table 3. Recognition rate of different algorithms on the ORL database after adding noise
Methods | Number of samples | Accuracy
PCA-SVM | 5 | 83.20%
LBP | 5 | 86.70%
PCA-LDA-SVM [7] | 5 | 90.50%
LBP+LDRC-Fisher [6] | 5 | 85.20%
LBP+MB-LBP [8] | 5 | 96.30%
This paper | 5 | 98.60%
It can be seen from Table 1 that on the ORL database, where illumination changes little, traditional PCA-SVM or standard LBP features can achieve good recognition results, but their accuracy rarely exceeds 95%. The method in this paper not only improves greatly on the traditional methods but also improves on the other literature methods. Table 2 shows that on the AR database, with greater illumination change and more classes, the recognition accuracy of the other literature methods drops significantly, while the
method proposed in this paper shows no obvious decline. As can be seen from Table 3, on the ORL database with added noise, the traditional methods are clearly not robust to noise. The recognition rates of the other literature methods also decrease to varying degrees, but the algorithm in this paper still maintains a clear superiority.
6 Conclusion
Based on the above analysis, a feature extraction method based on the weighted fusion of multi-scale, double-layer MB-LBP features is proposed. This method not only retains the standard LBP feature's ability to describe the texture of face images but also grasps the overall information. In addition, deep LBP feature extraction reinforces the descriptive capability of standard LBP features. As a result, the accuracy of face recognition in normal environments is improved, and the ability to adapt to adverse factors such as light and noise is strengthened.
Acknowledgments. This paper is supported by the National Natural Science Foundation of China (51274011, 61772033).
References
1. Best-Rowden, L., Jain, A.K.: Longitudinal study of automatic face recognition. IEEE Trans. Pattern Anal. Mach. Intell. 40(1), 148–162 (2018)
2. Lan, H., Jiang, D., Yang, C., Gao, F., Gao, F.: Y-Net: hybrid deep learning image reconstruction for photoacoustic tomography in vivo. Photoacoustics 20, 100197 (2020)
3. Xie, S., Hu, H.: Facial expression recognition with FRR-CNN. Electron. Lett. 53(4), 235–237 (2017)
4. Jain, N., Kumar, S., Kumar, A., et al.: Hybrid deep neural networks for face emotion recognition. Pattern Recogn. Lett. 115, 101–106 (2018)
5. Zheng, M., Fenglian, L., Riwei, W.: A sample face recognition method based on LBP-HOG feature fusion in sub-mode. J. Optoelectron. Laser 30(12), 1309–1316 (2019). (in Chinese)
6. Bing, L., Yan, X., Qiang, M.: Face recognition based on LBP feature and improved Fisher criterion. Comput. Eng. Appl. 53(16), 155–160 (2017). (in Chinese)
7. Jingze, X., Zuohong, W., Yan, X., et al.: Face recognition based on PCA, LDA and SVM algorithms. Comput. Eng. Appl. 55(18), 34–37 (2019). (in Chinese)
8. Bing, L., Yan, X., Qiang, M.: Face recognition based on weighted fusion of LBP and MB-LBP features. Comput. Eng. Des. 39(02), 551–556 (2018). (in Chinese)
Application and Challenge of Blockchain in Supply Chain Finance Tianyang Huang(&) College of Management and Economics, Tianjin University, Tianjin 300072, China [email protected]
Abstract. Along With the progress of the times and the rapid development of science and technology, the era of the Internet of Everything has arrived, and the rise of blockchain has opened a new chapter in technology. It has gradually been applied to the fields of finance, the Internet of Things, logistics, and public services, and has also developed rapidly in financial supply chain. With the support of blockchain technology, the digitization of accounts receivable, bills, warehouse receipts and other assets can be realized, and the data is retained to reduce the risk of ticket fraud and repeated pledge, alleviate the problem of information asymmetry, and reflect the blockchain. The smart contract properties of the company ensure the smooth development of supply chain financial services. However, blockchain technology has been widely used in the field of financial technology, but it still faces challenges in business, technology, and risk management. In order to better apply blockchain to the field of supply chain finance (SCF), this article starts from the status quo of SCF. Research and analyze industry pain points in traditional supply chain financing, and propose the application superiority of blockchain in SCF. At the same time, government agencies should actively adjust regulatory strategies to find a regulatory method suitable for the country’s development. Keywords: Supply chain finance
Blockchain Financing Financial data
1 Introduction
As the supply chain model has matured, supply chain finance has gradually linked more related industries together. In this scenario, multiple institutions such as suppliers, core companies, and financial companies coexist, and this multi-agent model with moderate transaction frequency closely resembles a typical blockchain scenario. In response to the financing difficulties, financing chaos and financing risks faced by SMEs, blockchain technology can realize digital assets, and its application can reduce risks such as forgery and repeated pledging. It can open up the trust transmission mechanism of supply chain finance, improve asset liquidity, reduce the financing costs of small and medium-sized enterprises, deepen financial resources, and effectively support the development of the real economy.
© The Editor(s) (if applicable) and The Author(s), under exclusive license to Springer Nature Singapore Pte Ltd. 2021 M. Atiquzzaman et al. (Eds.): BDCPS 2020, AISC 1303, pp. 1372–1377, 2021. https://doi.org/10.1007/978-981-33-4572-0_198
2 The Development of Supply Chain Finance
Different types of financing nodes are distributed in the network formed by supply chain finance, providing a variety of financial services, such as credit loans and financial management consulting. Supply chain finance can stabilize the supply and marketing channels of core enterprises and help the weaker companies in the supply chain overcome financing difficulties, thereby increasing the utilization of funds [1]. Its essence is to solve financing problems by obtaining real trade background information in a timely manner and to help companies revitalize their liquid assets; this is the main difference between supply chain finance and traditional financial services. The concept of SCF comes from abroad, and it mainly includes three financing modes: the accounts receivable mode, the prepayment mode and the movable property mortgage mode. It is currently developing rapidly in China, but overall it is still in its infancy. In the 1980s, world-class corporate giants sought to minimize supply chain management costs through global procurement and outsourcing. With the deepening of global trade, supply chain management gradually expanded from initial logistics and information flow management to solution management. The financial value bottleneck and the rediscovery of financial value as a way to reduce financing costs prompted the emergence of SCF. Shenzhen Qianhai Development Bank is a pioneer in this field in China. By the end of 2001, it had carried out pilot inventory financing operations in its Guangzhou and Foshan branches, and in 2006 it launched the "Supply Chain Finance" brand in the banking industry. Since then, the SCF model has been widely implemented in the domestic banking industry [2]. At this stage, both joint-stock commercial banks and large commercial banks have set up supply chain finance departments.
For example, CCB cooperated with Treasure Island, Dunhuang.com, and JD.com to develop online financing platforms; in 2012, Bank of China and JD.com launched a supply chain financial service platform. At present, in addition to commercial banks, some e-commerce platforms and other financial market participants have gradually entered the field of supply chain finance [3]. They have come to the fore with their strong financial strength, flexible lending methods and big data resources, and have become market leaders. In 2013, Suning announced that it would fully open up supply chain financial services to SMEs. Its credit business has covered 7 well-known banks at home and abroad, such as Bank of Communications, Bank of China, China Everbright, Citigroup, Standard Chartered Bank, Ping An and HSBC. According to investigation and analysis, there are three main types of domestic SCF at this stage: the conventional supply chain finance model, for example China CITIC Bank + Haier, enterprise cooperation with Internet platforms, Bank of Communications + commercial banks, and so on; e-commerce platforms that provide various financial services for upstream and downstream companies in the supply chain; and the "e-commerce + P2P" model, which integrates lending resources through cooperation and acquisitions to provide financing for SMEs and to serve personal demands such as industrial and commercial loans, commercial loans and financing [4].
3 Industry Pain Points in Traditional Supply Chain Finance
Supply chain financing is driven by the business data used for risk assessment by upstream and downstream companies in the industry chain. The transparency and fluency of data flow is an important foundation for supply chain finance to play its role. In the actual operation of supply chain financial services, in addition to business operation risks, there are often problems of data information asymmetry and transaction information forgery. The first is information asymmetry. In SCF, information is unevenly distributed: the supplier's cargo information is stored in the supplier's warehouse system, the logistics company stores transaction information, and the banking system stores fund and cash flow information, while much of this important information is controlled by the core enterprise [5]. This imbalance makes it difficult for each participant to grasp the progress of a transaction. The efficiency of the supply chain system is seriously affected by information asymmetry, which ultimately impedes the construction of the credit system. Information asymmetry has seriously affected the interconnection of information, prompting financial companies to be more cautious in risk control. The second is the authenticity of the trade background. Based on the actual trade situation of all parties in the supply chain in the context of the real economy, commercial banks use the accounts receivable, prepayments and inventory generated in transactions as pledge/collateral to provide services for downstream companies [6]. In the financing process, the actual transaction volume, accounts receivable and corporate guarantees are the fundamental guarantee for credit financing.
However, actual transactions are prone to problems, such as the use of forged trade contracts for financing, questions about the legality of accounts receivable, and disputes over the ownership of pledges [7]. Buyers and sellers may use false transactions to maliciously obtain cash from banks, or banks may blindly grant credit to borrowers with no real trading background, either of which brings great risks. The third is business risk. Supply chain finance establishes a first source of repayment independent of corporate credit risk through self-liquidating transaction structure design, professional operation process arrangements and independent third-party supervision. However, this places high demands on the strictness and standardization of operations and is prone to operational risk. Therefore, the integrity of the operating system, the rigor of operation and the implementation of operational requirements directly affect the effectiveness of the repayment source, which in turn determines whether credit risk can be effectively prevented [8]. Fourth, financing costs are high. The financing cost of SMEs is another important factor restricting the development of SCF. SCF involves a wide range of transactions and many companies, so financial institutions need to invest considerable time and capital to evaluate the authenticity of each transaction [9]. Because SCF has many nodes and covers a wide geographical area, it is difficult for financial institutions to track and investigate all transaction channels, and it is impossible to evaluate the value of the products and services in every transaction. The long financing time and the increased evaluation costs impose unbearable financing costs on SMEs.
4 Application of Blockchain in Supply Chain Finance
In a blockchain-based solution, a consortium chain network can be established in a node-controllable manner, covering upstream and downstream companies, financial companies, financial institutions, banks and other trade finance participants. The transaction data of each node are then linked, and information such as subject qualifications, multi-frequency transactions and commodity circulation is recorded on the blockchain. The purpose of the chain is to keep the nodes in sync, so that financial institutions can see the real trade situation of second- and third-tier SMEs. Companies with financing needs must register their contracts and claims to ensure that these assets cannot be modified or duplicated after digitization. Finally, transferring these asset equity certificates within the alliance achieves point-to-point connections and further enhances the liquidity of digital assets. The blockchain-based supply chain financial solution builds a unified method for verifying the authenticity of industry data services, reduces information asymmetry, and launches new services based on the properties of smart contracts [10]. First of all, the timestamps and data of blockchain technology cannot be modified, which can to a certain extent establish the authenticity of the trade background. From suppliers, core companies and distributors to logistics companies, warehousing regulators, financial institutions and other participants, blockchain technology can be used to form and share every transaction in every part of the supply chain: each transaction forms a network node whose information is confirmed by the entire network, logistics information is reflected by the geographic location of the goods, and fund information notifies payees and financial institutions in time through updated payment, accounts receivable and accounts payable information.
The information is updated in a timely and accurate manner for both parties to the transaction and for financial institutions, and warehousing and regulatory information is provided digitally to the financial institutions that offer movable property mortgage financing. Both parties obtain first-hand, real and effective data from the source and establish a new and reliable supply chain credit system, thereby alleviating the credit risk problem. Secondly, blockchain technology can improve the credit qualifications of entities in the SCF and reshape the credit system. The traditional supply chain financing model always depends on the core enterprise; it is a centralized model. Blockchain technology is decentralized, which can ensure the integrity and fluency of information between entities on the chain, improve the credit qualifications of each subject, and establish a distributed credit system. Through blockchain technology, the supply chain financial model can be further expanded, extending the traditional 1+N model to an M+N model, making it easier for SMEs to obtain financial services. Third, the smart contract properties of the blockchain can be integrated into the supply chain financial business to improve the operational efficiency and risk control level of the entire supply chain. Smart contracts can provide application services for project establishment, due diligence, business approval, factoring agreement/contract signing, account registration and transfer, trade financing (loan issuance), post-loan
management, account clearing and other factoring services. These services help factoring companies establish and improve the "Internet + finance" business model, so as to more effectively improve their ability to acquire customers, participate in the industry, and identify and control risks, and to provide better financial services for upstream and downstream companies, thereby forming a complete supply chain financial ecosystem.
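The tamper-evidence and timestamping properties that this section relies on come from hash-chaining the records. The following toy sketch is illustrative only, not part of any production SCF platform, and its record fields are hypothetical; it shows why a modified receivable record is immediately detectable:

```python
import hashlib
import json
import time

def _digest(prev_hash, payload, ts):
    """Deterministic SHA-256 over the previous hash, payload and timestamp."""
    blob = json.dumps({"prev": prev_hash, "payload": payload, "ts": ts},
                      sort_keys=True)
    return hashlib.sha256(blob.encode("utf-8")).hexdigest()

class Ledger:
    """A minimal append-only, hash-chained record store."""
    def __init__(self):
        self.blocks = []

    def add(self, payload):
        prev = self.blocks[-1]["hash"] if self.blocks else "0" * 64
        ts = time.time()
        self.blocks.append({"prev": prev, "payload": payload, "ts": ts,
                            "hash": _digest(prev, payload, ts)})

    def verify(self):
        """Recompute every digest; any edited or reordered block fails."""
        prev = "0" * 64
        for b in self.blocks:
            if b["prev"] != prev or _digest(b["prev"], b["payload"], b["ts"]) != b["hash"]:
                return False
            prev = b["hash"]
        return True

ledger = Ledger()
ledger.add({"doc": "receivable", "supplier": "S1", "amount": 100000})
ledger.add({"doc": "pledge", "bank": "B1", "ref": 0})
print(ledger.verify())                          # True
ledger.blocks[0]["payload"]["amount"] = 999999  # attempted tampering
print(ledger.verify())                          # False
```

A real consortium chain adds consensus among member nodes and digital signatures on each record, but the detection logic is the same recomputation shown in verify().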
5 Challenge and Thinking
The application of blockchain effectively addresses the risk control problem, promotes the balanced development of the industrial chain, and builds a bridge of trust between enterprises. However, the development of this platform in actual financing operations is not yet mature, and many problems remain to be solved in blockchain financing platforms. First of all, the most important issue is compliance. Integrating the blockchain technology model into the supply chain requires attention to whether the operating plan meets the regulatory requirements of different industries and whether it will raise compliance or legal issues. For example, in the actual operation of supply chain financing, banks pay close attention to the legal effect of the "transfer notice" of accounts receivable claims and require primary suppliers or core enterprises to sign a "debt transfer agreement"; otherwise the bank will not grant credit. Therefore, blockchain-based solutions must strictly abide by current supply chain financial laws and regulations. Second, corporate data privacy management also faces challenges. The supply chain finance consortium chain expands the boundaries of effective collaboration between enterprises, but at the same time core companies worry about the leakage of core data such as financial, taxation and employee salary information. Therefore, blockchain-based supply chain financial solutions need improved privacy management techniques, such as adding grouped and hierarchical access control, setting up identity authentication for member node permissions, and avoiding transaction data leakage. Finally, the main model of SCF has not changed. Blockchain financial services use emerging information technology to provide more effective solutions for supply chain finance, but the current core-enterprise-dominated model of SCF will not change in the short term. Blockchain technology by itself cannot solve key risk control problems such as moral hazard; controlling these risks still depends on the dominant core enterprise.
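The "grouping and hierarchical access control" suggested above can be pictured as a permission table consulted before a member node touches a dataset. Below is a minimal sketch, with purely hypothetical role and dataset names:

```python
# Hypothetical (role, dataset) -> allowed-actions table for member nodes.
PERMISSIONS = {
    ("core", "trade_data"): {"read", "write"},
    ("supplier", "trade_data"): {"read"},
    ("auditor", "trade_data"): {"read"},
    # Sensitive core-enterprise data (finance, taxation, salaries) is not
    # listed at all, so no other participant can access it.
}

def authorized(role, dataset, action):
    """Return True only when the role is explicitly granted the action."""
    return action in PERMISSIONS.get((role, dataset), set())

print(authorized("core", "trade_data", "write"))      # True
print(authorized("supplier", "salary_data", "read"))  # False (default deny)
```

In a consortium chain this check would sit behind the member node's identity authentication, so that a node's certificate determines its role.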
6 Conclusion
Blockchain technology provides a new solution for the pain points in the development of SCF. It fills gaps in the development of SCF, but it does not fundamentally change the commercial relationships between financial institutions. As blockchain applications develop, their compliance and legal supervision issues must be continually explored so that blockchain can better play its value role. "Coin" applications and "chain" applications need to be treated differently. For "coin" applications,
financial risks should be strictly prevented; for "chain" applications, the potential of blockchain should be explored within a framework of legal supervision. There are still many uncertain factors in the application of blockchain technology. Therefore, the practical application of blockchain technology must first be promoted, its advantages and disadvantages explored through application in multiple scenarios, and the application mode continuously optimized to improve its application value. The second priority is to promote coordination and teamwork between industries: industry associations and alliances should jointly promote the linkage and cooperation of upstream and downstream entities in the industrial chain, strengthen communication with foreign industry bodies, and enhance the right to speak in the development of international standards. Finally, close attention should be paid to the challenges that emerging technologies pose to the financial regulatory system. The application of blockchain should follow the core principles and rules of the financial industry and combine relevant market practices with in-depth research on emerging technologies. Constrained by the risk management model and the supervisory and legal framework, the application of the technology must be based on law and fully consider the applicability of existing laws and regulatory rules.
References
1. Swan, M.: Blockchain: Blueprint for a New Economy. O'Reilly, Newton (2015)
2. Sikorski, J.J., Haughton, J., Kraft, M.: Blockchain technology in the chemical industry: machine-to-machine electricity market. Appl. Energy 195, 234–246 (2017)
3. Eyal, I.: Blockchain technology: transforming libertarian cryptocurrency dreams to finance and banking realities. Computer 50(9), 38–49 (2017)
4. Gelsomino, L.M., Mangiaracina, R., Perego, A., et al.: Supply chain finance: a literature review. Int. J. Phys. Distrib. Logistics Manag. 46(4), 348–366 (2016)
5. Stein, J.C.: Information production and capital allocation: decentralized versus hierarchical firms. J. Financ. 57(5), 1891–1921 (2002)
6. Porter, M.E.: The competitive advantage of nations. Harvard Bus. Rev. 68(2), 73–93 (1990)
7. Hedges, L.V.: Effect sizes in cluster-randomized designs. J. Educ. Behav. Stat. 34(4), 341–370 (2007)
8. Berger, A.N., et al.: Relationship lending and lines of credit in small firm finance. J. Bus. 68(3), 351–381 (1995)
9. Jonnson, A.: Blockchain revolution: how the technology behind bitcoin is changing money, business and the world. Acuity 3(11), 65 (2016)
10. Boot, A.A., Thakor, A.V.: Moral hazard and secured lending in an infinitely repeated credit market game. Int. Econ. Rev. 35(4), 899–920 (1994)
Preparation and Function of Intelligent Fire Protection Underwear Based on ECG Monitoring Feng He, Jinglong Zhang, and Yi Liu(&) Beijing Institute of Fashion and Technology, Beijing 102249, China [email protected]
Abstract. Research on the efficacy of intelligent clothing has been a hot topic at home and abroad. In this work, a variety of common underwear fabrics, including cotton, blended and polyester fabrics, were screened. Using a cone calorimeter, a vertical combustion apparatus, an air permeability tester and a moisture permeability tester, the flame-retardant and comfort performance of the fire underwear currently on the market was studied and the differences between the fabrics were compared. On this basis, the polyester fabric was chosen as the carrier for intelligent fire underwear with an ECG monitoring function: its flame-retardant performance is superior to that of the other two fabrics, while its comfort properties, such as thermal insulation and moisture permeability, differ only slightly. Electrodes of 100% spandex with 15%–18% silver plating per square meter are selected and attached by machine stitching to realize long-term monitoring of the human heart rate and electrocardiogram.
Keywords: Flame retardant performance · Comfort performance · Fabric electrode · Electrocardiogram · Heart rate
1 Introduction
To enhance fire protection, current fire suits tend to adopt a multi-layer design, which increases the weight of the fire suit, but research on the wearing comfort of fire underwear and on properties such as thermal insulation and flame retardancy is not deep [1–4]. A positioning device installed on a firefighter can be used for real-time location monitoring, but the heavy fire coat does not fit the contours of the human body, which makes it unsuitable for mounting ECG monitoring equipment to track a firefighter's vital signs in real time. Therefore, to achieve real-time ECG monitoring of firefighters, this research undertakes the development and application of intelligent fire underwear. Domestic research on smart clothing for monitoring patients' physical condition covers two types of ECG-monitoring smart clothing: wearable contact and non-contact. In suppressing noise interference and reducing baseline drift of the ECG waveform, wearable contact smart clothing is superior to non-contact monitoring smart clothing, so in the subsequent production of the intelligent fire underwear, conductive fabrics will be made into electrodes to achieve dynamic ECG monitoring [5–7].
© The Editor(s) (if applicable) and The Author(s), under exclusive license to Springer Nature Singapore Pte Ltd. 2021 M. Atiquzzaman et al. (Eds.): BDCPS 2020, AISC 1303, pp. 1378–1384, 2021. https://doi.org/10.1007/978-981-33-4572-0_199
2 Experimental Apparatus and Methods

2.1 Experimental Apparatus

The main experimental apparatus are listed in Table 1.

Table 1. Main experimental apparatus

Number  Name                                   Test performance
1       Cone calorimeter                       Fabric flame-retardant performance
2       Vertical combustion apparatus          Smoldering combustion time
3       FX-3300 air permeability tester        Fabric air permeability rate
4       DH-450 moisture permeability tester    Fabric moisture permeability
5       Wearable ECG recorder                  ECG signal transmission

2.2 Methods
Generally speaking, the temperature requirements for the workshop process in a textile mill are relatively loose, while the relative humidity requirements are more stringent; for example, between winter and summer the workshop temperature may vary by 10 °C as long as the labor protection requirements are satisfied. Consider the case where the workshop temperature target is within the range of 20–30 °C and the relative humidity target is 60 ± 5%. In this situation the workshop has only residual heat and no residual moisture (in fact, most textile mills are in such a state), the air delivered into the workshop changes along the constant enthalpy-humidity-ratio line (i.e. e = ∞), and the temperature and humidity of the workshop fluctuate throughout the year along the 60% relative humidity line, as shown in Fig. 1 (the enthalpy-humidity chart).

2.2.1 Flame Retardant Performance Test
(1) Cone calorimeter. The cone calorimeter determines key parameters of the polymer such as the heat release rate (HRR), total heat release (THR) and mass loss rate (MLR), from which the flame-retardant properties can be judged. For example, the maximum of the HRR is the peak heat release rate (pk-HRR); the larger the HRR and pk-HRR, the more heat the material releases when burning and the greater the fire hazard. Each sample was tested twice, for a total of 6 experiments [8].
2.2.2 The Vertical Combustion Apparatus
Using the vertical burning test instrument according to GB/T 5455-2014, "Textiles—Determination of damaged length, afterglow time and afterflame time of fabrics in the vertical direction", and after pretreating the samples according to the national standard, the average permeability rate is obtained as [9, 10]:

R = (qv / A) × 100% (mm/s)    (1)
2.2.3 DH-450 Moisture Permeability Test Device
According to GB/T 12704.1-2009, "Test method for water-vapour transmission of fabrics—Part 1", the assembled moisture permeability cup is placed in the constant temperature and humidity box and equilibrated for 1 h, then covered with the lid and balanced in the silica-gel desiccator for 30 min before weighing mass m1. The cup is then returned to the constant temperature and humidity box for another 1 h, covered and balanced in the desiccator for 30 min again, and weighed as mass m2. The difference m2 − m1 between the two weighings gives the moisture permeability, calculated as

WVT = ΔM / (A × T)    (2)

where WVT is the moisture permeability, g/(m²·h); ΔM = m2 − m1 is the mass difference, g; A is the effective test area, m²; and T is the test time, h.
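As a numerical check of Eq. (2) — not part of the original paper — the helper below reproduces the moisture permeability values in Table 4 from the two weighings when an effective cup area of 0.00283 m² and a 1 h test time are assumed (the area is inferred from the published data, not stated in the paper):

```python
def wvt(m1_g, m2_g, area_m2=0.00283, time_h=1.0):
    """Moisture permeability WVT = dM / (A * T) in g/(m^2*h), per Eq. (2)."""
    return (m2_g - m1_g) / (area_m2 * time_h)

# Sample 1-1 from Table 4: m1 = 195.645 g, m2 = 196.266 g
print(round(wvt(195.645, 196.266), 3))  # 219.435, matching Table 4
```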
3 Results and Discussion

3.1 Vertical Burning Instrument Analysis

Fig. 1. Average vertical flame burning time of the fabrics
From the data analysis and the experimental phenomena, the following conclusions can be drawn. The three groups of samples behave differently during combustion. The afterflame time and damaged length of the three samples are compared with the national fabric flame-retardant grades in Fig. 2: none of the samples meets the first- or second-class flame-retardant standard, and only sample 1 comes close to the second-class standard. The flame-retardant performance of sample 1 (100% polyester) is superior to that of sample 2 (100% cotton) and sample 3 (65% cotton / 35% PET).

3.2 Cone Calorimetry Analysis
In order to better test the flame-retardant performance of the 3 groups of samples, the samples were also tested with a cone calorimeter. The cone calorimeter can scientifically evaluate, under laboratory conditions, the burning behavior, flame retardancy and smoke suppression of a material in an actual fire, and thus characterize its combustion performance more accurately. Under a radiation level of 35 kW/m², the time to ignition (TTI), heat release rate (HRR), peak heat release rate (pk-HRR), time to peak heat release rate (tpk-HRR), total heat release (THR), mass loss rate (MLR), peak mass loss rate (pk-MLR), time to peak mass loss rate (tpk-MLR), effective heat of combustion (EHC) and total heat release (pk-THR) were determined. The detailed data are given in Tables 2 and 3.

Table 2. Time to ignition and times to peak values

Number              TTI (s)  tpk-HRR (s)  tpk-MLR (s)
100% PET            51       125          181
100% cotton         12       13           14
65% cotton 35% PET  11       31           17

Table 3. Cone test results

Number              pk-HRR (kW/m²)  av-HRR (kW/m²)  pk-MLR (g/s)  pk-CO (kg/kg)
100% PET            203.59          58.56           0.12          55.006
100% cotton         207.35          68.43           0.59          0.002
65% cotton 35% PET  252.94          67.92           0.59          0.158
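The peak quantities in Tables 2 and 3 are simply the maxima of the measured curves together with the times at which they occur. As a hypothetical illustration (the series below is invented, not measured data):

```python
# Hypothetical HRR samples as (time_s, hrr_kw_per_m2) pairs.
hrr_curve = [(0, 0.0), (30, 42.5), (60, 120.3), (90, 203.6), (120, 150.1)]

# pk-HRR is the maximum HRR; tpk-HRR is the time at which it occurs.
tpk_hrr, pk_hrr = max(hrr_curve, key=lambda point: point[1])
print(pk_hrr, tpk_hrr)  # 203.6 90
```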
Fig. 2. Cone calorimetry curves
From the analysis of Table 2, the following conclusions are obtained. Samples 2 and 3 contain a large proportion of cotton and have short ignition times; compared with them, sample 1, with an ignition time of 51 s, has a certain initial flame-retardant ability. The mass loss rate is the rate of change of the sample mass over time during combustion; it reflects the degree of thermal cracking, volatilization and combustion of the material under a given fire intensity. The time for sample 1 to reach its maximum mass loss rate is 181 s, much longer than for samples 2 and 3, and the mass loss of sample 1 is relatively stable during combustion; its thermal cracking, volatilization and combustion are less severe than those of samples 2 and 3.

3.3 Fabric Comfort Analysis
The three groups of samples were conditioned in the standard atmosphere for 2 h according to GB/T 6529-2008 and then measured at a temperature of 21.7 °C and a constant relative humidity of 59% using the FX-3300 air permeability tester. At the same time, according to the national standard GB/T 12704.1-2009, "Test method for water-vapour transmission of fabrics", the 3 groups of samples were tested in the DH-450 moisture permeability tester at a temperature of 38 °C and a relative humidity of 90%. The two kinds of test data are given below.

Table 4. Moisture permeability test results

Number  Mass1 (g)  Mass2 (g)  Moisture permeability (g/(m²·h))
1–1     195.645    196.266    219.435
1–2     196.339    196.975    224.735
1–3     194.807    195.416    215.194
2–1     195.187    195.739    195.053
2–2     196.152    196.698    192.933
2–3     195.761    196.375    216.961
3–1     195.981    196.578    210.954
3–2     195.075    195.656    205.3
3–3     195.639    196.209    204.13
As can be seen from Table 4, the moisture permeability of the samples ranks 1 > 3 > 2. Comparing the air permeability measured with the FX-3300 tester and the moisture permeability, the analysis shows that sample 1, with its lower thickness and weight per square meter, relies on larger pores and outperforms samples 2 and 3 in both air and moisture permeability. Since intelligent fire underwear requires good comfort performance, sample 1 is the preferred choice among the 3 groups in terms of comfort. According to the test data obtained above, the flame-retardant performance ranks 1 (100% PET) > 3 (65% cotton 35% PET) > 2 (100% cotton), and the air and moisture permeability also rank 1 (100% PET) > 3 (65% cotton 35% PET) > 2 (100% cotton).
4 Conclusion
Since research on flame-retardant fabrics in China started relatively late compared with that abroad, the protection of firefighters has been limited to the flame-retardant ability of fire clothing and to the positioning and audio devices carried with it, and there has been little real-time monitoring of the physical condition of firefighters. In this experiment, we completed the selection of the intelligent fire underwear carrier through tests of flame-retardant performance, air permeability and moisture permeability, and completed the selection of the fabric electrode by testing the resistance of the conductive fabrics. The 100% PET sample, 0.6 mm thick, has a moisture permeability of 219.788 g/(m²·h); sample 2, 0.728 mm thick, has a moisture permeability of 201.649 g/(m²·h); and sample 3, 0.616 mm thick, has a moisture permeability of 205.889 g/(m²·h). We found that the moisture permeability decreases somewhat as the thickness of the specimen increases.
Acknowledgments. This research was supported by an open project of the key laboratory (2017zk02) and by the Beijing Institute of Fashion and Technology young talent special plan project (biftqg201909).
References
1. Chen, Y.: The Development of a Single Lead Electrocardiogram Machine Learning Machine, 04–17 (2010). ISSN 1672-8270
2. Song, J.: Fabric Used for ECG Signal Acquisition Electrode Technology, 10–14 (2015). ISSN 1000-9787
3. Guan, G.: High Permeability Fire Underwear Comfort Test and Evaluation, 59–63 (2016). ISSN 1001-2044
4. Tang, S.: New firefighters with dry and comfortable underwear. China's Lab. Prot. Supplies 3, 201–209 (2018)
5. Textile combustion performance test: oxygen index method, GB/T 5454–1997
6. Li, L.: The Thermal Wet Comfort of Firefighter Protective Clothing Fabrics. China Textile Press, Beijing (2015)
1384
F. He et al.
7. Chi, Z.: Fabric electrodes for ECG signal acquisition technology: a review. J. Biomed. Eng. 35(10), 15–20 (2018)
8. Thornton, W.: The relation of oxygen to the heat of combustion of organic compounds. Philos. Mag. J. Sci. 33(196), 28–37 (1917)
9. Determination of damaged length, smoldering and burning time of textiles in the vertical direction, GB/T 5455–2014
10. Standard atmospheres for conditioning and testing of textiles, GB/T 6529–2008
The Design and Implementation of Corpus Labeling System for Terminology Identification in Fishery Field Yawei Li1, Xin Jiang1, Jusheng Liu1, and Sijia Zhang1,2(&) 1
School of Information Science and Engineering, Dalian Ocean University, Dalian 116023, China [email protected] 2 Key Laboratory of Environment Controlled Aquaculture, Ministry of Education, Dalian 116023, China
Abstract. Term recognition is basic work in natural language processing for the fishery field. At present, term recognition is realized by machine learning algorithms, which require a large corpus to train the algorithm's model. Most corpus labeling is still done manually, and the quality and speed of corpus labeling are problems to be solved in this field. Therefore, a platform is urgently needed to assist in evaluating corpus labeling quality, so as to make article labeling more efficient and accurate. Keywords: Semantic annotation · Artificial intelligence · Information retrieval · Fishery terminology · Natural language processing
1 Introduction

Natural language is a tool for humans to communicate and express emotions. To realize human-computer interaction in a true sense, the computer must understand the meaning that humans want to express, and further analyze and process it. In recent years, the rapid development of Internet technology has promoted information processing technology to a large extent and has continuously raised new demands for it. Current natural language processing tasks include text analysis, word dictionaries, and annotation of names, etc. Although there are many such tasks, the processing is difficult, research is limited, and progress is slow; the processing of natural language needs to be developed further [1, 2]. At present, most corpus annotation relies on manual methods, and the quality and speed of corpus annotation are problems to be solved in this field. Therefore, a platform is urgently needed to assist in evaluating corpus annotation quality, so as to make article annotation more efficient and accurate [3, 4]. Therefore, corpus annotation in the fishery field is studied in this paper. Terminology in the fishery field is annotated manually, and then human natural language is input into the computer in combination with the machine's digital language, so that the computer can understand human language and what it wants to express through the combination of the manual © The Editor(s) (if applicable) and The Author(s), under exclusive license to Springer Nature Singapore Pte Ltd. 2021 M. Atiquzzaman et al. (Eds.): BDCPS 2020, AISC 1303, pp. 1385–1390, 2021. https://doi.org/10.1007/978-981-33-4572-0_200
1386
Y. Li et al.
annotation articles and the data model prepared in advance. In this way, human beings do not need to actively learn difficult and boring machine languages, and machines can understand human thoughts, thus realizing interaction between human beings and computers [5–7].
2 The Key Technology

2.1 Eclipse
Eclipse is an open-source, widely used system development platform that builds its development environment through plug-in components. Its main strength is that it brings a large, standardized, and sustainably maintained set of components. Because of its open source code and powerful extensibility, it has been well received by users, who are also happy to develop related sub-products for it; MyEclipse is one of the better products developed in this way. Thanks to the extensive open-source community, it continues to improve and mature, making it suitable for the majority of users.

2.2 The JSP Technology
JSP was developed by Sun Microsystems. On the one hand, its functions combine with traditional HTML code; on the other hand, it can be extended beyond the original page. Through JSP script code, developers can design pages whose code is generated, converted, compiled, and executed. By virtue of its cross-platform characteristics, JSP can run on a variety of operating platforms. Users need only a few operations to fulfill a large number of service requirements, which meets the needs of the client. At the same time, JSP reduces the browser's dependence on web-page technology, so that the browser can run dynamic web pages without complicated conditional support, lowering the browser's requirements for Internet access [8–11].

2.3 MySQL
MySQL is an open-source relational DBMS (Database Management System) developed by the Swedish company MySQL AB. It is one of the more popular databases in website construction at present. Its small size, fast speed, and low total cost of ownership, and especially its open-source nature, make many small and medium-sized enterprises choose MySQL as their first choice of database product.
The Design and Implementation of Corpus Labeling System
1387
3 Design of the System

3.1 User Demand Analysis
The analysis of user requirements of this system covers the following three aspects:

(1) Labeling users: users can enter the system to query rule explanations, then conduct a self-test of their labeling ability, for which the system gives a corresponding score. Users who pass the test can perform formal article labeling and obtain corresponding bonus points for exchange.
(2) Acquiring users: users who want articles can find the labeled articles in the system, obtain them, and make use of them.
(3) System administrator: the administrator handles user management, real-time rule updates, publication of articles to be labeled, and management of points and reward exchange.

3.2 Requirements Analysis
The system must have the following functions and characteristics, because it needs to support user testing, user labeling, and real-time updates by the administrator:

(1) User test
① Rule description: the system's usage rules, labeling rules, scoring rules, and points to note during use are introduced in detail, and the user can return to view the specific rules at any time.
② Test: before annotating, each user must take a test to check whether they have the ability to annotate, and the system rates them. Users with higher ratings have certain advantages in formal annotation. Every user has multiple opportunities to test and can retry repeatedly until passing and being formally admitted to labeling.
③ Scoring system: after each user labels, the system scores the work and converts the scores into corresponding points for exchange.

(2) Text tagging
① Article tagging: after passing the test, the user gains the opportunity for formal tagging. Because users understand different articles differently, they can select articles before tagging and then tag formally; after tagging, the system gives a corresponding score.
② Reward: users earn points each time they tag, and the points can be exchanged for rewards.

(3) System management
① User management: the administrator manages users, including login, registration, password replacement and other issues.
② Rule management: the administrator needs to modify the rules in time to meet users' needs. Since articles are continuously replaced during labeling, the labeling rules also need to be modified accordingly.
③ Annotation data management: as users annotate articles, the progress and counts differ across articles, so the generated data needs to be updated in real time during annotation.

3.3 System Business Process Analysis
The functional diagram of this system is shown in Fig. 1.
Fig. 1. System function diagram
Ordinary users first register and log in, and then enter the system. Users can first check the announcements, which state specific rules for labeling articles, such as the labeling rules themselves and matters needing attention when using the system. Users can then view templates that have already been labeled and take the self-test; once the test is passed, they can label official articles. Users can also check their points in the points exchange and trade points for rewards. The system administrator maintains the system's notices and announcements, issuing announcements to all users to explain the latest rules, and notices to individual users about their recent activity and labeling progress. Administrators can also manage registered users, allocate system resources, assign different permissions to different users, and track each user's operations to prevent rule violations.
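The user flow above (self-test gate, formal labeling, scoring, points exchange) can be sketched as a small Java class, Java being the system's stated implementation language. The class name, pass threshold, and score-to-points conversion rate below are hypothetical values chosen for illustration, not the system's actual implementation.

```java
import java.util.HashMap;
import java.util.Map;

// Minimal sketch of the annotate-after-test workflow: a user must pass
// the self-test before formal labeling, and each accepted labeling task
// converts its score into redeemable points.
public class AnnotationWorkflow {
    static final int PASS_THRESHOLD = 60;   // assumed pass mark
    static final int POINTS_PER_SCORE = 1;  // assumed conversion rate

    private final Map<String, Integer> testScores = new HashMap<>();
    private final Map<String, Integer> points = new HashMap<>();

    // record a self-test result; users may retake the test repeatedly
    public void recordTest(String user, int score) {
        testScores.merge(user, score, Integer::max); // keep the best attempt
    }

    public boolean mayAnnotate(String user) {
        return testScores.getOrDefault(user, 0) >= PASS_THRESHOLD;
    }

    // score a completed labeling task and credit points; rejects users
    // who have not yet passed the self-test
    public int submitLabeling(String user, int taskScore) {
        if (!mayAnnotate(user)) {
            throw new IllegalStateException(user + " has not passed the test");
        }
        points.merge(user, taskScore * POINTS_PER_SCORE, Integer::sum);
        return points.get(user);
    }

    public int pointsOf(String user) {
        return points.getOrDefault(user, 0);
    }
}
```

In the real system this state would live in the MySQL database rather than in memory; the sketch only captures the gating and accumulation logic described in the text.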
4 The Operation Effect

The system uses MySQL as the database platform, Eclipse as the development tool, and Java as the programming language. It has basically realized the expected functions, with a friendly interface and easy operation that allows users to work conveniently. Figure 2 shows the login page and article annotation page of the system.
Fig. 2. Article Annotation
5 Conclusion

Manual corpus annotation is slow, and its quality is hard to guarantee. Based on this situation, we designed this system and selected terminology corpora in the fishery field for identification and labeling. The experiment has been continuously improved to make the system more complete and achieve the expected results. The system basically realizes the specific functions from user management to corpus labeling to corpus release, but the reward redemption module still has deficiencies. We will continue to study corpus labeling in the fishery field to make corpus labeling faster and more standardized, and to contribute to the future development of artificial intelligence.

Acknowledgments. This research was supported by the 2019 Dalian Ocean University Students' Innovation and Entrepreneurship Training Program (No. 201910158047), the Doctoral Startup Fund of Dalian Ocean University (No. HDYJ201818) and the Doctoral Scientific Research Foundation of Liaoning Province (2019-BS-031).
References
1. Lixia, W., Xiaoyong, H.: Chinese text keyword extraction algorithm based on semantics. Comput. Eng. (01) (2012)
2. Jingyue, L., Peifeng, L., Qiaoming, Z.: An improved keyword extraction method for TF-IDF web pages. Comput. Appl. Softw. (05) (2011)
3. Jinke, L.: Research on metaphor recognition based on machine learning algorithms. Nanjing Normal University (2011)
4. Jinfeng, Y., Yi, G., Bin, H., Chunyan, Q., Qiubin, Y., Yaxin, L., Yongjie, Z.: Construction of Chinese EMR named entity and entity relationship corpus. J. Softw. (11) (2016)
5. Chengjin, L., Chong, G., Wenjie, Z.: Research on content validity of research hotspots of common word analysis and recognition: based on natural language processing. Books Inform. (1), 8–14 (2018)
6. Baishui: What is NLP. Chin. Constr. (2), 37–37 (1992)
7. Zhijun, W.: Computer analysis and understanding of natural language. Lang. Res. 1, 1–14 (1985)
8. Shuhua, X.: Application research of JSP and ASP technology in web design. Netw. Secur. Appl. 11, 48 (2018)
9. Zhanyu, X.: Java programming language and practical application of computer software development. Electron. Technol. Softw. Eng. 09, 44 (2019)
10. Ziyun, D.: JSP Network Programming from Foundation to Practice. Electronic Industry Press, Beijing (2009)
11. Cook, T.: JSP from Introduction to Mastery. Electronic Industry Press, Beijing (2003)
Research on Data Ethics Based on Big Data Technology Business Application Hongzhen Lin(&) and Qian Xu School of Hengda Management, Wuhan University of Science and Technology, Wuhan, China [email protected]
Abstract. The aim of this study is to explore the data ethics of big data technology in business applications. Using the method of in-depth literature research, we analyze the data ethics problems of big data technology. The ethical problems brought by big data technology mainly include information security, privacy disclosure and the data gap. The main causes are: from the perspective of data subjects, their awareness of their rights is not strong; from the perspective of data-related enterprises, data collection driven by profit infringes other people's data rights; at the macro level, relevant legislation lags and supervision is weak. The conclusion is that effective measures must be taken to strengthen the construction of data ethics to promote the sustainable development of data business applications: the government should improve relevant legislation and data supervision; the collection and application of data should follow the principles of informed consent and harmlessness; technological innovation and control should be strengthened; and the self-discipline of the big data industry should be reinforced. Keywords: Big data technology · Data ethics · Data gap · Information security
1 Introduction Nowadays, the application of data technology is really convenient for People’s lives, and data technology improves the efficiency of social governance. Big data application is the process of mining useful information for users from big data. The use of big data generally has the characteristics of large data scale, multiple data types and rapid generation, and provides users with auxiliary decision-making by means of data analysis, and finally realizes the value of big data [1]. Big data technology has great significance in dealing with highly complex events, providing better products and services for people, and finding the best solution from many solutions, etc., because it has mastered a large amount of data information. However, there are also some ethical problems in big data technology, such as infringing the privacy of citizens, information security loopholes, and wanton trading of data and information. When people use the Internet, they have to log in to the device, which will generate all kinds of data on the device, and these data will be collected and stored by some companies, which will form a huge data flow. With these data flows, companies can analyze personal identity of © The Editor(s) (if applicable) and The Author(s), under exclusive license to Springer Nature Singapore Pte Ltd. 2021 M. Atiquzzaman et al. (Eds.): BDCPS 2020, AISC 1303, pp. 1391–1396, 2021. https://doi.org/10.1007/978-981-33-4572-0_201
1392
H. Lin and Q. Xu
users and their behavior preferences, so as to push various commodities to them; the business value of the data flow is thus shown. Big data is the development trend of information science and technology in the new era. Big data is large in scale and developing rapidly, which has a positive impact on many industries at this stage. Therefore, big data platforms are still developing and improving, and their scope of application is also expanding. Although big data can bring great benefits to social development and economic construction, a series of problems still arise as the amount of big data increases [2]. Using big data technology, we can discover new knowledge and develop new capabilities. Big data has a profound impact on people's life and work and has changed people's way of thinking. However, cold, sober thinking is also needed in the big data era, especially a correct understanding of and response to the ethical issues brought by big data technology, in order to better pursue benefits and avoid harm. It is necessary to analyze the factors that affect data security in the big data environment [3].
2 Ethical Problems Brought by Big Data Technology

2.1 Information Security Issues
In today's life and work, the network is widely used, and individuals actively or passively generate large amounts of data. For generated data, the rights to store, use and delete it, and above all the right to know, should belong to the individual. But in real life, for a variety of reasons, the data security of data subjects is difficult to guarantee. For example, some information technology is itself unqualified and has large security vulnerabilities; in such cases, it is likely to lead to problems such as the disclosure or forgery of user data, thus affecting users' information security. In addition, unclear rights and responsibilities in the use of big data, the anomie and misconduct of some data enterprises, the neglected social responsibility of relevant information products, and the use of data information for high-tech criminal activities are also ethical issues derived from information security.

2.2 Personal Privacy Security Issues
With the wide use of network technology, the era of big data has cut into many aspects of production and life. All walks of life use big data technology to follow the trend of the times; otherwise, they may be eliminated by competitors in the same industry due to backward technology. Big data technology has been introduced into many industries and fields, and its value is increasingly apparent; all walks of life have recognized the important value of data [4]. The most important value brought by big data is based on large-scale data collection, yet it is difficult to accurately determine whether these data are permitted to be collected, and information collected without permission usually involves personal privacy. The biggest ethical crisis brought by big data is the violation of personal privacy [5]. Big data, especially today's precise information data, is based on recording a large amount of data about a single person. It is a continuous and secret
Research on Data Ethics Based on Big Data Technology Business Application
1393
monitoring record of people by big data, a kind of storage of information and data, and the public unconsciously accepts this monitoring in their lives, without great antipathy or aversion to it. Traditional devices and tools cannot meet the application requirements of big data in capability and analysis technology. The security risk of cloud-based big data comes from the unauthorized operation of files and content by cloud computing service providers and remote data applications [6].

2.3 New Data Gap Issues
Due to differences in people's technical levels in using the network and in their economic conditions, some people cannot even own a computer, so different people differ in their ability to obtain and use big data resources in the era of big data. The gap in the ability to use data intensifies group differences and social contradictions. When social attributes are collected, a person's identity is defined in the form of data. However, there is still a big gap between personal information after data processing and the people themselves; distortion exists in the process of a "data identity" reflecting a person. The formation of data alienation will make us regard data as a kind of belief, trusting data blindly and unconditionally. People are likely to gradually lose the ability of self-judgment and let "data decisions" make choices for them [7]. In the face of massive data, people are easily surrounded by the illusory freedom created by data. Human beings should let technology return to serving people and let people return to their own lives; returning to life itself is fundamental to getting rid of technological alienation [8]. There are many reasons for differences in digital utilization in the era of big data: at the technical level, mainly the unbalanced development of big data software and hardware; at the legal level, the lag of ethical systems and norms; at the individual level, mainly differences in individual ability. Together, these are important reasons for the reality of the digital divide [9].
3 The Analysis of the Causes of Data Ethics Problems

3.1 Data Users Not Complying with the Principle of Informed Consent
Big data application is an important method in data analysis: mining effective information from big data, providing users with auxiliary decision-making, and realizing the value of big data [10]. The low moral quality of some individuals or enterprises and the blind pursuit of self-interest lead to the abuse of big data technology. In the long run, enterprises that pursue only their own temporary interests in applying new technology violate morality and law and infringe the legitimate rights and interests of users [11]. The interest orientation of the market economy makes people pay attention only to the immediate and local interests brought by big data. Because of the characteristics of big data technology, the realization of interests often involves data producers, even at the cost of infringing their personal interests. Both individuals
and enterprises start from their own interests and do not hesitate to damage the interests of society and others, which may lead to extreme egoism. Driven by interests, enterprises take various means to collect and obtain personal information, ignore privacy rights, and wantonly buy, sell and share personal privacy information [12].

3.2 The Anonymity of Data Easily Breeds Ethical Problems
The anonymity and concealment of data make lawbreakers more rampant, causing frequent ethical problems and even crimes. Compared with traditional cloud services, mobile cloud services have more advantages; for example, mobile Internet technology makes terminal applications flexible and able to obtain various data easily. At the same time, it cannot be ignored that this brings more problems, such as privacy leakage and information security [13]. In fact, both the network society and the real society are subject to legal and moral constraints. For example, when a real-name system is not implemented on a network platform, some netizens think they can hide their identity; where legal and moral awareness is weak, they are more likely to indulge their words and deeds and do harm to others and society.
4 Countermeasures to Strengthen the Construction of Data Ethics

4.1 Improving Relevant Legislation and Supervision of Data
Government legislation should target the ethical problems caused by big data technology and establish ethical principles suited to practical needs. The first is the principle of the unity of rights and responsibilities: whoever uses the data should be held responsible. The second is the principle of harmlessness, which requires that people not harm the interests of others when using big data technology. The fundamental requirement for the development and application of big data technology is to remain people-oriented and serve the well-being of the whole human society. It is necessary to establish or improve relevant legislation so that data collection, use, storage and other links have corresponding legal norms to restrict data behavior [14].

4.2 Following the Principles of Informed Consent and Harmlessness
In the application of big data technology, the principle of harmlessness should be upheld. With the increasing popularity of cloud computing technology, public cloud storage services have been widely used and their user base is growing rapidly; users are increasingly concerned about the privacy, integrity and controllable sharing of their data stored in the cloud [15]. In the era of big data, any group should consider the impact on others when carrying out mining activities. As pioneers and practitioners of big data application, Internet enterprises should establish an effective and comprehensive self-discipline mechanism and adhere to the
principle of self-discipline without harming the interests of others. Secondly, in the application of big data technology, the principle of balancing the interests of all parties should be upheld; the development of science and technology should aim to achieve equality and democracy. Attention should also be paid to the characteristics of data sources and the privacy security risks of wearable devices [16].

4.3 Strengthening Technological Innovation and Control
There are many solutions to the ethical problems brought about by big data technology. The technological path is to promote technological progress: solving privacy protection and information security through the design of the technology itself. The state should introduce policies to encourage eliminating the adverse effects of big data technology through technological progress. For example, for personal identity and sensitive information, technical means such as upgraded data encryption and authentication protection can be used. This requires developers of data technology to incorporate user privacy protection and information security into technical development procedures and technical standards.
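As a concrete illustration of building privacy protection into the technology itself, the sketch below pseudonymizes a sensitive identifier with a one-way SHA-256 hash using only the Java standard library. This is an assumed example of such a technical means, not a measure proposed by the paper.

```java
import java.nio.charset.StandardCharsets;
import java.security.MessageDigest;
import java.security.NoSuchAlgorithmException;

// Minimal sketch of "privacy protection by design": replacing a sensitive
// identifier with a one-way SHA-256 pseudonym before storage or analysis,
// so analysts work with stable pseudonyms rather than raw identities.
public class Pseudonymizer {
    public static String pseudonym(String sensitiveId) {
        try {
            MessageDigest md = MessageDigest.getInstance("SHA-256");
            byte[] digest = md.digest(sensitiveId.getBytes(StandardCharsets.UTF_8));
            StringBuilder hex = new StringBuilder();
            for (byte b : digest) {
                hex.append(String.format("%02x", b)); // render digest as hex
            }
            return hex.toString();
        } catch (NoSuchAlgorithmException e) {
            throw new IllegalStateException("SHA-256 unavailable", e);
        }
    }

    public static void main(String[] args) {
        // the same input always maps to the same pseudonym,
        // but the original identifier cannot be recovered from it
        System.out.println(Pseudonymizer.pseudonym("user-12345"));
    }
}
```

A production system would add salting or keyed hashing to resist dictionary attacks; the point here is only that privacy measures can be embedded in development procedures themselves.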
5 Conclusion

If we want the long-term development of data technology, we should build it on the heights of human nature and morality. The development of data not only embodies a new civilization but also requires constant alertness to the ethical challenges it brings. When people and data coexist and support each other, it is necessary to uphold the primary position of people in the application of big data. Human beings are making progress in social cycles; to develop data technology, we need to respect both ourselves and the technology, and the harmonious development of data, people and technology will usher in a new world of data civilization. At this stage, the maintenance of data technology and the rights of data subjects should not rest only on technical and legal provisions; the guiding role of data ethics should be brought into full play, and the public and relevant departments should be called on to supervise and guide data well, so as to deal with crises in the development of data technology at any time. As data technology develops, data users and actual data controllers should not only use the technology in accordance with the relevant laws and regulations on big data, but also pay attention to data rights, follow basic data ethics, maintain the legitimacy of data subjects' rights, truly realize the sound development of data technology, and strive to build a transparent, open and shared data era. Generally speaking, a good sense of data ethics established in a good social atmosphere will gradually become the consciousness of the behavioral subject; in the face of new ethical issues, subjects will set their own codes of conduct and consciously abide by data ethics, so as to effectively reduce information ethics crimes.
Acknowledgments. This research was supported by the National Social Science Foundation of China, project name: "Entrepreneurial law education research based on risk control in science and engineering universities" (Grant No. BIA170192).
References
1. Yin, Z., Min, C., Xiaofei, L.: Current situation and prospect of big data application. Comput. Res. Dev. 50(S2), 216–233 (2013). (in Chinese)
2. Fang, Y.N.: Current situation and prospect of big data application. Digit. World 12, 199–199 (2017). (in Chinese)
3. Hongyang, L.: Data security research in the big data environment. Electron. Technol. Softw. Eng. 20, 250–250 (2013). (in Chinese)
4. Jie, B.: Big data application. Inform. Secur. Commun. Secur. 10, 17–20 (2013). (in Chinese)
5. Xue, F., Hongbing, C.: Research on big data privacy ethics. Dialectics Nat. Res. 2, 44–48 (2015). (in Chinese)
6. Zhi, Y., Jing, Z.: Big data application mode and security risk analysis. Comput. Modernization 4(8), 58–61 (2014). (in Chinese)
7. Weiping, S.: On the new alienation of human beings in the information age. Philos. Res. 7, 113–119 (2010). (in Chinese)
8. Tianyong, Z.: The trend of technological alienation and modernity: Heidegger and Baudrillard's perspective. Philos. Sci. Technol. Res. 2015(2), 63–67 (2017). (in Chinese)
9. Shiwei, C.: Ethical governance of the digital divide in the era of big data. Innovation 03, 15–22 (2018). (in Chinese)
10. Ruiling, L.: Discussion on the application status and development trend of big data. Sci. Technol. Outlook 15, 16 (2017). (in Chinese)
11. Kailin, T., Shiyue, L.: Research on big data privacy ethics. Ethics Res. 06, 102–106 (2016). (in Chinese)
12. Sheng, G.: Reflection on the ethical issues of big data technology. Sci. Technol. Commun. 10, 4–7 (2018). (in Chinese)
13. Ruixuan, L., Xinhua, D., Xiwu, G.: Data security and privacy protection of mobile cloud services. J. Commun. 12, 162–170 (2013). (in Chinese)
14. Zhenchao, S., Jie, H.: Ethical anomie, causes and countermeasures of network information in the context of big data. Theor. Reform 02, 43–43 (2015). (in Chinese)
15. Hui, L., Wenhai, S., Fenghua, L.: Overview of data security and privacy protection technologies for public cloud storage services. Comput. Res. Dev. 7, 17–29 (2014). (in Chinese)
16. Qiang, L., Tong, L., Yang, Y.: Overview of data security and privacy protection technologies for wearable devices. Comput. Res. Dev. 1, 14–29 (2018). (in Chinese)
Integrating Machine Translation with Human Translation in the Age of Artificial Intelligence: Challenges and Opportunities Kai Jiang1 and Xi Lu2(&) 1
College of Foreign Languages, Huazhong Agricultural University, Wuhan, China 2 Department of Common Required Courses, Hubei Institute of Fine Arts, Wuhan, China [email protected]
Abstract. Driven by the rise of new technologies such as big data, the Internet of Things, and voice recognition, artificial intelligence has accelerated its progress in deep learning and human-machine collaboration. The advance of artificial intelligence has brought significant changes to society. In the field of translation, the rapid emergence of Internet translation tools poses potential challenges to professional translators; meanwhile, it also brings unprecedented opportunities to the industry. This paper first reviews and summarizes the development of machine translation in China. Then, it analyzes the potential impact of artificial intelligence on the translation industry, and the competition and complementarity between machine translation and human translation. Finally, the paper discusses the integration of machine translation with human translation and envisions the future prospects of translation technology, hoping to provide a reference for translators and translation researchers. Keywords: Machine translation · Translation technology · Human translation · Artificial intelligence
1 Introduction

Translation today comprises human translation and machine translation, and machine translation is further divided into online and offline translation [8]. Machine translation serves as an auxiliary tool for human translation: although it cannot take the place of the human translator, it can effectively reduce the translator's workload, improve work efficiency, and optimize translation quality [2]. Human translation currently takes such forms as written translation, simultaneous interpretation, and consecutive interpretation. Supported by Internet technology, machine translation has achieved impressive results. However, owing to the limitations of artificial intelligence, machine translation needs to be integrated with human translation in order to achieve better results. Studying the relationship between machine translation and human translation therefore has practical significance. © The Editor(s) (if applicable) and The Author(s), under exclusive license to Springer Nature Singapore Pte Ltd. 2021 M. Atiquzzaman et al. (Eds.): BDCPS 2020, AISC 1303, pp. 1397–1405, 2021. https://doi.org/10.1007/978-981-33-4572-0_202
2 Development of Machine Translation in China

Machine translation uses computer and information technology to translate the language of one country into a target language [5]. Its operating principles draw on many fields, such as information theory, linguistics, and mathematical logic. With the rapid development of the Internet, geographic restrictions have largely been removed, and machine translation now plays an increasingly visible role in political, economic, and cultural life. With the rise of artificial intelligence, machine translation has been widely used in people's daily lives. Today it no longer processes only text, but has gradually become more intelligent, handling, for instance, online image recognition and online voice recognition and translation. Artificial intelligence translation originated in the 1930s, when computers were not yet widely used, and it developed alongside advances in computer science and technology. The first researcher to propose applying machine technology to translation was the French engineer G. B. Artsouni [1]. At that time, word-for-word translation was achieved through a simple mechanical device, that is, vocabulary conversion performed through a dictionary. With the advent of computers, Warren Weaver proposed in 1947 to use them to aid translation work, but for a long time afterwards development remained relatively slow. Since the 21st century, artificial intelligence based on machine learning has thrived, and many well-known Internet companies, including Google, NetEase, Baidu, and Microsoft, are committed to developing artificial intelligence translation [3]. In 2011, the International Business Machines Corporation (IBM) launched the Watson system, which made tremendous progress and achievements in machine translation.
Subsequently, in 2018, Microsoft announced that its artificial intelligence translation had reached the level of professional human translators. In the same year, Google launched Google Assistant, which can converse with humans and help complete simple tasks, showing that the language comprehension ability of artificial intelligence has made a great breakthrough. In recent years, China has made great efforts to develop artificial intelligence translation, including the establishment of research labs, information platforms, language processing resources, and information networks. At present, artificial intelligence translation is widely used in daily life. With the rapid development of translation technology, a large number of translation software products have emerged, with versions updated continuously. AI translation frequently appears at international conferences, and the teaching of translation technology is becoming increasingly popular in universities. These developments have an important impact on translation technology research, and the number of studies is increasing accordingly. Chen Shanwei divided the development of translation technology into four periods: 1967–1983 was the budding period, 1984–1992 a period of steady growth, 1993–2003 a period of rapid growth, and 2004 to the present a period of globalized development (The Future of Translation Technology 1–15). Searching Google Scholar and the China National Knowledge Infrastructure (CNKI) with the terms "translation technology", "machine translation (MT)" and "computer-aided translation (CAT)", the fluctuation in the number of research papers broadly matches the phases of translation technology identified by Chen Shanwei. From 1967 to 1983, there were 86 Chinese and English papers; from 1984 to 1992, 254 papers; from 1993 to 2003, 1267 papers; and from 2004 to the present, 12692 papers in total.
3 The Potential Impact of Artificial Intelligence on Translation

3.1 Positive Impacts
1) Supplementing human translation to improve translation efficiency. On most foreign-language occasions, such as daily communication, there is no need for exceedingly precise and professional simultaneous interpretation. Moreover, the limited supply of simultaneous interpreters cannot meet the current huge market demand. Artificial intelligence translation tools can serve as a backup workforce to fill the vacancies in daily work and life. AI translation tools offer instant translation at a speed no lower than that of human translators; while saving translators' time, they provide consumers with convenient services and fully improve the efficiency of translation work.

2) Multiple languages to fully meet diverse needs. At present, there is a limited number of translators who have a solid command of multiple languages, and even fewer professionals who can work with less commonly used languages. Such high-end talent is scarce, and its salary is higher than that of general translators. Online machine translation tools can adequately meet the needs of multilingual translation tasks in daily communication; their supply is ample, and long-term use is comparatively inexpensive.

3.2 Negative Impacts
1) Falling demand for translators, causing unemployment concerns. Advances in artificial intelligence have spawned machines that substitute for both manual and mental work, triggering widespread unemployment concerns in society, and such concerns are especially common in the translation industry. AI translation tools are used on a large scale in scenarios such as foreign language learning, overseas travel, and social networking apps. Their advantages in performance and price strongly affect the demand for human translators, significantly reducing employment opportunities and salaries. Chen Qi, an associate professor at Shanghai International Studies University, described this trend, predicting that machine translation will come to dominate the low-end translation market, pushing most human translators toward the high-end market (medical materials, literature, legal documents and other professional texts), where the requirements for accuracy are stricter.

2) Rapid updates of translation machines, exerting significant pressure on professional translators. The advance of artificial intelligence and deep learning technology enables translation tools to be updated rapidly. In the Internet era, new vocabulary is constantly being coined and hot topics spread fast. For humans, absorbing these new ideas requires a gradual process that conforms to how the brain works, with mastery coming only through in-depth study and repeated practice. An artificial intelligence system, by contrast, can ingest information in real time for deep learning and apply new material within a short time. In the speed of learning new knowledge, the human translator is clearly at a disadvantage. In addition, deep learning has accelerated performance upgrades of AI translation tools, improving translation quality and adding supported languages; comparable improvement would require long-term extensive learning and knowledge accumulation from a human.
4 The Relationship Between Machine Translation and Human Translation

4.1 Competition
The respective advantages of machine translation and human translation can be seen both from their literal meaning and from actual practice. Compared with human translation, machine translation is easy to use, with options for multiple languages available at any time [4]. As long as there is an Internet connection, anything can be translated, which brings great convenience and efficiency. Compared with online translation, however, human translation is more flexible and targeted and has a lower error rate. Take simultaneous interpretation as an example: it is a highly complicated interlanguage conversion activity strictly constrained by time. The interpreter must listen to the source-language speech while activating existing knowledge within a short time, quickly predicting, understanding, memorizing and converting the source-language information, and meanwhile organizing, correcting and expressing the target-language information to produce the translation. For simultaneous interpreters, solid language skills, mature conference experience, and extensive knowledge are essential prerequisites, and these cannot be matched by online translation tools. In recent years, machine translation has developed rapidly with the support of various algorithms, and some scholars fear that it may replace human translation in the future. However, the facts show that many technical difficulties remain to be overcome, such as chaotic word order, inaccurate word sense, and isolated syntactic analysis. It is therefore impossible for machine translation to completely replace human translation, although its emergence is bound to displace some low-end translators. Machine translation can handle the basic work for the translator, leaving the translator time and energy to focus on difficult translation tasks.
Therefore, we must embrace machine translation with a positive attitude, and maintain the spirit of continuous learning.
4.2 Complementarity
1) Complementarity between simple and complicated translation. As machine translation can adequately handle simple sentences, paragraphs, and passages, it liberates the translator from simple translation work, leaving time and energy for complex tasks such as medical materials, literature, legal documents and other professional documents in the high-end market.

2) Complementarity between previous and present translation work. Machine translation has certain learning abilities, for example the self-learning ability of neural networks, and can memorize the high-frequency expressions, technical terms and language segments in its corpus. The next time a similar sentence is encountered, it can be converted instantly and accurately. This effectively avoids problems such as inconsistent terminology and expression caused by multiple participants in a translation project. Besides saving human resources, time can be better devoted to discussing new terms and key expressions.

3) Complementarity between different translation scenarios. Because it is fast and convenient, machine translation is better suited to scenarios with relatively loose requirements for quality and accuracy, such as daily communication, web browsing, and information searching [7]. Translation software currently on the market has largely shifted from written translation to real-time voice translation, which is welcomed by the majority of users. Professional scenarios, such as high-level meetings, negotiations, and litigation, require professional translators, whether for written translation or interpretation. For example, in Sun Yang's international arbitration case, the interpreter was not competent and even had to be replaced in court.
Another example: during the Two Sessions in China, both the translation of documents and simultaneous interpretation require professionals with rich experience in translating Party and government documents.
5 Future Trends of Translation Technology

Driven by artificial intelligence, the division of translation work has become increasingly specialized, its social demand continues to grow, and translation technology has developed considerably, showing the following trends:

1) Specialized. The specialization of translation technology is mainly reflected in technical tools and domain resources [9]. In a language service company, each translation project has its professional processes and procedures, and different processes and procedures require specialized translation technology. For example, before translating, technical tools such as word count, optical character recognition, file conversion, and file segmentation are used to process files. In addition, the field of language services continues to expand, and the demand for language services in medical care, technology, law, manufacturing, education, insurance, finance and other sectors continues to increase. In translation practice, machine translation, corpora, terminology databases and specialized auxiliary translation tools are now commonly used.

2) Intelligent. Advances in technology have enabled translation tools to evolve gradually from machine translation and computer-aided translation that could only process words into more intelligent software that is highly automated, compatible with multiple systems, and able to handle multi-format, multi-type translation tasks [9]. Traditional auxiliary translation software and terminology management software are becoming more compatible and can be used on Windows, macOS and even Android. More text formats are accepted, and word processing and annotation functions are becoming increasingly comprehensive. With the rise of voice recognition, visual recognition, speech synthesis, augmented reality (AR) and other technologies, machine translation is becoming more intelligent. Various machine translation products are now available to consumers, ranging from translation apps to translation pens and even translation bracelets.

3) Integrated. The 2016 SDL Translation Technology Insight executive summary shows that 80% of companies and organizations prefer integrated software (SDL Trados 16). The more time it takes to manually integrate different translation tools, the lower the work efficiency; integrating multiple functions into one translation package can therefore greatly improve translation efficiency. At present, assisted translation technology is developing toward function integration and project-process integration. CAT tools have evolved from the initial stage of basic fuzzy matching and editing to automatic text input, spell checking, batch quality assurance, and even instant messaging, project segmentation, project packaging, financial statistics, process monitoring, and language asset management.
These functions go beyond translation itself and integrate the technologies required in all aspects of the translation process (technical writing, terminology management, document management, content management, product release, etc.), which greatly improves the efficiency of translation project participants.

4) Cloud-based. Driven by big data, cloud computing, artificial intelligence and other technologies, translation technology has moved from stand-alone versions to networked collaboration and the cloud, and shifted from the single PC platform to cloud-connected intelligent terminals [9]. Relying on cloud computing, a customized machine translation system can be built quickly, providing cross-system, cross-device, installation-free Internet service access. The main purpose of cloudization is to give the various roles involved in a translation project access to the required data sources, including databases, content libraries, mail, websites, and file systems. Cloudization satisfies management's need for unified access to and use of information, adapts to the rapid growth of massive data production and querying, and solves problems such as multi-person collaboration, remote access, online editing, and mobile office work. There are already many types of cloud-based translation technology, including online corpora, translation memories, terminology databases, and project management systems.
5) Platform-based. As translation technologies continue to integrate and cloud technology develops, multi-functional translation platforms have taken shape. A variety of collaborative translation platforms have been developed, for instance LingoTek, MemSource Cloud, SDL GroupShare, and XTM Cloud. These platforms standardize, scale up and integrate the translation process, and enable global resource allocation, collaboration and sharing. Platform-based translation collaboration has become the main business model of today's language service providers (LSPs): more companies are combining internal and external resources to provide more language support in a short time with the help of the global community. In addition, translation crowdsourcing platforms (such as Flitto, Trycan, and cdfanfan) and comprehensive translation platforms (such as 99YEE Translation, uTransHub, and UEdrive) achieve real-time interaction through location-based services and online chat. Relying on the network and big data, these platforms allocate translation projects by automatically matching customer needs with suitable translators, enabling online-to-online and online-to-offline (mainly interpretation) project transactions.
6 Machine Translation Post-editing (MTPE)

As more companies go global, the machine translation market has shown steady growth in recent years, and its output value is expected to reach 1.5 billion dollars by 2024 [10]. Growing demand from industries such as e-commerce, electronic products, tourism, and hospitality has prompted translation companies and language service providers to adopt fast and cost-effective translation methods. Machine translation post-editing (MTPE) has gradually become a feasible solution that bridges the gap between the speed of machine translation and the quality of human translation. MTPE requires human proofreading, editing and revision of machine output to guarantee its quality; when using machine translation (MT), post-editing is therefore indispensable. Before revising the machine output, the translator needs to compare it with the source text and check its accuracy. MTPE is thus the optimal option for large-scale, low-cost projects whose delivery time is tight but whose required quality is higher than raw machine translation. Machine translation can predict results based on stored information, thereby improving work efficiency. However, despite the rapid development of artificial intelligence, machine translation still requires human supervision to ensure that certain parameters of translation quality are met: faithfulness to the source text (accuracy), text readability, and style. Automation will be the future of machine translation, but there is still a long way to go. The role of the translator will change with the emergence of new technologies such as machine translation. Experts are continuously devoted to adjusting algorithms to improve the accuracy of machine translation, and companies going global need professionals with creativity and extraordinary language proficiency.
Academia believes that in the future translators are likely to become post-editors of machine output. Translators should therefore learn and master the skills of editing and proofreading to enhance translation quality, style and readability. With new technologies and trends, the domain of human service in the translation industry will extend to roles such as content writer, analyst, and marketing researcher. Some people think that translation apps and websites threaten the jobs of professional translators, but in fact these services have created more opportunities. The United States Bureau of Labor Statistics predicts that by 2026 the employment of interpreters and translators will grow by 17%. As long as translators adapt to new technologies and are willing to learn new skills to meet the market's needs, the translation industry will continue to prosper.
7 Conclusion

In the era of artificial intelligence, many industries have developed rapidly, bringing great convenience to people's daily life and work. Machine translation, with its high efficiency and convenience, saves time, material resources and manpower. However, owing to its limitations, the quality of machine translation in some fields cannot be guaranteed. Human translation can ensure higher quality, but its efficiency is comparatively lower, so it cannot cope alone with the explosive growth of the translation market. If the two are combined complementarily, translation work can achieve both high quality and high efficiency. This paper has discussed the future relationship between machine translation and human translation based on the current development of machine translation and its constraints. Machine translation emerged to better serve human beings; its relationship with human translation is not a contradiction or a zero-sum game but a mutually reinforcing complementarity. Machine translation relieves the burden on human translators and brings convenience to the public, but its capability and role should not be overstated. After all, artificial intelligence is a human invention and ultimately cannot replace the human brain; by the same token, future machine translation cannot completely replace human translation. As science and technology advance, the translation industry is gradually becoming technology-based. Translators and language professionals should improve their skills and learn to use modern technology to better adapt to this trend of the industry.
References 1. Alcina, A.: Translation technologies: scope, tools and resources. Target: Int. J. Transl. Stud. 1, 80–103 (2008) 2. Bowker, L.: Computer-Aided Translation Technology: A Practical Introduction. University of Ottawa Press, Ottawa (2002)
3. Christensen, T.P., et al.: Mapping translation technology research in translation studies. An introduction to the thematic section. HERMES-J. Lang. Commun. Bus. 56, 7–20 (2017) 4. Cui, Q.: Translation technology and tools of localization service. N. Perspect. Transl. Stud. 7, 194–200 (2015) 5. Hutchins, W.J., Somers, H.L.: An Introduction to Machine Translation. Academic Press, London (1992) 6. Ju, R.: Quality management of online human translation platform—take Y platform as an example. Mod. Commun. 22, 238–239 (2019) 7. Pang, Y.: On the relationship between machine translation and human translation—from the perspective of the development of machine translation and computer-aided translation. Popular Sci. 11, 164–165 (2019) 8. Quah, C.K.: Translation and Technology. Palgrave Macmillan, London (2006) 9. Huashu, W., Zhi, L.: A study on translation technology in the age of artificial intelligence: connotation, classification and trends. Foreign Lang. Cultures 4, 86–95 (2020) 10. Bin, X., et al.: Applications of computer-aided translation: an overview. Shandong Foreign Lang. Teach. J. 4, 79–86 (2007)
Error Bounds for Linear Complementarity Problems of S-SDD Matrix
Yan-yan Li(&), Ping Zhou, and Jian-xin Jiang
School of Mathematics, Wenshan University, Wenshan, Yunnan, China [email protected]

Abstract. Firstly, an upper bound on the infinity norm of the inverse of an S-SDD matrix $A$ is given. Secondly, based on this upper bound, combined with the newly constructed S-SDD matrix $\tilde{A} = I - D + DA$, an error bound for the linear complementarity problem of $A$ is obtained.

Keywords: S-SDD matrix · Linear complementarity · Error bound
1 Introduction

The linear complementarity problem (LCP) has applications in mechanics, transportation, economics, finance and control, for example the optimal stopping problem, option pricing, market equilibrium, free boundary problems and elastic contact problems [1–5]. Given $A \in \mathbb{R}^{n,n}$ and $q \in \mathbb{R}^n$, the LCP is to find $x \in \mathbb{R}^n$ such that $x \ge 0$, $Ax + q \ge 0$ and $x^T (Ax + q) = 0$. When $A$ is a P-matrix (a matrix all of whose principal minors are positive), the problem has a unique solution $x^*$, and an error bound for it is easily obtained [6]. In 2006, Chen and Xiang [7] gave the following error bound for the linear complementarity problem of a P-matrix:

$$\|x - x^*\|_\infty \le \max_{d \in [0,1]^n} \|(I - D + DA)^{-1}\|_\infty \, \|r(x)\|_\infty,$$

where $D = \mathrm{diag}(d)$ and $r(x) = \min(x, Ax + q)$ is the natural residual. Concerning $\max_{d \in [0,1]^n} \|(I - D + DA)^{-1}\|_\infty$, in recent years many scholars have derived estimates for subclasses of P-matrices [8–10]. This paper studies the error bound for the linear complementarity problem of S-SDD matrices, which has seldom been studied in the literature.
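The role of the Chen–Xiang bound can be made concrete numerically. The sketch below is ours, not from the paper: it uses an illustrative 2×2 strictly diagonally dominant (hence P-) matrix, the natural residual $r(x)$, and a brute-force grid estimate of the maximum over $d$ (the very quantity that the remainder of the paper bounds analytically).

```python
import numpy as np

# Illustrative LCP: find x >= 0 with Ax + q >= 0 and x . (Ax + q) = 0.
A = np.array([[4., -1], [-1, 4]])   # an SDD (hence P-) matrix, chosen for illustration
q = np.array([-1., -2])

# For this example the unconstrained solve is already feasible, so it solves the LCP.
x_star = np.linalg.solve(A, -q)
assert (x_star >= 0).all() and (A @ x_star + q >= -1e-12).all()

def residual(x):
    """Natural residual r(x) = min(x, Ax + q); it vanishes exactly at the solution."""
    return np.minimum(x, A @ x + q)

# Brute-force estimate of max_{d in [0,1]^n} ||(I - D + DA)^{-1}||_inf by grid sampling
# (purely illustrative; the paper's point is to bound this quantity analytically).
grid = np.linspace(0, 1, 21)
C = max(np.linalg.norm(np.linalg.inv(np.eye(2) - np.diag([d1, d2])
                                     + np.diag([d1, d2]) @ A), np.inf)
        for d1 in grid for d2 in grid)

x = x_star + np.array([0.05, -0.03])          # an approximate solution
err = np.linalg.norm(x - x_star, np.inf)
bound = C * np.linalg.norm(residual(x), np.inf)
print(err <= bound + 1e-12)                   # True: the error is controlled by the residual
```

Note that the grid maximum here is only a sampled lower estimate of the true maximum; the theorems below replace it with a closed-form upper bound.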
2 Preparatory Knowledge

Let $N = \{1, 2, \ldots, n\}$, $\bar{S} = N \setminus S$, and for $A = (a_{ij}) \in \mathbb{C}^{n,n}$ write

$$r_i(A) = \sum_{j \ne i,\, j \in N} |a_{ij}|, \qquad r_i^S(A) = \sum_{j \in S \setminus \{i\}} |a_{ij}|, \qquad r_i^{\bar{S}}(A) = \sum_{j \in \bar{S} \setminus \{i\}} |a_{ij}|,$$

so that $r_i(A) = r_i^S(A) + r_i^{\bar{S}}(A)$.

A matrix $A = (a_{ij}) \in \mathbb{C}^{n,n}$ is called an S-SDD matrix if

(1) $|a_{ii}| > r_i^S(A)$ for all $i \in S$;
(2) $(|a_{ii}| - r_i^S(A))(|a_{jj}| - r_j^{\bar{S}}(A)) > r_i^{\bar{S}}(A)\, r_j^S(A)$ for all $i \in S$, $j \in \bar{S}$.

Lemma 1 [9]. Let $A = (a_{ij}) \in \mathbb{C}^{n,n}$ ($n \ge 2$) be an S-SDD matrix. Then there is a positive diagonal matrix $W = \mathrm{diag}(w_1, w_2, \ldots, w_n)$ such that $AW$ is an SDD matrix, where

$$w_i = \begin{cases} \gamma, & i \in S, \\ 1, & i \in \bar{S}, \end{cases} \qquad \gamma \in I_S = \left( \max_{i \in S} \frac{r_i^{\bar{S}}(A)}{|a_{ii}| - r_i^S(A)},\; \min_{j \in \bar{S}} \frac{|a_{jj}| - r_j^{\bar{S}}(A)}{r_j^S(A)} \right).$$

By convention, if $r_j^S(A) = 0$, then $\frac{|a_{jj}| - r_j^{\bar{S}}(A)}{r_j^S(A)} = +\infty$.

Lemma 2 [10]. If $A$ is an H-matrix, then $|A^{-1}| \le \langle A \rangle^{-1}$, where $\langle A \rangle$ is the comparison matrix of $A$, and $A \ge B$ means $a_{ij} \ge b_{ij}$ for all $i, j \in N$. By Lemma 1, every S-SDD matrix is an H-matrix.

Lemma 3 [8]. Let $\gamma > 0$ and $\eta \ge 0$. Then for all $x \in [0, 1]$,

$$\frac{1}{1 - x + \gamma x} \le \frac{1}{\min\{\gamma, 1\}}, \qquad \frac{\eta x}{1 - x + \gamma x} \le \frac{\eta}{\gamma}.$$

© The Editor(s) (if applicable) and The Author(s), under exclusive license to Springer Nature Singapore Pte Ltd. 2021 M. Atiquzzaman et al. (Eds.): BDCPS 2020, AISC 1303, pp. 1406–1412, 2021. https://doi.org/10.1007/978-981-33-4572-0_203
3 Main Results

First, an upper bound on the infinity norm of the inverse of an S-SDD matrix is given. For $i \in S$ and $j \in \bar{S}$, write

$$v_{ij}^S(A) = \frac{|a_{jj}| - r_j^{\bar{S}}(A) + r_i^{\bar{S}}(A)}{(|a_{ii}| - r_i^S(A))(|a_{jj}| - r_j^{\bar{S}}(A)) - r_i^{\bar{S}}(A)\, r_j^S(A)}, \qquad v_{ji}^S(A) = \frac{|a_{ii}| - r_i^S(A) + r_j^S(A)}{(|a_{ii}| - r_i^S(A))(|a_{jj}| - r_j^{\bar{S}}(A)) - r_i^{\bar{S}}(A)\, r_j^S(A)}.$$

Theorem 1. Let $A = (a_{ij}) \in \mathbb{C}^{n,n}$ be an S-SDD matrix. Then

$$\|A^{-1}\|_\infty \le \max\left\{ \max_{i \in S,\, j \in \bar{S}} v_{ij}^S(A),\; \max_{i \in S,\, j \in \bar{S}} v_{ji}^S(A) \right\}. \tag{3}$$

Proof. By Lemma 2, $|A^{-1}| \le \langle A \rangle^{-1}$ and $\langle A \rangle^{-1} \ge 0$. Let $x_j = \sum_{h=1}^{n} \left( \langle A \rangle^{-1} \right)_{jh}$ for $j \in N$ and $x = (x_1, \ldots, x_n)^T$, so that $\langle A \rangle x = e = (1, \ldots, 1)^T$ and $\|A^{-1}\|_\infty \le \|\langle A \rangle^{-1}\|_\infty = \max_{i \in N} x_i$. Define $x_{i_0} = \max_{i \in S} x_i$ and $x_{j_0} = \max_{j \in \bar{S}} x_j$. Since $\langle A \rangle x = e$, the $i_0$-th equation gives

$$|a_{i_0 i_0}| x_{i_0} - \sum_{k \in S,\, k \ne i_0} |a_{i_0 k}| x_k - \sum_{h \in \bar{S}} |a_{i_0 h}| x_h = 1,$$

and hence

$$1 \ge |a_{i_0 i_0}| x_{i_0} - \sum_{k \in S,\, k \ne i_0} |a_{i_0 k}| x_{i_0} - \sum_{h \in \bar{S}} |a_{i_0 h}| x_{j_0} = \left( |a_{i_0 i_0}| - r_{i_0}^S(A) \right) x_{i_0} - r_{i_0}^{\bar{S}}(A)\, x_{j_0},$$

namely

$$x_{i_0} \le \frac{1 + r_{i_0}^{\bar{S}}(A)\, x_{j_0}}{|a_{i_0 i_0}| - r_{i_0}^S(A)}. \tag{1}$$

On the other hand, the $j_0$-th equation gives

$$|a_{j_0 j_0}| x_{j_0} - \sum_{k \in S} |a_{j_0 k}| x_k - \sum_{h \in \bar{S},\, h \ne j_0} |a_{j_0 h}| x_h = 1,$$

and hence

$$1 \ge \left( |a_{j_0 j_0}| - r_{j_0}^{\bar{S}}(A) \right) x_{j_0} - r_{j_0}^S(A)\, x_{i_0},$$

namely

$$x_{j_0} \le \frac{1 + r_{j_0}^S(A)\, x_{i_0}}{|a_{j_0 j_0}| - r_{j_0}^{\bar{S}}(A)}. \tag{2}$$

Substituting (2) into (1), and (1) into (2), yields

$$x_{i_0} \le \frac{|a_{j_0 j_0}| - r_{j_0}^{\bar{S}}(A) + r_{i_0}^{\bar{S}}(A)}{\left( |a_{i_0 i_0}| - r_{i_0}^S(A) \right) \left( |a_{j_0 j_0}| - r_{j_0}^{\bar{S}}(A) \right) - r_{i_0}^{\bar{S}}(A)\, r_{j_0}^S(A)} = v_{i_0 j_0}^S(A),$$

$$x_{j_0} \le \frac{|a_{i_0 i_0}| - r_{i_0}^S(A) + r_{j_0}^S(A)}{\left( |a_{i_0 i_0}| - r_{i_0}^S(A) \right) \left( |a_{j_0 j_0}| - r_{j_0}^{\bar{S}}(A) \right) - r_{i_0}^{\bar{S}}(A)\, r_{j_0}^S(A)} = v_{j_0 i_0}^S(A),$$

from which (3) follows.
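Theorem 1 can be checked numerically. The sketch below is ours; `ssdd_inverse_bound` implements the quantities $v_{ij}^S(A)$ and $v_{ji}^S(A)$ as reconstructed above and compares the resulting bound with the true infinity norm of the inverse.

```python
import numpy as np

def ssdd_inverse_bound(A, S):
    """Upper bound of Theorem 1 on ||A^{-1}||_inf for an S-SDD matrix A."""
    n = A.shape[0]
    Sbar = [j for j in range(n) if j not in S]
    d = np.abs(np.diag(A))
    rS = np.array([sum(abs(A[i, k]) for k in S if k != i) for i in range(n)])
    rSb = np.array([sum(abs(A[i, k]) for k in Sbar if k != i) for i in range(n)])

    def v(i, j):       # v^S_{ij}(A): bounds the components indexed by S
        den = (d[i] - rS[i]) * (d[j] - rSb[j]) - rSb[i] * rS[j]
        return (d[j] - rSb[j] + rSb[i]) / den

    def v_rev(i, j):   # v^S_{ji}(A): bounds the components indexed by S-bar
        den = (d[i] - rS[i]) * (d[j] - rSb[j]) - rSb[i] * rS[j]
        return (d[i] - rS[i] + rS[j]) / den

    return max(max(v(i, j) for i in S for j in Sbar),
               max(v_rev(i, j) for i in S for j in Sbar))

A2 = np.array([[3., 2, 2], [2, 6, 2], [2, 2, 6]])
bound = ssdd_inverse_bound(A2, S=[0])
actual = np.linalg.norm(np.linalg.inv(A2), np.inf)
print(actual <= bound)   # True: the bound dominates the true norm
```

For this matrix the bound evaluates to 2, while the true norm is 0.75; the gap is the price of an estimate that uses only the matrix entries.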
Next, the error bound for the linear complementarity problem of an S-SDD matrix is derived.

Theorem 2. Let $A = (a_{ij}) \in \mathbb{C}^{n,n}$ be an S-SDD matrix with $S \subseteq N$, $S \ne \emptyset$ and $a_{ii} > 0$, and let $\tilde{A} = I - D + DA = (\tilde{a}_{ij})$ with $D = \mathrm{diag}(d_i)$, $0 \le d_i \le 1$. Then for all $i, j \in N$,

$$r_i^S(\tilde{A}) = d_i\, r_i^S(A), \qquad r_j^{\bar{S}}(\tilde{A}) = d_j\, r_j^{\bar{S}}(A), \tag{4}$$

and for all $i \in S$, $j \in \bar{S}$,

$$\left( \tilde{a}_{ii} - r_i^S(\tilde{A}) \right) \left( \tilde{a}_{jj} - r_j^{\bar{S}}(\tilde{A}) \right) > r_i^{\bar{S}}(\tilde{A})\, r_j^S(\tilde{A}). \tag{5}$$

Proof. By the definition of $\tilde{A}$,

$$\tilde{a}_{ij} = \begin{cases} 1 - d_i + d_i a_{ii}, & i = j, \\ d_i a_{ij}, & i \ne j, \end{cases}$$

so (4) holds immediately. If $d_i = 0$ or $d_j = 0$, the right-hand side of (5) vanishes while the left-hand side is positive, so (5) holds trivially; assume therefore $d_i, d_j > 0$. For $i \in S$,

$$\tilde{a}_{ii} - r_i^S(\tilde{A}) = 1 - d_i + d_i a_{ii} - d_i r_i^S(A) \ge d_i \left( a_{ii} - r_i^S(A) \right),$$

and for $j \in \bar{S}$,

$$\tilde{a}_{jj} - r_j^{\bar{S}}(\tilde{A}) = 1 - d_j + d_j a_{jj} - d_j r_j^{\bar{S}}(A) \ge d_j \left( a_{jj} - r_j^{\bar{S}}(A) \right).$$

Hence

$$\left( \tilde{a}_{ii} - r_i^S(\tilde{A}) \right) \left( \tilde{a}_{jj} - r_j^{\bar{S}}(\tilde{A}) \right) \ge d_i d_j \left( a_{ii} - r_i^S(A) \right) \left( a_{jj} - r_j^{\bar{S}}(A) \right) > d_i r_i^{\bar{S}}(A)\, d_j r_j^S(A) = r_i^{\bar{S}}(\tilde{A})\, r_j^S(\tilde{A}),$$

which is (5).

Theorem 3. Let $A$ and $\tilde{A}$ be as in Theorem 2. Then $\tilde{A}$ is an S-SDD matrix.

Proof. For $i \in S$, if $d_i = 0$ then $\tilde{a}_{ii} = 1 > 0 = r_i^S(\tilde{A})$; otherwise

$$\tilde{a}_{ii} = 1 - d_i + d_i a_{ii} \ge d_i a_{ii} > d_i r_i^S(A) = r_i^S(\tilde{A}).$$

Similarly, $\tilde{a}_{jj} > r_j^{\bar{S}}(\tilde{A})$ for $j \in \bar{S}$. Together with (5), $\tilde{A}$ is an S-SDD matrix.
Theorem 4. Let $A = (a_{ij}) \in \mathbb{C}^{n,n}$ be an S-SDD matrix with $S \subseteq N$, $S \ne \emptyset$ and $a_{ii} > 0$, and let $\tilde{A} = I - D + DA$ with $D = \mathrm{diag}(d_i)$, $0 \le d_i \le 1$. Then

$$\max_{d \in [0,1]^n} \left\| (I - D + DA)^{-1} \right\|_\infty \le \max_{i \in S,\, j \in \bar{S}} \max\left\{ \xi_{ij}^S(A),\; \xi_{ji}^S(A) \right\}, \tag{6}$$

where, writing $\alpha_i = a_{ii} - r_i^S(A)$ and $\beta_j = a_{jj} - r_j^{\bar{S}}(A)$,

$$\xi_{ij}^S(A) = \frac{\beta_j \left[ \alpha_i \min\{\beta_j, 1\} + \min\{\alpha_i, 1\}\, r_i^{\bar{S}}(A) \right]}{\min\{\alpha_i, 1\} \min\{\beta_j, 1\} \left[ \alpha_i \beta_j - r_i^{\bar{S}}(A)\, r_j^S(A) \right]}, \qquad \xi_{ji}^S(A) = \frac{\alpha_i \left[ \beta_j \min\{\alpha_i, 1\} + \min\{\beta_j, 1\}\, r_j^S(A) \right]}{\min\{\alpha_i, 1\} \min\{\beta_j, 1\} \left[ \alpha_i \beta_j - r_i^{\bar{S}}(A)\, r_j^S(A) \right]}.$$

Proof. By Theorem 3, $\tilde{A}$ is an S-SDD matrix, so Theorem 1 applies to $\tilde{A}$. For $i \in S$, $j \in \bar{S}$, using (4),

$$v_{ij}^S(\tilde{A}) = \frac{1 - d_j + d_j a_{jj} - d_j r_j^{\bar{S}}(A) + d_i r_i^{\bar{S}}(A)}{\left( 1 - d_i + d_i a_{ii} - d_i r_i^S(A) \right) \left( 1 - d_j + d_j a_{jj} - d_j r_j^{\bar{S}}(A) \right) - d_i r_i^{\bar{S}}(A)\, d_j r_j^S(A)}.$$

Write $A_i = 1 - d_i + d_i \alpha_i$ and $B_j = 1 - d_j + d_j \beta_j$. By Lemma 3,

$$\frac{d_i r_i^{\bar{S}}(A)}{A_i} \le \frac{r_i^{\bar{S}}(A)}{\alpha_i}, \qquad \frac{d_j r_j^S(A)}{B_j} \le \frac{r_j^S(A)}{\beta_j},$$

so the denominator satisfies

$$A_i B_j - d_i r_i^{\bar{S}}(A)\, d_j r_j^S(A) \ge A_i B_j \left( 1 - \frac{r_i^{\bar{S}}(A)\, r_j^S(A)}{\alpha_i \beta_j} \right) = A_i B_j \cdot \frac{\alpha_i \beta_j - r_i^{\bar{S}}(A)\, r_j^S(A)}{\alpha_i \beta_j} > 0.$$

Therefore, applying Lemma 3 again,

$$v_{ij}^S(\tilde{A}) \le \left( \frac{1}{A_i} + \frac{d_i r_i^{\bar{S}}(A)}{A_i B_j} \right) \frac{\alpha_i \beta_j}{\alpha_i \beta_j - r_i^{\bar{S}}(A)\, r_j^S(A)} \le \left( \frac{1}{\min\{\alpha_i, 1\}} + \frac{r_i^{\bar{S}}(A)}{\alpha_i \min\{\beta_j, 1\}} \right) \frac{\alpha_i \beta_j}{\alpha_i \beta_j - r_i^{\bar{S}}(A)\, r_j^S(A)} = \xi_{ij}^S(A).$$

In the same way, $v_{ji}^S(\tilde{A}) \le \xi_{ji}^S(A)$. The bound (6) then follows by applying Theorem 1 to $\tilde{A}$ and taking the maximum over $d \in [0,1]^n$.
4 Numerical Example

Let

$$A_1 = \begin{pmatrix} 3 & 1 & 0 & 0 & 1 & 0 \\ 1 & 5 & 1 & 1 & 0 & 1 \\ 2 & 1 & 7 & 1 & 2 & 0 \\ 0.5 & 0.5 & 0.25 & 3 & 1 & 1 \\ 0.4 & 0.2 & 0.5 & 2 & 6 & 3 \\ 0.33 & 0.4 & 0.4 & 1 & 1 & 3 \end{pmatrix}$$

and take $S = \{1, 2, 3\}$, $\bar{S} = \{4, 5, 6\}$. It is easy to verify that $A_1$ is an S-SDD matrix with $I_S = \left( \frac{3}{4}, \frac{4}{5} \right)$. Applying Theorem 4, we get $\max_{d \in [0,1]^n} \|(I - D + DA_1)^{-1}\|_\infty \le 21.56$.

Let

$$A_2 = \begin{pmatrix} 3 & 2 & 2 \\ 2 & 6 & 2 \\ 2 & 2 & 6 \end{pmatrix}$$

and take $S = \{1\}$, $\bar{S} = \{2, 3\}$. It is easy to verify that $A_2$ is an S-SDD matrix with $I_S = \left( \frac{4}{3}, 2 \right)$. Applying Theorem 4, we get $\max_{d \in [0,1]^n} \|(I - D + DA_2)^{-1}\|_\infty \le 2.81$.

This paper has focused on the linear complementarity problem of S-SDD matrices and obtained an estimate of simple form that depends only on the matrix entries; its validity is illustrated by the numerical examples.

Acknowledgements. This work was supported by the Scientific Research Project of Yunnan Province 2019J0910 (Research on some estimations of the upper bound of the infinite norm for inverses of several subclasses of H-matrices).
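The upper-bound property asserted by Theorem 4 can also be checked by sampling. The sketch below is ours: `theorem4_bound` implements the quantities $\xi_{ij}^S$ and $\xi_{ji}^S$ as we reconstruct them, so the constant it produces may be looser than the values reported above; the check verifies only that the computed quantity dominates the sampled norms. Since only absolute values of off-diagonal entries enter the bound, the all-positive variant of $A_2$ is used.

```python
import numpy as np

def theorem4_bound(A, S):
    """Max over i in S, j in S-bar of xi^S_{ij}(A) and xi^S_{ji}(A) (Theorem 4 constants)."""
    n = A.shape[0]
    Sbar = [j for j in range(n) if j not in S]
    d = np.diag(A)
    rS = np.array([sum(abs(A[i, k]) for k in S if k != i) for i in range(n)])
    rSb = np.array([sum(abs(A[i, k]) for k in Sbar if k != i) for i in range(n)])
    vals = []
    for i in S:
        for j in Sbar:
            a, b = d[i] - rS[i], d[j] - rSb[j]        # alpha_i and beta_j
            c, e = rSb[i], rS[j]                      # r_i^{S-bar}(A) and r_j^{S}(A)
            den = min(a, 1) * min(b, 1) * (a * b - c * e)
            vals.append(b * (a * min(b, 1) + min(a, 1) * c) / den)   # xi^S_{ij}
            vals.append(a * (b * min(a, 1) + min(b, 1) * e) / den)   # xi^S_{ji}
    return max(vals)

A2 = np.array([[3., 2, 2], [2, 6, 2], [2, 2, 6]])
S = [0]
bound = theorem4_bound(A2, S)

# Sample D = diag(d) with d in [0,1]^3 and check the bound dominates every sampled norm.
rng = np.random.default_rng(0)
worst = max(np.linalg.norm(np.linalg.inv(np.eye(3) - np.diag(dd) + np.diag(dd) @ A2),
                           np.inf)
            for dd in rng.random((500, 3)))
print(worst <= bound)   # True
```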
References
1. Chen, X., Xiang, S.: Perturbation bounds of P-matrix linear complementarity problems. SIAM J. Optim. 18(2), 1250–1265 (2007)
2. Cottle, R.W., Pang, J.-S., Stone, R.E.: The Linear Complementarity Problem. Academic Press, San Diego (1992)
3. Murty, K.G.: Linear Complementarity, Linear and Nonlinear Programming. Heldermann Verlag, Berlin (1998)
4. Li, W., Zheng, H.: Numerical analysis on linear complementarity problems. J. South China Normal Univ. (Natural Science Edition) 47(3), 1–9 (2015). (in Chinese)
5. Berman, A., Plemmons, R.J.: Nonnegative Matrices in the Mathematical Sciences. SIAM, Philadelphia (1994)
6. Peña, J.M.: A class of P-matrices with applications to the localization of the eigenvalues of a real matrix. SIAM J. Matrix Anal. Appl. 22(4), 1027–1037 (2001)
7. Chen, X., Xiang, S.: Computation of error bounds for P-matrix linear complementarity problems. Math. Program. 106(3), 513–525 (2006)
1412
Y. Li et al.
8. Li, C., Li, Y.: Note on error bounds for linear complementarity problems for B-matrices. Appl. Math. Lett. 57, 108–113 (2016)
9. Cvetković, L., Kostić, V., Kovačević, M.: Further results on H-matrices and their Schur complements. Appl. Math. Comput. 198, 506–510 (2008)
10. Horn, R.A., Johnson, C.R.: Topics in Matrix Analysis. Cambridge University Press, Cambridge (1991)
PCA-Based DDoS Attack Detection of SDN Environments

Li-quan Han and Yue Zhang

College of Computer Science and Engineering, Changchun University of Technology, Changchun, People's Republic of China
[email protected]
Abstract. Software-defined networking (SDN), as a new network architecture, has the advantages of separation of control and forwarding, open interfaces, and network virtualization. However, SDN still faces the risk of DDoS attacks, which not only damage the hosts in the SDN network but also have a serious impact on the entire SDN. This paper uses PCA to analyze network traffic and detect DDoS attacks. The experimental results show that PCA-based detection can detect DDoS attacks well.

Keywords: SDN · DDoS · PCA
1 Introduction

According to ONF's definition of SDN [1], Software Defined Networking is a new architecture that supports dynamic and flexible management and has three characteristics: (1) an open and programmable network; (2) separation of the control plane from the network architecture; (3) logically centralized control. At present, there are many solutions to DDoS attacks under SDN. For example, Kazemian et al. [2] and Khurshid et al. [3] achieved real-time checking of SDN policies, Shin, Porras et al. [4, 5, 6] resolved flow-table conflicts, Wang et al. [7] and Garg et al. [8] studied the security of packet payloads in SDN, Hong et al. [9] addressed the poisoning of the network topology, and Dong et al. [10] gave a solution to low-traffic DDoS attacks on SDN. Previous research on DDoS detection in traditional networks [10] has a certain capacity for DDoS attack detection on SDN, but these works have shortcomings. For example, Mousavi et al. [11] use entropy to analyze traffic, and when the number of traffic features becomes large, false positives may be generated. Dong's method [10] only studies low-traffic attacks; it is effective against the low-traffic mode but cannot defend against other types of DDoS attacks. Principal component analysis (PCA) is a very effective feature dimension reduction method. We extract the principal components from the redundant features of the packet traffic data collected in the SDN environment by PCA to analyze whether DDoS traffic exists in the network.
© The Editor(s) (if applicable) and The Author(s), under exclusive license to Springer Nature Singapore Pte Ltd. 2021 M. Atiquzzaman et al. (Eds.): BDCPS 2020, AISC 1303, pp. 1413–1419, 2021. https://doi.org/10.1007/978-981-33-4572-0_204
2 Related Work

2.1 SDN Matching Flow
Packet matching in the flow table consumes limited storage space and computing resources; when a DDoS attack occurs in an SDN, these resources are easily exhausted. According to the OpenFlow switch specification [12], SDN switches rely on flow tables to forward packets. A switch extracts matching fields from a packet based on the packet type and looks for matches in the flow table. A matched packet selects only the flow entry with the highest matching priority, and this entry must be used for forwarding. The counter associated with the selected flow entry is updated, and the instruction set included in the selected entry is applied by the switch.

2.2 PCA in the Traditional Network
PCA maps the measured data to a new set of coordinate axes by a coordinate transformation. These axes (known as principal axes or components) have the property of lying close to the directions in which the data vary most.
3 Model Analysis

3.1 Principal Component Analysis
Suppose we can collect packets in the SDN. An OD pair denotes a pair of nodes and describes the source and destination addresses of packets. When there are k nodes in the network, there are $k^2$ OD pairs (suppose the number of OD flows is p). We can collect network traffic for $w\cdot t$ consecutive seconds, and the time can be split into t segments, each w seconds long. We can adjust t to $t_1$ and w to $w_1$ so that $t\cdot w=t_1\cdot w_1$; in this way a suitable value of t satisfying $t<p$ can be obtained. Let X be the $t\times p$ measurement matrix representing the time series of all OD flows: each column i is the time series of the i-th OD flow, and each row j holds the values of all OD flows in time interval j. For the matrix $X^TX$ we can calculate:

$$X^TX\,v_i=\lambda_iv_i \qquad (1)$$

In formula (1), $\{v_i,\ i=1,\dots,p\}$ are the eigenvectors and $\{\lambda_i,\ i=1,\dots,p\}$ the eigenvalues corresponding to each $v_i$. Finding the first r non-negligible principal components allows the original matrix to be approximated. Anomaly detection depends on the separation of x (the vector given by the i-th row of the matrix X, i.e., all flows in the i-th interval) into normal and abnormal components, called the modeled part and the residual part of x. We decompose a group of measurements of x over a given period:
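Eq. (1) can be made concrete with a small sketch (assuming NumPy; the measurement matrix here is synthetic rather than real OD-flow data):

```python
import numpy as np

rng = np.random.default_rng(1)
t, p = 40, 6                      # t time bins, p OD flows (toy sizes)
X = rng.normal(size=(t, p))       # synthetic measurement matrix

# Eigendecomposition of X^T X (Eq. 1): X^T X v_i = lambda_i v_i
lam, V = np.linalg.eigh(X.T @ X)  # eigh: X^T X is symmetric
order = np.argsort(lam)[::-1]     # sort eigenvalues large -> small
lam, V = lam[order], V[:, order]

# Each eigenvalue equals the energy of X along its principal axis
for i in range(p):
    assert np.isclose(np.linalg.norm(X @ V[:, i])**2, lam[i])
```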
$$x=\hat{x}+\tilde{x} \qquad (2)$$
$\hat{x}$ corresponds to the modeled part and $\tilde{x}$ to the residual part. We arrange the principal components spanning the normal subspace, $(v_1,v_2,\dots,v_r)$, as the columns of a matrix P of size $m\times r$, where r is the number of normal axes (chosen by the subspace selection of Sect. 3.2).
ð3Þ
In formula (3), the matrix C is the linear transformation that projects onto the normal subspace, while $\tilde{C}=I-PP^T$ is the linear transformation that projects onto the abnormal subspace. An abnormal traffic volume causes a large change in $\tilde{x}$. A useful statistic for detecting abnormal changes of $\tilde{x}$ is the squared prediction error (SPE):

$$\mathrm{SPE}\equiv\|\tilde{x}\|^2=\|\tilde{C}x\|^2$$
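The decomposition in Eqs. (2)–(3) and the SPE statistic can be sketched as follows (assuming NumPy; P holds the first r eigenvectors of a synthetic traffic matrix):

```python
import numpy as np

rng = np.random.default_rng(2)
t, p, r = 40, 6, 2
X = rng.normal(size=(t, p))

# Normal subspace: first r eigenvectors of X^T X
_, V = np.linalg.eigh(X.T @ X)
P = V[:, ::-1][:, :r]             # columns v_1..v_r (largest eigenvalues)

C = P @ P.T                       # projection onto normal subspace (Eq. 3)
C_tilde = np.eye(p) - C           # projection onto abnormal subspace

x = X[0]                          # measurements in one time interval
x_hat, x_tilde = C @ x, C_tilde @ x
assert np.allclose(x, x_hat + x_tilde)   # Eq. (2)
spe = np.linalg.norm(x_tilde)**2         # SPE = ||x_tilde||^2
```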
When $\mathrm{SPE}\le\delta_\alpha^2$ is satisfied, the traffic is regarded as normal; here $\delta_\alpha^2$ denotes the threshold at confidence level $1-\alpha$. Jackson and Mudholkar developed a statistical test for the residuals, called the Q-statistic. The formula is as follows:

$$\delta_\alpha^2=\phi_1\left[\frac{c_\alpha\sqrt{2\phi_2h_0^2}}{\phi_1}+1+\frac{\phi_2h_0(h_0-1)}{\phi_1^2}\right]^{1/h_0} \qquad (4)$$

where

$$h_0=1-\frac{2\phi_1\phi_3}{3\phi_2^2},\qquad \phi_i=\sum_{j=r+1}^{m}\lambda_j^i,\quad i=1,2,3 \qquad (5)$$
Here $\lambda_j$ is the variance of the projection of the data onto the j-th principal component, $\|Xv_j\|^2$, and $c_\alpha$ is the $1-\alpha$ percentile of the standard normal distribution. Calculating the principal components of the matrix X is equivalent to solving the symmetric eigenvalue problem for the matrix $X^TX$, which represents the covariance structure of the network data. The rows of X serve as points in Euclidean space, so we have a data set of t points in $\mathbb{R}^p$. The principal component $v_i$ is the i-th eigenvector obtained by decomposition of $X^TX$:

$$X^TX\,v_i=\lambda_iv_i \qquad (6)$$
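Eqs. (4)–(5) translate directly into code; a sketch (using the standard library's normal percentile, with illustrative eigenvalues):

```python
import numpy as np
from statistics import NormalDist

def q_threshold(eigvals, r, alpha=0.01):
    """Jackson-Mudholkar Q-statistic threshold, Eqs. (4)-(5).

    eigvals: eigenvalues of X^T X sorted large -> small
    r:       number of principal components in the normal subspace
    """
    resid = np.asarray(eigvals, dtype=float)[r:]   # lambda_{r+1}..lambda_m
    phi1, phi2, phi3 = (np.sum(resid**i) for i in (1, 2, 3))
    h0 = 1.0 - 2.0 * phi1 * phi3 / (3.0 * phi2**2)
    c_alpha = NormalDist().inv_cdf(1.0 - alpha)    # 1-alpha percentile
    inner = (c_alpha * np.sqrt(2.0 * phi2 * h0**2) / phi1
             + 1.0
             + phi2 * h0 * (h0 - 1.0) / phi1**2)
    return phi1 * inner**(1.0 / h0)

print(q_threshold([10.0, 5.0, 1.0, 0.5, 0.2], r=2, alpha=0.01))
```

Traffic intervals whose SPE exceeds the returned $\delta_\alpha^2$ are flagged as anomalous.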
$\lambda_i$ is the eigenvalue corresponding to the principal component $v_i$. Since $X^TX$ is symmetric positive semidefinite, the eigenvectors are orthogonal and the corresponding eigenvalues are real and nonnegative. The eigenvalues, arranged from large to small ($\lambda_1\ge\lambda_2\ge\dots\ge\lambda_p$), have unit-norm eigenvectors. Using the Rayleigh quotient of $X^TX$, one can show that each eigenvector captures the maximum residual energy. The k-th principal component $v_k$ can be written as:

$$v_k=\arg\max_{\|v\|=1}\left\|\left(X-\sum_{i=1}^{k-1}Xv_iv_i^T\right)v\right\| \qquad (7)$$
Computing the set of principal components $\{v_i\}_{i=1}^p$ is the same as computing the eigenvectors of $X^TX$. The transformed data can be examined in the principal-component space. The evolution of principal axis i over time is captured by $Xv_i$, normalized to unit length by dividing by $\sigma_i=\sqrt{\lambda_i}$. Therefore, principal axis i is:

$$u_i=\frac{Xv_i}{\sigma_i},\qquad i=1,\dots,p \qquad (8)$$

Formula (8) indicates that all OD pairs, weighted by $v_i$, generate one-dimensional transformed data. $u_i$ is an orthogonal vector of size t that represents the i-th strongest temporal variation trend shared by the OD pairs; the set $\{u_i\}_{i=1}^p$ describes how the OD pairs change with time and is called the set of characteristic flows (eigenflows) of X. The principal components $\{v_i\}_{i=1}^p$ can be arranged as a matrix V of size $p\times p$; similarly, we can form a matrix U of size $t\times p$ whose i-th column is $u_i$. Then V, U, and $\sigma_i$ express each OD flow $X_i$ as:

$$X_i=\sigma_i\,U\,(V^T)_i,\qquad i=1,\dots,p \qquad (9)$$
The scalars $\sigma_i$ associated with the principal components are called the singular values, and $\|Xv_i\|^2=v_i^TX^TXv_i=\lambda_iv_i^Tv_i=\lambda_i$.

3.2 Choice of Subspace
When only r of the singular values are non-negligible, the matrix X lies (approximately) in an r-dimensional subspace of $\mathbb{R}^p$. We approximate the original matrix as:

$$X'=\sum_{i=1}^{r}\sigma_iu_iv_i^T \qquad (10)$$
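Eqs. (8)–(10) amount to a truncated singular value decomposition; a minimal sketch (assuming NumPy, with a synthetic near-low-rank traffic matrix):

```python
import numpy as np

rng = np.random.default_rng(3)
t, p, r = 40, 6, 2
# Synthetic low-rank traffic plus small noise
X = (rng.normal(size=(t, r)) @ rng.normal(size=(r, p))
     + 0.01 * rng.normal(size=(t, p)))

U, s, Vt = np.linalg.svd(X, full_matrices=False)
# U[:, i] ~ eigenflow u_i (Eq. 8), s[i] ~ sigma_i, Vt[i] ~ v_i^T

# Rank-r approximation (Eq. 10): X' = sum_{i<=r} sigma_i u_i v_i^T
X_approx = U[:, :r] * s[:r] @ Vt[:r]
rel_err = np.linalg.norm(X - X_approx) / np.linalg.norm(X)
print(rel_err)  # small, since X is nearly rank r
```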
4 Experiments and Results

4.1 Experimental Design

We used Mininet to build a ring-structured SDN for testing, including 3 switches and 11 hosts directly connected to the switches. Therefore, an OD pair can be expressed as $(o,d)$, $o=1,2,\dots,11$, $d=1,2,\dots,11$, $o\neq d$. We used Python Scapy to generate virtual network traffic, and the topology is shown in Fig. 1. We assume that Host 10 is the victim server and Host 3, Host 6, and Host 9 are three zombie computers.
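With 11 hosts and the constraint $o\neq d$, this yields $11\times 10=110$ OD pairs; a quick sketch:

```python
# Enumerate OD pairs for the 11-host test topology: (o, d) with o != d
hosts = range(1, 12)
od_pairs = [(o, d) for o in hosts for d in hosts if o != d]
print(len(od_pairs))  # 110
```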
Fig. 1. Experimental topology
4.2 PCA Test Results Analysis
We analyzed the flow data at the switches with PCA, as shown in Fig. 2. Under normal traffic, the whole network is stable and the principal axis (red line) tends to be level. When a DDoS attack occurs, a large number of packets all point to the same host, biasing the principal axis toward this part of the traffic. As shown in Fig. 3, we collected traffic in the network, and statistical analysis shows that PCA detection can quickly find DDoS traffic after a DDoS attack is launched.
Fig. 2. Principal axis
Fig. 3. Test statistics
5 Conclusion

In this paper, we used PCA to detect DDoS attacks in an SDN environment. Owing to time limitations, some aspects of this work still need to be extended; for example, DDoS flow detection could be combined with SDN traceback to locate the attack source.
References
1. Software-Defined Networking (SDN) Definition. https://www.opennetworking.org/sdn-definition/
2. Kazemian, P., Chang, M., Zeng, H., et al.: Real time network policy checking using header space analysis. In: Presented as part of the 10th {USENIX} Symposium on Networked Systems Design and Implementation ({NSDI} 13), pp. 99–111 (2013)
3. Khurshid, A., Zou, X., Zhou, W., et al.: VeriFlow: verifying network-wide invariants in real time. In: Presented as part of the 10th {USENIX} Symposium on Networked Systems Design and Implementation ({NSDI} 13), pp. 15–27 (2013)
4. Porras, P., Shin, S., Yegneswaran, V., et al.: A security enforcement kernel for OpenFlow networks. In: Proceedings of the First Workshop on Hot Topics in Software Defined Networks, pp. 121–126 (2012)
5. Shirali-Shahreza, S., Ganjali, Y.: ReWiFlow: restricted wildcard OpenFlow rules. ACM SIGCOMM Comput. Commun. Rev. 45(5), 29–35 (2015)
6. Yorozu, Y., Hirano, M., Oka, K., Tagawa, Y.: Electron spectroscopy studies on magneto-optical media and plastic substrate interface. IEEE Transl. J. Magn. Japan 2, 740–741 (1987)
7. Wang, M., Zhou, H., Chen, J., et al.: An approach for protecting the OpenFlow switch from the saturation attack. In: 2015 4th National Conference on Electrical, Electronics and Computer Engineering. Atlantis Press (2015)
8. Garg, G., Garg, R.: Detecting anomalies efficiently in SDN using adaptive mechanism. In: 2015 Fifth International Conference on Advanced Computing & Communication Technologies, pp. 367–370. IEEE (2015)
9. Hong, S., Xu, L., Wang, H., et al.: Poisoning network visibility in software-defined networks: new attacks and countermeasures. In: NDSS, vol. 15, pp. 8–11 (2015)
10. Dong, P., Du, X., Zhang, H., et al.: A detection method for a novel DDoS attack against SDN controllers by vast new low-traffic flows. In: 2016 IEEE International Conference on Communications (ICC), pp. 1–6. IEEE (2016)
11. Mousavi, S.M., St-Hilaire, M.: Early detection of DDoS attacks against SDN controllers. In: 2015 International Conference on Computing, Networking and Communications (ICNC), pp. 77–81. IEEE (2015)
12. OpenFlow Switch Specification v1.5.1. https://www.opennetworking.org/wp-content/uploads/2014/10/openflow-switch-v1.5.1.pdf. Accessed 14 Jan 2020
Risk Assessment of Sea Navigation of Amphibious Vehicles Based on Bayesian Network

Jian-hua Luo1, Chao Song2, and Yi-zhuo Jia2

1 Military Exercise and Training Center, Army Academy of Armored Forces, Beijing, China
2 Department of Weapons and Control, Army Academy of Armored Forces, Beijing, China
[email protected]
Abstract. This paper analyzes the risks related to amphibious vehicle navigation at sea, evaluates the safety risks, and puts forward feasible solutions to the possible risks. Based on Bayesian probability statistics and network reasoning, a quantitative calculation and analysis of the system risk distribution is performed to obtain maritime navigation safety estimates for the amphibious vehicle. Traffic accidents related to maritime navigation of amphibious vehicles are estimated by Bayesian learning and Bayesian point estimation, an analysis model of the amphibious vehicle maritime traffic system is established, and its relative risk is obtained through a Bayesian network QRA. The results are verified in a QRA case, which provides beneficial suggestions for maritime navigation.

Keywords: Amphibious vehicle · Bayesian network · Quantitative risk assessment · Safety assessment
1 Introduction Amphibious vehicle maritime navigation is an amphibious vehicle operating system composed of information, driver, amphibious vehicle and the environment [1]. It can be analyzed from the aspects of objective and subjective factors as well. With the development of amphibious vehicles, the safety of navigation at sea has become a concern. By analyzing the maritime navigation data of amphibious vehicles in combination with other ship’s travel data and accidents in safety management, it is found that this change is essentially the result of a qualitative analysis conducted on the accident and then a shift to a quantitative analysis performed on the safety [2].
© The Editor(s) (if applicable) and The Author(s), under exclusive license to Springer Nature Singapore Pte Ltd. 2021 M. Atiquzzaman et al. (Eds.): BDCPS 2020, AISC 1303, pp. 1420–1425, 2021. https://doi.org/10.1007/978-981-33-4572-0_205
2 Bayesian Networks

Bayesian networks (BN) are directed acyclic graphs (DAG) employed to model domains that contain uncertainty [3]. The nodes of the graph represent random variables, and each node holds the states of its random variable. In addition, each node contains a conditional probability table (CPT), which encodes a conditional probability function (CPF). A node's CPT comprises the probabilities that the node is in a specific state given the states of its parent nodes. The notation is as follows. Suppose the uncertainty of the network structure is represented by the random variable set X and the discrete variable h, where the hypothesis that the possible network structure is $S^h$ has prior probability $P(S^h)$. As shown in formula (1), $P(S^h|D)$ represents the posterior probability under the condition of a random sample D.

$$P(S^h|D)=\frac{P(S^h,D)}{P(D)}=\frac{P(S^h)P(D|S^h)}{P(D)} \qquad (1)$$
where $P(D)$ is a normalization constant. The naive Bayes learner, usually called the naive Bayes classifier (NB classifier), is one of the most commonly used Bayesian learning methods [4]. In this method, the root is the class variable X and the leaves are the attribute variables A. Given attribute values $a_1,a_2,\dots,a_m$, the probability of each category can be calculated by Eq. (2):

$$P(X|a_1,a_2,\dots,a_m)=\alpha P(X)\prod_{j}^{m}P(a_j|X) \qquad (2)$$
Given the attribute values that describe an instance, the Bayesian method classifies new instances by making the most probable assignment $v_{MAP}$ to the target value. The calculation formula of $v_{MAP}$ is as follows:

$$v_{MAP}=\arg\max_{x_i\in B}P(x_i|a_1,a_2,\dots,a_m) \qquad (3)$$
Then the method employed by the NB classifier is obtained by substitution into Eq. (3):

$$v_{NB}=\arg\max_{x_i\in B}P(x_i)\prod_{j}^{m}P(a_j|x_i) \qquad (4)$$
where $v_{NB}$ represents the target value output by the NB classifier. Formula (4) can equivalently be expressed through logarithms: maximizing the product is the same as minimizing its negative logarithm, which gives Formula (5).
$$v_{NB}=\arg\min_{x_i\in A}\left\{-\ln P(x_i)-\sum_{j=1}^{m}\ln P(a_j|x_i)\right\} \qquad (5)$$
Given the structure of the Bayesian network, an attempt is made to learn its parameters [5]. The probability is estimated from the fraction of the total observed opportunities. To reduce the difficulty, the Bayesian method adopted in this research estimates the probability with the m-estimate defined in Eq. (6):

$$P(a_j|x_i)=\frac{n_c+m\cdot p}{n+m} \qquad (6)$$
Here n is the total number of training examples for which $x_i$ occurs, $n_c$ is the number of those examples for which $a_j$ also occurs, p is a prior estimate of the probability to be measured, and m is a constant representing an equivalent sample size.
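Eqs. (4) and (6) can be combined into a tiny classifier; a sketch in plain Python (the sea-state training data and names here are illustrative assumptions, not the paper's data):

```python
from collections import Counter

def train_nb(examples, m=1.0, p=0.5):
    """examples: list of (attribute tuple, class label) pairs.
    Returns the class priors and an m-estimated P(a_j | class)."""
    class_counts = Counter(label for _, label in examples)
    prior = {c: class_counts[c] / len(examples) for c in class_counts}

    counts = Counter()                      # (class, j, value) occurrences
    for attrs, label in examples:
        for j, a in enumerate(attrs):
            counts[(label, j, a)] += 1

    def p_cond(label, j, a):
        n, n_c = class_counts[label], counts[(label, j, a)]
        return (n_c + m * p) / (n + m)      # m-estimate, Eq. (6)
    return prior, p_cond

def classify(prior, p_cond, attrs):
    """v_NB = argmax_c P(c) * prod_j P(a_j | c), Eq. (4)."""
    def score(c):
        s = prior[c]
        for j, a in enumerate(attrs):
            s *= p_cond(c, j, a)
        return s
    return max(prior, key=score)

data = [(("calm", "clear"), "safe"), (("calm", "fog"), "safe"),
        (("rough", "fog"), "risk"), (("rough", "clear"), "risk")]
prior, p_cond = train_nb(data)
print(classify(prior, p_cond, ("rough", "fog")))  # "risk"
```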
3 Risk Assessment of Amphibious Vehicles at Sea

3.1 Bayesian Assessment Method for Maritime Risk of Amphibious Vehicles

Maritime traffic accidents of amphibious vehicles are random events, and amphibious vehicle maritime traffic flow accords with the binomial distribution. The numerical characteristics of the probability distribution, including the mean, variance, and deviation, can describe the statistical law of accidents. Based on the accident statistics, the distribution function of accident samples in the maritime navigation system of the amphibious vehicle satisfies Eq. (7) when the statistical intervals or samples of the accidents are sufficient:

$$P(X=k)=C_n^k\,\theta^k(1-\theta)^{n-k},\qquad k=0,1,2,\dots,n \qquad (7)$$
The parameter $\theta$ has the prior distribution of Eq. (8):

$$\pi(\theta)=\frac{1}{\beta(a,b)}\,\theta^{a-1}(1-\theta)^{b-1} \qquad (8)$$
In addition, in the navigation system of the amphibious vehicles, the posterior distribution of the accidents satisfies Eq. (9):

$$\pi(\theta\,|\,K)=\frac{1}{\beta(a+k,\ n+b-k)}\,\theta^{a+k-1}(1-\theta)^{n+b-k-1} \qquad (9)$$
This equation describes the probability of k accidents in n amphibious vehicle activities. The sample mean and variance of $\theta$ satisfy Eq. (10):

$$\bar{\theta}=\frac{1}{m}\sum_{i=1}^{m}\theta_i,\qquad S_\theta^2=\frac{1}{m-1}\sum_{i=1}^{m}\left(\theta_i-\bar{\theta}\right)^2 \qquad (10)$$
At the same time, the relevant parameters conform to Eq. (11):

$$\hat{\theta}_E=\frac{a+k}{a+b+n},\qquad \hat{\theta}_{MD}=\frac{a+k-1}{a+b+n-2},\qquad \hat{a}=\bar{\theta}\left(\frac{(1-\bar{\theta})\bar{\theta}}{S_\theta^2}-1\right),\qquad \hat{b}=(1-\bar{\theta})\left(\frac{(1-\bar{\theta})\bar{\theta}}{S_\theta^2}-1\right) \qquad (11)$$
3.2 Probability Estimation of Amphibious Vehicle Navigation Accidents
Since the amount of amphibious vehicle accident data is small, and an amphibious vehicle sailing at sea is similar to a yacht, yacht accident data are added in this article to enrich the amphibious vehicle data. Table 1 shows amphibious vehicle and yacht sailing accidents in recent years [8].

Table 1. Navigation accidents of amphibious vehicles and yachts in recent years

Numbering   n_i     k_i   theta_i
1           25956   39    0.150%
2           26139   42    0.161%
3           25408   36    0.142%
4           24311   18    0.074%
5           24677   18    0.073%
6           21752   9     0.041%
7           18462   10    0.054%
8           19924   10    0.050%
9           20290   11    0.054%
10          20540   8     0.039%
11          19936   14    0.070%
12          19074   11    0.058%
13          20440   10    0.047%

According to the Bayesian statistics above, the accident probability parameters conform to Eq. (12): $\hat{a}=3.2502$, $\hat{b}=4163.6$.
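The estimates $\hat{a}$ and $\hat{b}$ of Eq. (11) can be reproduced from the yearly rates $\theta_i=k_i/n_i$ in Table 1; a sketch (assuming NumPy):

```python
import numpy as np

# Yearly activities n_i and accidents k_i from Table 1
n = np.array([25956, 26139, 25408, 24311, 24677, 21752, 18462,
              19924, 20290, 20540, 19936, 19074, 20440])
k = np.array([39, 42, 36, 18, 18, 9, 10, 10, 11, 8, 14, 11, 10])
theta = k / n

# Eq. (10): sample mean and variance of the accident rates
theta_bar = theta.mean()
s2 = theta.var(ddof=1)

# Eq. (11): moment-matched parameters of the Beta prior
common = (1 - theta_bar) * theta_bar / s2 - 1
a_hat = theta_bar * common
b_hat = (1 - theta_bar) * common
print(a_hat, b_hat)  # near the paper's 3.2502 and 4163.6
```

The Bayesian point estimate of Eq. (11) then follows for a new observation of k accidents in n voyages as $\hat{\theta}_E=(\hat{a}+k)/(\hat{a}+\hat{b}+n)$.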
$$a_{ij}=\frac{1}{n-1}\sum_{k\neq i}\left(x_{ij}-x_{kj}\right),\qquad b_{ij}=\frac{1}{m-1}\sum_{p\neq j}\left(x_{ij}-x_{ip}\right),\qquad i\in N,\ j\in M$$

Here $a_{ij}$ ($i\in N$, $j\in M$) reflects the strength difference between the j-th index of the evaluated object $u_i$ and the other $n-1$ evaluated objects, while $b_{ij}$ reflects the strength difference between the j-th index of the evaluated object $u_i$ and the other $m-1$ indices.
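A small numerical sketch of the pairwise strength differences $a_{ij}$ and $b_{ij}$ (assuming NumPy; the score matrix is toy data, and summing $b_{ij}$ over the other indices $p\neq j$ follows the reading of the text above):

```python
import numpy as np

# x[i, j]: score of evaluated object u_i on index j (toy data)
x = np.array([[80.0, 70.0, 90.0],
              [60.0, 75.0, 85.0]])
n, m = x.shape

# a_ij = (1/(n-1)) * sum_{k != i} (x_ij - x_kj)
a = (n * x - x.sum(axis=0)) / (n - 1)
# b_ij = (1/(m-1)) * sum_{p != j} (x_ij - x_ip)
b = (m * x - x.sum(axis=1, keepdims=True)) / (m - 1)

print(a)  # per-object strength differences, one row per u_i
```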
4 Evaluation Content of Physical Education Teaching

The determination of evaluation content is the most important step, because the content of teaching evaluation directly reflects the educational concept and teaching requirements and guides the direction of teaching reform. Research shows that current physical education evaluation focuses on physical ability and skills, neglects the evaluation of communication and social adaptation, mental and physical health, cooperation, sentiment, and attitude, and is therefore relatively one-sided. Evaluation of physical ability and skill should develop those abilities while also cultivating students' other abilities; however, current teaching evaluation pays too much attention to students' physical ability and skills and ignores their practical ability. It is therefore proposed to change the one-sided content of PE teaching evaluation and promote the all-round development of students.

PE Teaching Evaluation Based on Stochastic Simulation Algorithm
1567

On the neglected evaluation of communication and social adaptation, relevant scholars have pointed out this deficiency and proposed diversifying the evaluation content: sports skills are no longer the only goal of sports learning, and students' social adaptability, cooperative spirit, and physical and mental health have become the main goals of the physical education curriculum. Teachers are required to think and study constantly in order to keep up with the pace of social development and realize the all-round development of students [4].
5 Conclusion

In the past five years, the evaluation of physical education in China has been developing and improving constantly, and many problems and deficiencies have been found in practice. To solve these problems, sports workers are actively engaged in in-depth theoretical and practical research, looking for improvement strategies. It is of great practical significance to promote the all-round development of students, promote the development of physical education, and establish a sound evaluation system. To obtain a more scientific and reasonable objective evaluation system and achieve the expected teaching goal of comprehensive evaluation, physical education workers should continue to explore over the long term.
References
1. Wang, B., Qu, Z.: Theory of Physical Education Teaching. Sichuan Education Press, Chengdu (1988)
2. Wu, Z., Liu, S., Qu, Z.: Modern Teaching Theory and Physical Education. People's Sports Publishing House, Beijing (1993)
3. Chen, G., Li, M.: Research on the integration of comprehensive evaluation methods based on method set. China Manag. Sci. 12(1), 101–105 (2004)
4. Yi, P., Guo, Y.: Multi-attribute decision-making method based on competition horizon optimization under the condition of non-autocracy of weights. J. Control Decis. Mak. 22(11), 1259–1263 (2007)
Development and Design of Intelligent Gymnasium System Based on K-Means Clustering Algorithm Under the Internet of Things

Han Yin, Jian Xu, Zebin Luo, Yiwen Xu, Sisi He, and Tao Xiong

Department of Computer Science, Private Hualian College, Guangzhou 510530, China
[email protected], [email protected], [email protected], [email protected], [email protected], [email protected]
Abstract. With the rapid development of Internet of things technology in today's society, the optimized design and development of intelligent stadium systems has become a major trend. Based on an analysis of the requirements that Internet of things technology places on system design, the K-means clustering algorithm for customer classification is used to design an athlete-centered system and improve the quality of service for athletes; a network-based system architecture model is used to realize information interaction and resource sharing among users with different roles. Taking athletes and service management as the core of the system design and development, the intelligent stadium system is optimized in terms of the system outline structure, system functions, Internet of things architecture, and other aspects. The results show that the development and design of an intelligent stadium system under Internet of things technology helps to realize intelligent management of the stadium and improves its management efficiency, increasing it by 28.0%, with positive application benefits; the positioning deviation of the original system is also corrected. The research results show that, under Internet of things technology, the development and design of an intelligent stadium system can play a positive role and can be applied in practice.

Keywords: Intelligent stadium · System design · Internet of things development · K-means clustering algorithm
© The Editor(s) (if applicable) and The Author(s), under exclusive license to Springer Nature Singapore Pte Ltd. 2021 M. Atiquzzaman et al. (Eds.): BDCPS 2020, AISC 1303, pp. 1568–1573, 2021. https://doi.org/10.1007/978-981-33-4572-0_229

1 Introduction

Sports venues are professional sites for athletes to carry out sports training and competitions, and optimizing their management plays an important role in the development of sport. Basing the development of an intelligent stadium system on Internet of things technology therefore has a positive impact [1]. With the development of Internet of things technology in China, it has been widely applied in many fields. The Internet of things can be defined as a network "connected by things". Through the Internet
of things technology, ubiquitous terminal equipment and intelligent control facilities in society can be integrated into modern communication networks to realize intellectualization, interconnection, and intercommunication between physical and virtual entities. Similarly, Internet of things technology can integrate a variety of network operating software based on cloud computing, connecting the big data held in intranets, private networks, and the Internet across different network environments and extracting useful information from it. The Internet of things is a new generation of information and communication technology [1]. Based on it, the optimized development of an intelligent stadium system can realize efficient management and control of the stadium, providing intelligent management of stadium lighting, monitoring, and other facilities.
2 System Design Requirements Analysis

The intellectualization of stadiums and gymnasiums is an important sign of their modernization. Applying Internet of things technology to optimize the design of an intelligent stadium system should integrate automation, digitalization, and information management, so that intelligent management of the stadium is realized and the designed system meets the needs of the times. The system should save water, electricity, energy, and other resources of the stadium to the maximum extent while providing a healthy, applicable, and efficient environment for its consumers. Moreover, the design decisions should be optimized according to the actual situation of the stadium, considering development cost, Internet of things technology, and technical feasibility; based on the building fabric of the stadium, the functional scheme should cover equipment monitoring, fire alarm, and intelligent security to ensure truly intelligent management. Internet of things technology is used to further improve the "intelligence" level of the stadium and ensure that the designed system meets actual application needs [2].
3 Design and Development of the Intelligent Stadium System Based on Internet of Things Technology

3.1 Overall Structure Design of the System
The design of the intelligent stadium system can be based on modern building technology, Internet of things communication technology, and remote monitoring and control technology, integrating a variety of technologies to develop and
design the intelligent stadium system based on Internet of things technology [2]. Therefore, this design optimizes the construction of the system around three modules: Internet of things technology, the database, and system functions. Through the use of related Internet of things technologies, the level of intelligent management of the stadium can be improved, and the system can be used for intelligent stadium management.

3.2 System Function Design
Under current Internet of things technology, the intelligent stadium system must provide the management functions of a dedicated stadium system to ensure the normal operation of stadium organization and management. Therefore, the system should, based on Internet of things technology, provide intelligent management of sports ticketing information, a virtual stadium, and intelligent storage of sports knowledge. The main functions of the system are shown in Fig. 1 [3].
Fig. 1. Boltzmann machine and restricted Boltzmann machine
Intelligent ticketing management function: the system should include an online ticketing management service and an intelligent ticket-checking service. In practice, combined with Internet of things technology, the system lets users obtain ticket information such as the number of matches, venues, times, and seats in the stadium through an Internet terminal, so that people can reasonably arrange their time at the stadium through ticket queries. RFID and barcodes can also be used to identify electronic tickets, improving data security. In the ticket-checking service, the electronic ticket information of users is identified through RFID readers and barcode scanners, intelligently supporting the security work of the stadium. Virtual venue management function: the intelligent stadium system built on Internet of things technology realizes the wireless extension of stadium management in space and time and makes use of the
Development and Design of Intelligent Gymnasium System
1571
convenience of Internet of Things technology so that people can log in to the system through intelligent terminal devices and obtain stadium information at any time and place, unconstrained by geography. Using Internet of Things perception, processing, and transmission technologies to connect information about athletes, coaches, and venues, the system virtualizes athletes' sports data, making it convenient for coaches to run virtual simulations and draw up long-term training plans; examples include virtual ticket sales, virtual competitions, and virtual training and fitness facilities. Likewise, the system can manage stadium security intelligently based on Internet of Things technology: remote video monitoring, video analysis, and embedded sensors improve the performance of video surveillance and allow the stadium's security information to be monitored and managed effectively [4].
Intelligent sports knowledge reserve: the stadium serves not only sports training and competition but is also a major venue for public fitness. Based on Internet of Things technology, a large knowledge base can therefore be built so that people can query sports-related knowledge intelligently through the system. The system should also support data mining: it can integrate physical-education knowledge, athletes' training parameters, and citizens' exercise experience, mine the useful information in them, and publish it to the intelligent stadium system for on-demand query, giving full play to the system's usability.
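The ticket-checking flow described above reduces to looking up a scanned RFID or barcode ID against the ticketing database and rejecting reuse. A minimal sketch follows; the ticket schema, tag IDs, and function name are invented for illustration and are not from the paper.

```python
# Hypothetical electronic-ticket store keyed by RFID/barcode tag ID
tickets = {
    "RFID-0001": {"match": "M12", "seat": "A-14", "used": False},
    "RFID-0002": {"match": "M12", "seat": "A-15", "used": True},
}

def check_in(tag_id: str, match: str) -> bool:
    """Admit the holder if the tag exists, is for this match and is unused;
    mark it used so the same ticket cannot pass the gate twice."""
    ticket = tickets.get(tag_id)
    if ticket is None or ticket["match"] != match or ticket["used"]:
        return False
    ticket["used"] = True
    return True
```

A real deployment would back this lookup with the system database and the RFID reader's scan events, but the admit-once logic is the same.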
4 Cluster Analysis Algorithm Design

The demand analysis shows that classifying users of an intelligent gymnasium system is the premise and foundation of all athlete services and management, and an effective way to solve this problem is to apply a clustering algorithm from data mining; the K-means algorithm is selected for this application. First, the number of categories is determined by analyzing the objective data of the system's users; the K-means procedure is then run to obtain the final clustering result, which makes the outcome more realistic and improves the classification effect. The main idea is to divide clustering into two steps. In the first step, the initial number of clusters k and the initial center of each cluster are determined from the sample data. In the second step, the K-means algorithm mines the clustering data starting from the k value and centers determined in the first stage, and finally produces the mining results. This hierarchical K-means algorithm removes the dependence on user-supplied initial k and center values and so minimizes the impact of human intervention: the initial values are determined entirely by the objective data, which makes the clustering results more scientific and reasonable. The specific algorithm steps are as follows:
1572
H. Yin et al.
Step 1: process the data-set objects and treat each sample as a cluster. The clustering set of n points can then be expressed as:

$U = \{u_i\}, \quad i = 1, 2, \ldots, n$ (1)
Step 2: select a formula for the distance between samples and then perform two operations on the sample data: compare each distance against the set threshold, merge samples whose distance is below the threshold into one class, and reduce the number of classes accordingly. Euclidean distance is used as the distance formula:

$d_{ij} = \sqrt{(u_{i1} - u_{j1})^2 + (u_{i2} - u_{j2})^2 + \cdots + (u_{ip} - u_{jp})^2}$ (3)
If the threshold is w, then any $u_i$ and $u_j$ satisfying $d_{ij} < w$ are classified into one category [4].
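The two-stage procedure above (threshold merging to obtain k and the initial centers, then ordinary K-means) can be sketched in Python; the sample data, threshold value, and function names are illustrative only.

```python
import math

def euclidean(a, b):
    # Distance between two p-dimensional samples, Eq. (3)
    return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))

def initial_clusters(samples, w):
    """Stage 1: treat every sample as a cluster (Eq. (1)) and merge any
    pair closer than the threshold w, yielding k and the initial centers."""
    centers = [list(s) for s in samples]
    merged = True
    while merged:
        merged = False
        for i in range(len(centers)):
            for j in range(i + 1, len(centers)):
                if euclidean(centers[i], centers[j]) < w:
                    # Merge the pair into its midpoint; the class count drops by one
                    centers[i] = [(a + b) / 2 for a, b in zip(centers[i], centers[j])]
                    del centers[j]
                    merged = True
                    break
            if merged:
                break
    return centers

def kmeans(samples, centers, iterations=20):
    """Stage 2: ordinary K-means seeded with the stage-1 centers."""
    groups = [[] for _ in centers]
    for _ in range(iterations):
        groups = [[] for _ in centers]
        for s in samples:
            nearest = min(range(len(centers)), key=lambda c: euclidean(s, centers[c]))
            groups[nearest].append(s)
        centers = [[sum(dim) / len(g) for dim in zip(*g)] if g else c
                   for g, c in zip(groups, centers)]
    return centers, groups
```

With, say, four 2-D samples and w = 2, stage 1 already finds two well-separated initial centers, which K-means then refines; no human choice of k is involved.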
5 Application Benefit Analysis

Developing the intelligent stadium system with an Internet-of-Things deep learning algorithm not only helps realize digital, information-based stadium management, it also improves the management efficiency of the current intelligent stadium by 28.0% and yields positive application benefits. Through pervasive computing, cloud computing, and artificial-intelligence data mining, the development of the intelligent stadium system is optimized on the basis of the original facility: resource allocation within the stadium is improved, the system's construction is optimized, a new service platform for sports information and sports knowledge is established, and intelligent sports prediction and monitoring modules are built. Together these support intelligent decision-making for venue management, ensure that venues are managed intelligently and in an orderly way, and deliver positive application benefits.
6 Conclusion

In summary, under an Internet-of-Things deep learning algorithm, the development and design of the intelligent stadium system can enhance the stadium's core competitiveness by strengthening the application of Internet of Things technology. Countermeasures are formulated according to the stadium's actual situation, focusing on the customer-relationship management and maintenance system, which has many defects: the system's function design does not meet current needs, the data in the system is not comprehensive, the system's serviceability to customers is relatively weak, and its capacity for resource interaction and sharing is
weak. This paper studies and analyzes the customer management system currently used in the stadium in order to achieve intelligent control of the stadium system, improve the application performance of the intelligent stadium system, play a positive role, and promote the application of the development scheme in practice.
Acknowledgments. Special fund for science and technology innovation strategy of Guangdong Province (special fund for the "Climbing Plan") in 2020: "Intelligent gymnasium management system" (Project No. pdjh2020b1371).
References
1. Wang, G., Qiao, K., Lan, T., et al.: Intelligent stadium system based on the internet of things and its development trend. J. Changsha Univ. 26(5), 145–147 (2012)
2. Dai, X., Zhang, N.: Beijing stadium reservation and monitoring system based on the internet of things. Internet Things Technol. 7, 69–71 (2012)
3. David, S.: Wireless small cell technology trends. China Integr. Circ. 25(1), 85–86 (2016)
4. Shi, Y., Zhang, L.: Shi Bole: a method to choose the best web service. Minicomput. Syst. (04), 61–63 (2007)
Study on Pasteurization and Cooling Technology of Orange Juice Based on CFD Technology

Meiling Zhang(&)

Laiwu Vocational and Technical College, Jinan 271100, Shandong, China
[email protected]
Abstract. To study the sterilization and cooling process of orange juice accurately, to ensure juice quality, and to reduce energy consumption and production time, the high-temperature sterilization and cooling process of canned orange juice was simulated, based on an analysis of the killing temperatures of common bacteria in juice and on verification of the model's reliability. The results show that the optimal sterilization process is 95 °C for 1020 s, and the optimal cooling process is 10 °C for 720 s.

Keywords: CFD · Orange juice · High-temperature sterilization · Cooling
1 Introduction

Existing sterilization technologies mainly include pasteurization (75–100 °C), high-temperature short-time sterilization (100–135 °C), and ultra-high-temperature instantaneous sterilization (120–150 °C). Pasteurization is adopted by most manufacturers in the juice industry, and preheating is often used in production to shorten the sterilization time [1]. This leads to overheating during juice sterilization: on the one hand, overheating destroys the juice's original nutrition and taste; on the other, it raises energy consumption. The juice must also be cooled after sterilization and before filling, and different cooling media differ greatly in cooling rate and energy consumption. Computational fluid dynamics (CFD) is a computer-based numerical tool for solving fluid flow and heat-transfer problems. Compared with experimental research, CFD calculation offers high flexibility, low cost, fast computation, and strong adaptability. Since the middle of the 20th century, CFD has been widely used in industries involving hydrodynamics, including the food industry. In recent years its growing application in food processing has been confirmed, for example in simulating the natural-convection heating of canned food, numerically simulating the influence of temperature on sterilization and vitamin destruction in three-dimensional packaged food, and simulating sterilization time with CFD. Varma et al. simulated the sterilization of irregular packaging bags and the natural-convection heating of conical and cylindrical canned food. Ghani et al.
© The Editor(s) (if applicable) and The Author(s), under exclusive license to Springer Nature Singapore Pte Ltd. 2021
M. Atiquzzaman et al. (Eds.): BDCPS 2020, AISC 1303, pp. 1574–1579, 2021. https://doi.org/10.1007/978-981-33-4572-0_230

created a thermal sterilization simulation model of
Study on Pasteurization and Cooling Technology of Orange Juice
1575
canned solid-liquid mixed food. Recently, Kannan et al. simulated the heat transfer of canned food in a static retort, studied the heat-transfer coefficient, and summarized an experimental correlation of the Nusselt number with the Fourier number. Anandpaul et al. simulated the temperature distribution in the can during pasteurization of canned milk and the effect of rotating the can on sterilization, concluding that rotation shortens the time for the temperature in the can to reach equilibrium; they also obtained viscosity and temperature analyses of the milk in the can during sterilization. To date, however, there are few reports, at home or abroad, on the energy-consumption analysis and process optimization of juice sterilization and cooling. In this work, the high-temperature sterilization and cooling process of canned orange juice is optimized by CFD technology combined with Carnot's theorem and the inverse Carnot theorem [2].
2 Establishment of the High-Temperature Sterilization Tank Model

2.1 Can Body Model Parameter Setting
The can height is 12.4 cm and the diameter is 5 cm. The parameters of purified water are ρ = 998 kg/m³, Cp = 4182 J/(kg·K), and k = 0.6 W/(m·K); the convective heat-transfer coefficient is α = 600 W/(m²·K).

2.2 Calculation Conditions
The computer CPU is an Intel Core i7 (2 × 2.67 GHz) with 8 GB RAM; Gambit version 2.2.30 and Fluent version 6.2.16 are used. A 3D model is meshed with the coordinate center on the x-axis, a cell size of 0.25, and about 19,000 cells in total. The grid is shown in Fig. 1.
Fig. 1. Gridding of can bodies
The calculation unit is cm, the initial temperature is 23 °C (room temperature), the boundary is set as a wall (default aluminum material), and the interior is fluid. The pasteurization temperature is 75–100 °C and the cooling temperature is 0–15 °C.
1576
M. Zhang
3 Test Verification

Test materials: distilled water (self-made in our laboratory); a metal can consistent in size with the model (height 12.4 cm, diameter 5 cm), self-made in our laboratory. Test instruments: constant-temperature water bath, SJH-4S, Ningbo Tianheng Instrument Factory; temperature collector, Fluke NetDAQ 32, Fluke (USA); thermocouples, T type, Shanghai Shengyuan Instrument Co., Ltd. Test process: thermocouple probes collect the water temperature in the bath (probe 1) and the temperature at the center of the can (probe 2). The water bath is set to heat to 90 °C; once probe 1 reaches and stabilizes at 90 °C, the can connected to probe 2 is put into the water and fully submerged, and both probes are then sampled synchronously. Probe 1 only confirms the stability of the bath's temperature field, so it is not analyzed further. The collected probe-2 temperature distribution is shown in Fig. 2. As Fig. 2 shows, the error between the CFD simulation and the test results is within 5%, indicating that the simulation method is feasible.
Fig. 2. Schematic diagram of tank center temperature
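The 5% feasibility criterion amounts to a point-wise relative-error check between the simulated and measured temperature curves. A small sketch follows; the sample readings are invented for illustration and are expressed in kelvin so the ratio is physically meaningful.

```python
def max_relative_error(simulated, measured):
    """Largest point-wise relative deviation between the CFD curve and
    the test curve, sampled at matching times."""
    return max(abs(s - m) / m for s, m in zip(simulated, measured))

# Hypothetical centre-point temperatures (K) at matching sample times
cfd  = [296.15, 320.0, 345.0, 358.0, 362.5]
test = [296.15, 324.0, 341.0, 356.0, 363.1]

assert max_relative_error(cfd, test) < 0.05  # within the 5% criterion
```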
4 Numerical Simulation of High-Temperature Sterilization of Canned Orange Juice

The accuracy of the can body model has been verified; the following study uses canned orange juice. Orange juice parameters: ρ = 1026 kg/m³, Cp = 3880 J/(kg·K), and k = 0.596 W/(m·K).

4.1 Sterilization Temperature Selection
The lethal temperatures of the bacteria found in most fruit juices are not very high; among the higher ones are about 72 °C for E. coli, 71.1 °C for Salmonella, and 80 °C for Clostridium botulinum. In addition, the pH of juice is generally 2–4, which also inhibits bacterial growth to some extent [3, 4]. Given this, the sterilization temperature that the can's center must reach is set at 80 °C.
4.2 Calculation of the Unsteady Temperature Field at Different Sterilization Temperatures
In the unsteady-temperature-field simulation, the time at which the center of the model reaches the target sterilization temperature of 80 °C differs between heating conditions, as does the difference between the center temperature and the surrounding temperature. This shows that the time for heat from the can wall to reach the center point differs with sterilization temperature, and the time for the center to reach the sterilization temperature is one of the key issues in this research. The selected sterilization temperatures are 85, 90, 95, and 100 °C. Figure 3 shows the temperature distribution over the central section at the moment the center of the can reaches the sterilization temperature, for unsteady temperature fields from 85 to 100 °C.
Fig. 3. Distribution diagram of temperature field of the central section when the temperature of central point reaches the sterilization temperature under different sterilization temperatures
When the sterilization temperature is 85, 90, 95, or 100 °C, the time for the core to reach the sterilization temperature of 80 °C is 1800, 1320, 1020, or 840 s respectively, and the temperature difference between the center point and the can wall varies from a few degrees to dozens of degrees. This is because, in the unsteady temperature field, the orange juice exchanges heat with the surrounding can wall through conduction and convection, both of which take time; over a short time the heat transferred is limited, producing a temperature difference between the center and the wall. Different initial temperature differences give different heat-transfer rates: the larger the temperature difference, the faster the heat transfer.
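One way to see why the center lags the wall so strongly is the Biot number, the ratio of surface convection to internal conduction. This back-of-the-envelope check is ours, not the paper's, and uses the can radius as the characteristic length (another common choice is volume over surface area):

```python
# Biot number for the juice can. Bi >> 0.1 means internal conduction is
# the bottleneck, so the centre lags the wall and the CFD centre-point
# times above are long.
alpha = 600.0    # W/(m^2 K), convective coefficient (Sect. 2.1)
k     = 0.596    # W/(m K), thermal conductivity of orange juice (Sect. 4)
r     = 0.025    # m, can radius used as characteristic length

Bi = alpha * r / k
print(f"Bi = {Bi:.1f}")  # about 25, far above the lumped-capacitance limit of 0.1
assert Bi > 0.1
```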
4.3 Energy Consumption Analysis of High-Temperature Sterilization of Canned Orange Juice
During high-temperature short-time sterilization, the initial enthalpy of the orange juice is the same in all cases because the initial temperature is the same (23 °C in the calculation): 1,148,579 J/kg. Under the simulated heating condition of 85 °C for 1800 s, the final enthalpy is 1,369,242 J/kg; the enthalpy values and enthalpy differences of the other high-temperature short-time sterilization processes are obtained in the same way. Let T2 be the initial temperature of the orange juice (23 °C) and T1 the heating sterilization temperature (85, 90, 95, or 100 °C respectively). From the Carnot cycle:

$\frac{q_1 - q_2}{q_1} = 1 - \frac{T_2}{T_1}$ (1)
Heat absorbed per unit mass of orange juice:

$q_2 = h_2 - h_1$ (2)
where h1 is the enthalpy per unit mass of orange juice in the initial state, J/kg, and h2 is the enthalpy per unit mass after heating, J/kg. The mechanical power consumed per unit mass of orange juice for heating and sterilization is:

$W = \frac{q_1 - q_2}{t}$ (3)
From formulas (1)–(3), the energy consumption of the different sterilization processes can be calculated. Comparing time and energy consumption together, the process with the least time consumption is 100 °C, but its energy consumption is the largest; the process with the least energy consumption is 85 °C, but it takes too long. The combination of 95 °C and 1020 s is therefore the best sterilization process.
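Formulas (1)–(3) combine to W = q2(T1 − T2)/(T2·t), since (1) gives q1 = q2·T1/T2. A short sketch using the 85 °C enthalpy values quoted above illustrates the calculation; the helper name is ours, and the numerical result is this sketch's output under the ideal-Carnot assumption, which may differ from the paper's figures.

```python
def carnot_heating_power(h1, h2, t_cold_c, t_hot_c, duration_s):
    """Mechanical power (W per kg of juice) for an ideal Carnot cycle
    pumping q2 = h2 - h1 of heat from T2 up to T1, Eqs. (1)-(3)."""
    T1 = t_hot_c + 273.15   # sterilization temperature, K
    T2 = t_cold_c + 273.15  # initial juice temperature, K
    q2 = h2 - h1            # heat absorbed per unit mass, Eq. (2)
    q1 = q2 * T1 / T2       # rearranged from the Carnot relation, Eq. (1)
    return (q1 - q2) / duration_s  # Eq. (3)

# Enthalpy values quoted in the text for the 85 degC / 1800 s process
w = carnot_heating_power(h1=1148579.0, h2=1369242.0,
                         t_cold_c=23.0, t_hot_c=85.0, duration_s=1800.0)
# w comes out at roughly 26 W per kg for this case
```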
5 Conclusions

The high-temperature sterilization and cooling process of canned orange juice was simulated with CFD, and the reliability of the can model was verified by experiment. From the unsteady temperature fields of canned orange juice calculated at sterilization temperatures of 85, 90, 95, and 100 °C, the required sterilization times at the different temperatures were obtained: 1800, 1320, 1020, and 840 s. The mechanical energy consumption per unit mass of orange juice was calculated by Carnot's theorem, giving 33049067883 W respectively. The optimal sterilization
process is 95 °C for 1020 s. Based on this optimal process, cooling temperatures after sterilization of 15, 10, 5, and 0 °C were set, and the optimal cooling times at these temperatures were calculated as 750, 720, 690, and 660 s respectively. The mechanical power consumed per unit mass of orange juice for cooling, calculated by the inverse Carnot theorem, is 20, 23, 26, and 28 W respectively. Weighing energy and time consumption together, the optimal cooling process is 10 °C for 720 s. As for choosing the optimal process, in actual production juice enterprises can select different optima according to their own conditions and needs. This experiment can serve as a reference for process optimization.
References
1. Li, S.: The origin of Pasteur's sterilization method. Brew. Technol. 6, 63–64 (1992)
2. Qiu, N., Luo, C., Yi, J.: Modern Juice Processing Technology and Equipment, pp. 45–51. Chemical Industry Press, Beijing (2006)
3. Zhang, G.: Soft Drink Processing Machinery, pp. 188–212. Chemical Industry Press, Beijing (2006)
4. Li, W.: Computational Fluid Dynamics, pp. 4–17. Huazhong University of Science and Technology Press, Wuhan (2004)
The Mode of Dynamic Block Transfer Teaching Resource Base of Distance Education Digital Virtual Machine Based on Cloud Computing

Yanhong Zhang(&)

Yunnan Open University, Kunming 650033, China
[email protected]

Abstract. To realize the sharing of learning resources in distance education and meet the needs of different learners, this paper studies the mode of dynamic transfer of the teaching resource base of the digital virtual machine in distance education. It discusses the construction of the teaching resource base from three aspects (cloud computing technology, the teaching resource base mode, and the key technologies), expounds the logical framework and platform framework of the resource base mode, and points out the broad prospects of cloud computing technology in the construction of distance education resource bases.

Keywords: Cloud computing · Distance education · Dynamic transfer of teaching resources · Digital virtual machine
1 Introduction

The shortage of teaching resource bases restricts the development of distance education. To promote distance education, the construction of teaching resource bases must be strengthened to meet the needs of different learners. Cloud computing integrates the basic hardware architecture, software resources, and platform services provided by computer networks into a supercomputer-like mode that pools the resources of different platforms; after integration, the resources can extend one another and work together. The "cloud" is a general term for computing resources on the Internet, an abstract representation of various resources that is scalable, virtualized, and dynamic. End users do not need to understand the details of cloud technology; they only need to consider how to obtain the learning resources they need through the network. This paper analyzes the application of cloud computing technology in the construction of distance education digital teaching resource bases [1].
2 Cloud Computing Technology

Research Title: A Research on the Support Service Model of Open University Distance Open Education under the Background of "Internet + Education" (Project Number: 2018JS372).
© The Editor(s) (if applicable) and The Author(s), under exclusive license to Springer Nature Singapore Pte Ltd. 2021
M. Atiquzzaman et al. (Eds.): BDCPS 2020, AISC 1303, pp. 1580–1585, 2021. https://doi.org/10.1007/978-981-33-4572-0_231

Based on the network and a large number of computer devices, cloud computing distributes different tasks over the resource pool these computers form. Users can obtain corresponding information services, storage space, and
The Mode of Dynamic Block Transfer Teaching Resource Base
1581
computing power according to their own needs. When users adopt the cloud computing mode, they only need to pay certain fees to the cloud computing service provider to obtain the corresponding resources. On-demand service is an important feature of cloud computing: resources in the pool can be allocated flexibly as needed and can dynamically carry large-capacity information-processing requirements. Cloud computing can be classified by service type and by application scope. By service type it falls into three categories: infrastructure as a service (IaaS), platform as a service (PaaS), and software as a service (SaaS). By application scope it falls into public, private, and hybrid clouds, whose information-security levels differ. If the information-security requirements are low, a public cloud can be rented; if they are high, computer technology can be used to build a private or hybrid cloud.
3 The Model of the Distance Education Digital Teaching Resource Base Based on Cloud Computing

3.1 Logical Architecture of the Resource Base Mode
The logical structure of the cloud-computing-based distance education digital teaching resource base is shown in Fig. 1. The resource "cloud" in distance education is composed of various network learning resource libraries. Through the cloud platform, the user terminal can choose learning resources at will, and the best path for transmitting them is matched automatically. Because the cloud platform consists of multiple servers, data transmission can also be shared among them [2]; if one server fails, another can be chosen without delaying transmission. The cloud platform realizes the sharing of different learning resources, and the end-user application can access any of them through a unified resource list. When users access learning resources, the system first automatically analyzes the user's IP address, determines the route, and establishes the best resource link, improving access speed. Cloud computing has strong support-service capability: users need register only once to access multiple information resource bases. In this way, the software, hardware, and network resources in the cloud can be fully utilized to meet the learning society's need to "learn at any time, learn everywhere, learn for everyone". In addition, the distance learning terminal can be installed on broadband television (IPTV), personal digital assistants (PDAs), mobile phones, and personal computers, expanding the range of usable terminals.

3.2 Platform Architecture of the Mode
After analyzing the logical architecture of the mode, we need to further study its platform architecture. According to the current state of the distance education system's network infrastructure, the platform architecture of the learning
1582
Y. Zhang
Fig. 1. Logical framework of distance education digital teaching resource model based on cloud computing
resource base mode of distance education based on cloud computing is built. The application of cloud computing technology provides better technical support for learners. The purpose of this study is to provide students with a standardized, integrated, efficient, scalable, and interactive teaching resource base. End users can also become builders and managers of the resource base, realizing the co-construction and sharing of resources among the multiple platforms of distance education and a high utilization rate of resources. The platform of the cloud-computing-based digital teaching resource base model comprises the client user interface layer, the shared service layer, the platform service layer, and the infrastructure layer [3].
(1) Client user interface layer. Its users mainly include system administrators, resource administrators, teachers, students, and visitors. The system administrator is mainly responsible for authorization management, security management, and technical support; the resource administrator for resource auditing, uploading, and rights allocation; and teachers, students, and visitors for browsing, querying, uploading, and downloading learning resources according to their different authority.
(2) Shared service layer. This layer sits at the top of the cloud platform and includes infrastructure, basic office functions, student status management, academic management, online teaching and research, comprehensive evaluation, teaching evaluation, and so on. It embodies the multiple functions of the digital resource base for distance education and provides terminal services through multiple interfaces, reflecting the sharing logic of the resource base model.
(3) Platform service layer (management middleware layer).
This layer is the central, core part of the platform, composed mainly of management modules: user management, service management, resource management, monitoring management, log management, automation management, process service management, backup management, and so on. These modules can be freely combined and extended to realize the sharing of cloud resources.
(4) Infrastructure layer (resource layer). This layer sits at the bottom of the cloud platform; it is the foundation of cloud sharing and supports the whole cloud-sharing platform. It consists of a virtual resource layer and a physical resource layer. The virtual resource layer mainly includes storage, computing, software, and data resource pools; through high virtualization these different pools form one resource pool that can be managed uniformly. The physical resource layer is based on the distance education equipment: storage devices, management servers, databases, network equipment, and so on. Through the infrastructure layer, cloud computing forms a large number of resources into an IT resource pool, creating highly virtualized, dynamic learning resources for learners to use.
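The division of roles and rights in the client user interface layer can be captured as a simple permission table, which the platform service layer's user management module would consult. The operation names and the exact split of rights among teachers, students, and visitors below are illustrative assumptions, not the paper's specification.

```python
# Role -> allowed resource operations for the client user interface layer.
# Role names follow the text; the operation vocabulary is invented.
PERMISSIONS = {
    "system_admin":   {"authorize", "secure", "support"},
    "resource_admin": {"audit", "upload", "assign_rights"},
    "teacher":        {"browse", "query", "upload", "download"},
    "student":        {"browse", "query", "download"},
    "visitor":        {"browse", "query"},
}

def allowed(role: str, operation: str) -> bool:
    """Check one operation against the role's authority."""
    return operation in PERMISSIONS.get(role, set())
```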
4 Key Technologies

4.1 Virtualization Technology
Virtualization is a technical means of sharing resources by dividing a single physical resource into several independent ones. Its advantage is that it can create a more flexible system at lower cost: teaching resources can be deployed easily and flexibly, with high security and easy management. Virtualization builds a virtual computer environment on the network infrastructure and can provide multiple virtual computers at the same time; users can install application software and operating systems in each virtual computer environment and obtain the effect of multiple machines. The component that manages the computer's network interfaces, disks, memory, and processors and allocates these devices to the different virtual computers is called the virtual machine manager (VMM). Virtualization gives the cloud computing environment strong resource-management capability. A commercial system can be chosen to meet the requirements of the virtual environment, but technical problems such as virtual machine live migration, distributed resource scheduling (DRS), disk storage migration, virtual machine backup, and virtual machine fault tolerance still need to be solved.

4.2 QEMU
The QEMU migration protocol specifies the storage format of a QEMU virtual machine and the control signals exchanged between the source and destination of the migration. The protocol sits in the application layer over a TCP connection, with interactive content in binary form. The virtual machine format design is based on the virtual machine save and restore format. The protocol divides the migration into four parts: start, iteration, end, and completion, that is, the stages in which the two ends exchange the four kinds of control data.
Start stage (QEMU_VM_SECTION_START): after sending the migration start signal, the source sends the relevant attributes of each device in turn, including its device name, internal device ID, and version ID. The destination then applies the settings needed for device migration, and an end-of-packet marker is sent.
Iteration stage (QEMU_VM_SECTION_PART): because a large amount of data is sent in this stage, the device storage data is divided into multiple groups, and each group is transmitted repeatedly until the remaining migration data falls below the threshold. Each group carries a packet header consisting of QEMU_VM_SECTION_PART and the internal device ID, and its payload is one of three types: the device storage content itself, the disk migration progress, or the dirty-data sector code. The iteration stage consists of two phases: bulk disk migration and dirty-data synchronization.
End stage (QEMU_VM_SECTION_END): the source first sends the end header ID and internal device ID, then the dirty data, and finally the migration completion progress. Note that the virtual machine must be stopped at this stage.
Completion stage (QEMU_VM_SECTION_FULL): this stage corresponds to the start stage. The source first sends the completion-stage header ID and then the relevant attributes of each device in turn, including the device name, internal device ID, and version ID.
The overall framework of the block migration algorithm based on the memory cache is as follows:
Definitions:
  max_down_bytes: the amount of data that can be migrated within the longest downtime the user will accept
  mig_bgn: migration start flag packet
  mig_eos: migration end flag packet

establish connection;
set block migration parameters;
set dirty bitmap;
send mig_bgn;
WHILE (dirty > max_down_bytes)
  IF bulk transfer is not complete THEN
    send bulk data;
  ELSE
    send dirty sector numbers;
  END IF
END WHILE
WHILE (remaining > 0)
  send remaining dirty sectors;
END WHILE
send mig_eos;
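The algorithm above is pseudocode from the paper; a rough Python simulation of the same pre-copy loop (with an invented dirty-block model and block counts, not QEMU's actual implementation) might look like:

```python
import random

def migrate(total_blocks, max_down_blocks, dirty_prob=0.0005, seed=0):
    """Simulate the pre-copy loop: send bulk blocks first, then re-send
    dirtied blocks until the dirty set fits within the allowed downtime."""
    rng = random.Random(seed)
    sent = 0          # bulk blocks already sent
    dirty = set()     # blocks written to after being sent
    rounds = 0
    while sent < total_blocks or len(dirty) > max_down_blocks:
        if sent < total_blocks:      # bulk phase not yet complete
            sent += 1
        elif dirty:                  # iterative dirty-sector phase
            dirty.pop()
        # the running guest may dirty blocks that were already sent
        for b in range(sent):
            if rng.random() < dirty_prob:
                dirty.add(b)
        rounds += 1
    remaining = len(dirty)           # flushed during the final downtime
    return rounds, remaining

rounds, remaining = migrate(total_blocks=100, max_down_blocks=4)
```

The loop condition mirrors the pseudocode: iteration continues while the dirty set exceeds the downtime threshold, and the leftover dirty blocks are flushed only after the guest stops.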
The Mode of Dynamic Block Transfer Teaching Resource Base
4.3 Large Scale Learning Resource Processing Technology
Parallel processing is applied to learning resources; this study uses MapReduce. MapReduce is a distributed parallel computing model mainly used to process large-scale data sets. Its basic idea is to decompose the problem into map and reduce operations: the map program cuts the data into blocks, these blocks are allocated to different computers for processing to obtain distributed results, and the reduce program then summarizes the results of the parallel processing and outputs the result the user requires.
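The map/reduce decomposition just described can be illustrated with a minimal single-process Python sketch (a word count over hypothetical learning-resource text blocks, not a real distributed MapReduce deployment):

```python
from collections import defaultdict
from itertools import chain

def map_phase(block):
    """Map: cut one data block into (key, 1) pairs."""
    return [(word, 1) for word in block.split()]

def reduce_phase(pairs):
    """Reduce: summarize the parallel map results per key."""
    totals = defaultdict(int)
    for key, count in pairs:
        totals[key] += count
    return dict(totals)

# In a real deployment the blocks would be distributed to different machines.
blocks = ["distance education resource", "education resource cloud",
          "cloud resource"]
result = reduce_phase(chain.from_iterable(map_phase(b) for b in blocks))
# result["resource"] == 3
```

Each block is mapped independently (the parallelizable step), and only the final summation requires combining results.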
5 Conclusions With the increasingly fierce competition between domestic and foreign distance education institutions, the construction of teaching resource base plays an increasingly obvious role in improving the core competitiveness of distance education. More and more mature cloud computing technology will not only provide more effective resource services for distance education institutions and end users, but also become the future development trend of distance education resource library construction, with broad application prospects.
References 1. Wang, W.: Analysis of cloud computing and its application in higher vocational education. Sci. Technol. Inf. 34, 232 (2010) 2. Chen, Y.: Discussion on the construction of teaching resource base of vocational education major. Vocat. Educ. Forum 8, 52–54 (2011) 3. Wang, Q., He, L., Zhao, Y., et al.: Virtualization and Cloud Computing, pp. 126–127. Electronic Industry Press, Beijing (2009) 4. Sha, H., Yang, S.: Key technologies of cloud computing and research on cloud computing model based on Hadoop. Softw. Guide 9(9), 9–11 (2010)
Computer Fractal Technology Based on MATLAB Weiming Zhao(&) Department of Computer Science, Private Hualian College, Guangzhou 510663, China [email protected]
Abstract. This paper analyzes the types of image enhancement technology, mainly from the two aspects of spatial-domain and frequency-domain enhancement, explores their application methods and effects, and points out the shortcomings of the technology, thereby promoting its further development. For example, using IFS theory and L-system theory from fractal geometry, the MATLAB language is used to generate self-similar and exquisite fractal patterns. This provides a new method for landform description, biological simulation research, art design, and other fields.

Keywords: Fractal · Computer graphics · L-system · MATLAB
1 Introduction
With the development of computer technology, computer image processing has advanced by leaps and bounds. Its applications are now very extensive: almost every field involving imaging relies on computer image processing. An image is affected by many factors during its formation, so it differs somewhat from the original scene, and the imaging result is rarely ideal. To change this, image quality must be improved with image enhancement technology. Image enhancement highlights the important information in an image and suppresses unneeded information according to specific requirements. Such processing improves the clarity of the original image, increases its information content, and enables effective monitoring and measurement of the objects of interest, meeting people's needs. Using the initiator and generator elements of Fig. 1, a snowflake shape, the Koch curve, can be constructed. Each line segment of the initiator is replaced by four equal segments at a time; the scaling factor is 1/3, so the fractal dimension is D = ln 4 / ln 3 ≈ 1.26. Each step multiplies the length by a factor of 4/3, so as more detail is added, the length of the fractal curve tends to infinity.
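The Koch construction described above can be checked numerically; the following Python sketch (this paper otherwise uses MATLAB) performs the segment replacement and confirms the segment count, length growth, and fractal dimension:

```python
import math

def koch(points, depth):
    """Replace each segment by four segments of 1/3 its length (one Koch step)."""
    if depth == 0:
        return points
    out = []
    for (x1, y1), (x2, y2) in zip(points, points[1:]):
        dx, dy = (x2 - x1) / 3, (y2 - y1) / 3
        a = (x1 + dx, y1 + dy)            # 1/3 point
        b = (x1 + 2 * dx, y1 + 2 * dy)    # 2/3 point
        # apex: middle third rotated by +60 degrees around point a
        px = a[0] + dx * math.cos(math.pi / 3) - dy * math.sin(math.pi / 3)
        py = a[1] + dx * math.sin(math.pi / 3) + dy * math.cos(math.pi / 3)
        out += [(x1, y1), a, (px, py), b]
    out.append(points[-1])
    return koch(out, depth - 1)

curve = koch([(0.0, 0.0), (1.0, 0.0)], 3)
segments = len(curve) - 1               # 4**3 = 64 segments after 3 steps
length = segments * (1 / 3) ** 3        # total length grows as (4/3)**depth
dimension = math.log(4) / math.log(3)   # ≈ 1.26
```

After three replacement steps a unit segment becomes 64 pieces of length 1/27, so the total length is (4/3)³ ≈ 2.37 and keeps growing without bound as the depth increases.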
© The Editor(s) (if applicable) and The Author(s), under exclusive license to Springer Nature Singapore Pte Ltd. 2021 M. Atiquzzaman et al. (Eds.): BDCPS 2020, AISC 1303, pp. 1586–1591, 2021. https://doi.org/10.1007/978-981-33-4572-0_232
Computer Fractal Technology Based on MATLAB
Fig. 1. Initial and generative elements.
2 Types of Computer Image Enhancement Technology
According to the domain in which it operates, image enhancement technology can be divided into two types: spatial-domain enhancement and frequency-domain enhancement. Spatial-domain enhancement processes image pixels directly. Its methods fall into three types: gray-scale transformation to expand contrast, smoothing to remove noise, and sharpening to enhance edges. Gray histogram equalization is an image enhancement technique with a strong effect. Its principle is to transform the gray histogram of an image so that the distribution of pixels across gray levels becomes approximately uniform [1]. This makes the number of pixels in each gray level roughly equal and is a typical spatial-domain technique; however, because the gray histogram is only an approximation of a probability density function and the gray levels are discrete, a perfectly uniform result cannot be achieved.
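A pure-Python sketch of gray histogram equalization (the tiny "image" below is invented for illustration) shows how mapping through the normalized cumulative histogram spreads crowded gray levels apart:

```python
def equalize(gray, levels=256):
    """Gray-level histogram equalization: map each level through the
    normalized cumulative histogram so levels spread toward uniform."""
    n = len(gray)
    hist = [0] * levels
    for g in gray:
        hist[g] += 1
    cdf, total = [], 0
    for h in hist:
        total += h
        cdf.append(total)
    # standard mapping: s = round((levels - 1) * cdf(g) / n)
    return [round((levels - 1) * cdf[g] / n) for g in gray]

# a low-contrast "image": all values crowded into 100..103
img = [100, 100, 101, 101, 102, 102, 103, 103]
out = equalize(img)
# the output now spans a much wider range of gray levels
```

Note that the discrete mapping spreads the four crowded levels across the full range but, as the text observes, cannot make the histogram exactly flat.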
2.1 Frequency Domain Enhancement Technology
Frequency-domain image enhancement transforms the image from the spatial domain to the frequency domain, processes the relevant information there, and then inverse-transforms the result to achieve the processing goal. Commonly used transforms are the Fourier transform, the Walsh-Hadamard transform, the DCT, and so on [2]. Frequency-domain enhancement mainly involves the following low-pass filters. (1) Ideal low-pass filter. This filter sets a cut-off frequency at a given distance from the origin and works through the corresponding transfer function. Because high-frequency components contain much of the edge information, the ideal low-pass filter removes noise but also loses some edge information, which affects image clarity and blurs the edges. (2) Butterworth low-pass filter. This filter also requires a transfer function; compared with the ideal filter, its transition is not steep, and it attenuates continuously instead of showing a sharp discontinuity. It suppresses noise while keeping edge blur low, without the so-called ringing effect. (3) Exponential low-pass filter. This filter is also commonly used in image processing. Its operation likewise requires setting the
corresponding transfer function. Such a filter suppresses noise and reduces edge blur without producing an obvious ringing effect.
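The contrast between the ideal and Butterworth low-pass responses can be seen directly from their transfer functions (a Python sketch; the cut-off distance D0 and order n below are illustrative choices):

```python
def ideal_lowpass(d, d0):
    """Ideal low-pass: passes everything below the cut-off, removes the rest."""
    return 1.0 if d <= d0 else 0.0

def butterworth_lowpass(d, d0, n=2):
    """Butterworth low-pass transfer function H = 1 / (1 + (D/D0)^(2n)):
    it attenuates continuously instead of cutting off abruptly."""
    return 1.0 / (1.0 + (d / d0) ** (2 * n))

d0 = 30.0
# near the cut-off, the ideal filter jumps from 1 to 0 ...
step = ideal_lowpass(29.0, d0) - ideal_lowpass(31.0, d0)          # 1.0
# ... while the Butterworth response changes smoothly
smooth = butterworth_lowpass(29.0, d0) - butterworth_lowpass(31.0, d0)
```

The abrupt step of the ideal filter in the frequency domain is what produces ringing and edge blur after the inverse transform; the gentle Butterworth roll-off (H = 0.5 exactly at the cut-off) avoids it.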
3 The Principle of Image Generation by the IFS System
The iterated function system (IFS) is an important branch of fractal theory and one of the most promising areas of fractal image processing. An IFS regards the image to be generated as a collage of many self-similar or self-affine blocks: self-similarity is realized by similarity transformations, and self-affinity by affine transformations. A similarity transformation is a scale transformation whose ratio must be the same in all directions, while an affine transformation allows the ratio to differ between directions [3]. Intuitively, similarity transformations can enlarge, shrink, or even rotate, but they do not deform; affine transformations may deform. Of course, a similarity transformation is a special case of an affine transformation. The mathematical expression of an affine transformation is as follows:
x' = ax + by + e
y' = cx + dy + f
where x and y are the coordinates of a point before the transformation, x' and y' are its coordinates after the transformation, and a, b, c, d, e, f are the affine transformation coefficients. With an iterated function system, many natural landscapes can be generated. So how is such a figure controlled by a program? From the application's point of view, through affine coordinate transformations, i.e. the superposition of three effects: rotation, distortion, and translation. The basic starting point of IFS is that, under affine transformation, the whole and its parts, and the parts among themselves, share a self-similar structure; therefore, several affine transformations that contract the whole onto a part, or map part to part, are selected and iterated randomly to obtain all kinds of fractal sets. In the simulation section of this paper, the IFS introduced above is used to generate geomorphic features in the MATLAB language [3–5].
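The random-iteration idea can be sketched in Python (this paper's simulations use MATLAB); the three affine maps below generate the Sierpinski triangle and are illustrative, not the geomorphic IFS used later in the paper:

```python
import random

# Each map is (a, b, c, d, e, f) in  x' = a*x + b*y + e,  y' = c*x + d*y + f.
# These three contractions generate the Sierpinski triangle.
MAPS = [
    (0.5, 0.0, 0.0, 0.5, 0.0, 0.0),
    (0.5, 0.0, 0.0, 0.5, 0.5, 0.0),
    (0.5, 0.0, 0.0, 0.5, 0.25, 0.5),
]

def ifs_points(n, seed=0):
    """Random iteration ('chaos game'): repeatedly apply a randomly
    chosen affine map; the orbit settles onto the fractal attractor."""
    rng = random.Random(seed)
    x, y = 0.0, 0.0
    pts = []
    for i in range(n):
        a, b, c, d, e, f = rng.choice(MAPS)
        x, y = a * x + b * y + e, c * x + d * y + f
        if i >= 20:        # skip the transient before the orbit converges
            pts.append((x, y))
    return pts

pts = ifs_points(5000)
# for these maps the attractor lies inside the unit square
inside = all(0.0 <= x <= 1.0 and 0.0 <= y <= 1.0 for x, y in pts)
```

Plotting `pts` (e.g. with `plot(x, y, '.')` in MATLAB, or any scatter plot) reveals the triangle; swapping in different coefficient sets produces different fractal sets.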
4 The Making of MATLAB Functions
MATLAB has a large number of library functions and also allows users to define their own. Write the following two lines of code into the M-file editing window and save them; the file is automatically named f.m after the function.
function w = f(x, y, z)
w = x.^3 - 2*y.^2 - 2*z + 5;

These two statements construct a simple custom function. Enter f(1, 2, 3) in the command window and press Enter to output w = -8. The function f(x, y, z) can also be called from other programs (or custom functions); for example, the function fp() defined below calls f():

function wp = fp(x, y, z)
wp = f(1, 2, 3) + f(x, y, z);

Enter fp(1, 2, 3) in the command window and press Enter to output wp = -16.

4.1 Three-Dimensional Drawing with MATLAB

t = 0:pi/50:10*pi;
plot3(sin(t), cos(t), t)   (Fig. 2)
Fig. 2. (a) Three dimensional spiral curve
The following script draws a surface:

[X, Y] = meshgrid(-2:0.1:2);
Z = X .* exp(-X.^2 - Y.^2);
plot3(X, Y, Z)   (Fig. 3)
Fig. 3. (b) 3D surface.
5 Application of Image Enhancement Technology
Image enhancement has always been an important topic in the field of image processing. Through a series of technical means, it selectively clarifies and highlights the features of interest in an image, or suppresses unneeded information. Its application mainly improves image quality, enriches the relevant information, and promotes effective interpretation and recognition of the image. The technology is widely applied in medicine, remote sensing, microbiological research, criminal investigation, and the military, where it can identify patterns in the original image and help achieve effective monitoring of the target.
6 Conclusions
The combination of fractal geometry and computer graphics produces fractal graphics. An image is affected by many factors during generation, transmission, and conversion, which degrades image quality. Image enhancement technology can effectively enrich the information of an image, improve its quality, and promote its effective interpretation and recognition. Computer image enhancement based on MATLAB offers rich functions and fast image processing. With this technology, image enhancement can achieve good results and meet the needs of different occasions; however, some defects remain, and further improvement is needed for the technology's application and promotion value to grow.
References
1. Jing, Q.: Analysis of image enhancement technology based on MATLAB. Electron. Des. Eng. 18, 87–89 (2017)
2. Hao, Z.: Research on image enhancement technology based on MATLAB. Inf. Comput. (Theor. Ed.) 9, 79–81 (2015)
3. Hu, D., Wang, J., Zhang, R., He, K.: Application of fractal pattern and chaos pattern. Comput. Eng. Des. 28(4), 893 (2007)
4. Sun, B.: Fractal Algorithm and Programming – Visual C++ Implementation, pp. 86–87. Science Press, Beijing (2004)
5. Manyn, T.: A new approach to morphing 2D affine IFS fractals. Comput. Graph. 28(2), 249–272 (2004)
Construction Collaborative Management Method Based on BIM and Control Calculation Qiang Zhou and Xiaowen Hu(&) Nantong Institute of Technology, Nantong 226002, Jiangsu, China [email protected], [email protected]
Abstract. BIM technology is an important means of solving coordination problems in the construction of engineering projects. A construction collaborative management method combined with steerable computing can visualize the calculation and simulation process in real time and control it dynamically. After analyzing the important role that combining steerable computing with BIM plays in construction collaborative management, and drawing on BIM theory, engineering collaborative management theory, and multi-objective optimization theory, a multi-objective optimization model based on IFC is established, and a scheme combining steerable computing, a coevolutionary algorithm, and BIM is designed to solve the model. A collaborative management model for the construction stage is then constructed that integrates the multi-objective optimization process with the collaborative workflow of all participants. The model can effectively realize collaborative work among the participants in the construction stage, collaborative multi-objective optimization, and real-time visualization and control of the process.

Keywords: BIM · Control computing · Collaborative management · Construction management
1 BIM and Control Computing
1.1 The Necessity of Combining BIM and Steerable Computing
In the 1990s, Marshall et al. formally defined the concept of steerable computing for the first time, describing it as a highly interactive simulation method that can not only visualize intermediate simulation results in real time but also feedback-control the simulation process in real time. In other words, people can monitor the current calculation state through the visualization results, analyze and make decisions on the intermediate data in time, and feed human judgment back into the running calculation, thereby steering the calculation process and its trend. This is an important direction in the development of numerical simulation technology [1]: it not only reduces the complexity of numerical simulation but also greatly shortens the simulation time. © The Editor(s) (if applicable) and The Author(s), under exclusive license to Springer Nature Singapore Pte Ltd. 2021 M. Atiquzzaman et al. (Eds.): BDCPS 2020, AISC 1303, pp. 1592–1597, 2021. https://doi.org/10.1007/978-981-33-4572-0_233
Construction Collaborative Management Method Based on BIM
1593
It is worth noting that steerable computing can be realized easily: without changing the original calculation program, the steering control mechanism is simply inserted at the important nodes of the program, so the independence of the original program is not affected. At present, steerable computing has been applied successfully in medicine, astrophysics, physics, environmental science, and other fields.

1.2 The Combination of BIM and Steerable Computing
The combination of BIM and steerable computing is realized by inserting the steering control program at the important nodes of the BIM simulation program (milestone events, the nodes of each layer, etc.). After the steering program is inserted, the BIM-based simulation still keeps its independence. The steerable computing system realizes real-time visualization of the simulation through BIM's visualization system, and BIM realizes real-time control of the simulation process with the help of the steering control system. During BIM simulation (such as progress and cost simulation), the relevant project participants can monitor changes in cost and schedule in real time through the visual interface. When schedule delays or cost overruns are found, the user can immediately pause the simulation and modify the relevant parameters (for example, roll the simulation back to a previous node, adjust the schedule, replace materials, or modify the contract) so that the simulation proceeds in the desired direction. Users can also apply the controlled-variable method (modify one parameter while keeping the others unchanged) to quickly find the causes of schedule delay and cost overrun. This yields a project construction cooperation model based on BIM and steerable computing.
2 Quality-Schedule-Cost Collaborative Optimization Model
2.1 Multi-objective Management in the BIM Environment
The BIM collaborative management platform integrates a large amount of data and is goal-oriented. At the same time, the BIM model is built on parametric modeling, whose parameters cover all the data of the digital building components, so BIM provides a reliable data source. In the construction stage, BIM can optimize the construction process and its objectives through multi-dimensional simulation and, at the same time, support intuitive decisions based on the visual results. The IFC standard is the standard exchange format of BIM and contains all kinds of information across the project life cycle. In an IFC file, process-related entities express the progress information. For example, in the construction phase, construction activities are represented by the IfcTask entity, which is derived from the process entity. The entity IfcRelSequence describes the order of these tasks, such as start-to-start and start-to-finish. In IFC4, the IfcTask entity has six important attributes: for example, PredefinedType defines the type of the task, and TaskTime holds the time parameters
related to duration. Multiple IfcTask entities form a many-to-one relationship with IfcWorkSchedule through a relationship entity. The description of quantity information in IFC is not sufficient, so IFC needs to be extended according to the needs of the construction phase [2]. To sum up, the relationship between schedule, cost, and quality in the IFC standard is shown in Fig. 1.
Fig. 1. Schedule, cost, and quality information in IFC
2.2 Multi-objective Collaborative Optimization Model Based on IFC
In project management, the three objectives of duration, cost, and quality are closely related: adjusting one variable inevitably affects the related objectives and, in turn, the overall benefit of the project. Therefore, in target control, the three objectives should be considered comprehensively from the standpoint of the project's overall interests and optimized in coordination, so as to reach results satisfactory to all parties. The multi-objective collaborative optimization model based on IFC is established by extracting duration, cost, and quality information from IFC files [3].

(1) Duration optimization model. The construction period is the time from the start of construction to its completion. In the IFC file, IfcTaskTime.ActualDuration represents the actual duration of an operation; the project duration can be calculated by extracting and summing these values.

\min T_c = \sum_{mn \in G_i} t_{mn}  (1)

\text{s.t.} \sum_{mn \in G_i,\, G_i \in G} t_{mn} < T_r, \qquad t_{mn}^{s} \le t_{mn} \le t_{mn}^{L}
where G is the set of all paths in the network plan; G_i is the set of processes (IfcTask) on the critical path; t_{mn} is the actual duration of operation mn (IfcTaskTime.ActualDuration); t_{mn}^{s} is the shortest duration of operation mn and t_{mn}^{L} its longest duration (both bounded by TaskTime.ScheduleStart and TaskTime.ScheduleFinish); T_c is the calculated construction period; and T_r is the required construction period.

(2) Cost-duration model. Dai Hongfu proposed, based on the analysis and processing of actual data, that the relationship between an operation's duration and its cost is a quadratic curve. This model adopts that quadratic duration-cost relationship and is built from the relevant information in IFC documents: the project cost consists of the direct cost C_{mn} + e_{mn}(t_{mn}^{N} - t_{mn})^2 and the indirect cost \mu T_c. The model below also considers the diminishing-marginal-utility effect and the impact of the duration reward-and-punishment system on cost, that is, the term \beta(T_c - T_r).
X
N 2 Cmn þ tmn þ tmn þ l Tc þ bðTc Tr Þ
ð2Þ
\text{s.t.} \quad t_m + t_{mn} - t_n \le 0; \qquad \beta = \beta_1 \text{ if } T_c - T_r > 0, \quad \beta = \beta_2 \text{ if } T_c - T_r \le 0

where C_{mn} is the direct cost of completing operation mn under normal conditions (IfcCostValue_{mn}); e_{mn} is the marginal-cost increment factor; t_{mn}^{N} is the normal duration of operation mn (IfcTask_{ij}.TaskTime.ScheduleFinish); t_{mn} is the actual duration of operation mn (IfcTask_{ij}.TaskTime.ActualDuration); t_n is the start time of event n (IfcTask_{ij}.TaskTime.ActualStart); \mu is the indirect cost rate; and \beta is the duration reward-and-punishment coefficient, with \beta_1 the penalty coefficient and \beta_2 the reward coefficient.

(3) Multi-objective collaborative optimization model. The smaller the objective function value, the better the optimization effect.
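To make the two objective functions concrete, they can be evaluated on a toy data set (a Python sketch; all activity costs, durations, and coefficients below are invented for illustration):

```python
def project_duration(critical_path_durations):
    """Eq. (1): T_c is the sum of actual durations along the critical path."""
    return sum(critical_path_durations)

def project_cost(acts, mu, beta_penalty, beta_reward, t_required):
    """Eq. (2): sum of direct costs C_mn + e_mn*(tN_mn - t_mn)^2, plus
    indirect cost mu*T_c and the reward/penalty term beta*(T_c - T_r)."""
    t_c = project_duration([t for (_c, _e, _tn, t) in acts])
    direct = sum(c + e * (t_norm - t) ** 2 for (c, e, t_norm, t) in acts)
    beta = beta_penalty if t_c > t_required else beta_reward
    return direct + mu * t_c + beta * (t_c - t_required)

# each activity: (C_mn, e_mn, normal duration tN_mn, actual duration t_mn)
acts = [(100.0, 2.0, 10, 8), (80.0, 1.5, 6, 6)]
t_c = project_duration([8, 6])                        # 14
cost = project_cost(acts, mu=3.0, beta_penalty=5.0,
                    beta_reward=2.0, t_required=15)   # T_c < T_r: reward case
```

Shortening the first activity from its normal 10 days to 8 days adds a quadratic crash cost but finishes before T_r, so the reward term reduces the total.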
3 The Multi-objective Collaborative Optimization Process
A coevolutionary algorithm is introduced to solve the above optimization model because, in multi-objective collaborative optimization, this algorithm takes full account of the associations between the objectives and uses
the cooperative method to deal with conflicts and aggregation density to update the optimal solution set, obtaining an optimal solution set with good distribution and uniformity [4]. The multi-objective optimization in the construction stage therefore comprises four processes: first, read the relevant data from the BIM database; second, call the coevolutionary algorithm to solve the established optimization model; third, while the algorithm runs, use the BIM simulation system to realize real-time visualization and steer the process with the control system according to the feedback; fourth, test whether the optimization results meet the requirements and output the optimal solution set. Introducing the coevolutionary algorithm and steerable computing into BIM brings many conveniences: first, the rich information contained in the BIM model is used for multi-objective optimization of schedule, cost, and quality; second, during optimization, BIM and steerable computing visualize the calculation process, and the operator can monitor it directly.
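The coevolutionary solver itself is not reproduced here; the underlying notion of keeping a non-dominated (Pareto) solution set for the three minimized objectives can, however, be sketched in Python (the candidate scores are random placeholders, not real project data):

```python
import random

def dominates(a, b):
    """a dominates b: no worse in every objective, strictly better in one."""
    return (all(x <= y for x, y in zip(a, b))
            and any(x < y for x, y in zip(a, b)))

def pareto_front(solutions):
    """Keep only the non-dominated (duration, cost, quality-loss) vectors;
    smaller is better in every objective, as in the models above."""
    return [s for s in solutions
            if not any(dominates(o, s) for o in solutions if o != s)]

rng = random.Random(1)
candidates = [(rng.uniform(10, 20),     # duration
               rng.uniform(100, 300),   # cost
               rng.uniform(0.0, 1.0))   # quality loss
              for _ in range(200)]
front = pareto_front(candidates)
clean = all(not any(dominates(o, s) for o in candidates if o != s)
            for s in front)
```

An evolutionary solver refines such a set iteratively; the steering system lets the operator inspect the current front mid-run and adjust parameters before continuing.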
4 Construction Cooperation Model Establishment
Current steerable computing systems can visualize and control the process in real time but offer little automatic optimization. In the scheme combining steerable computing with BIM, the coevolutionary algorithm is therefore introduced to realize automatic optimization of the targets. Based on the above analysis and on BIM theory, this paper introduces steerable computing and a collaborative optimization algorithm to build a process collaboration model for the construction stage. The model improves the degree of collaboration in the construction stage; promotes information interaction and real-time sharing among stakeholders during construction; optimizes the control objectives automatically; and provides real-time visualization and monitoring of the construction process, achieving a high level of human-computer interaction. The collaborative management model realizes the following functions: (1) Collaborative management of all participants. Each participant establishes a target collaborative constraint system, steers the BIM simulation through the steerable computing system, and selects the optimal solution set through cooperation, realizing collaborative management among participants. (2) Automatic multi-objective collaborative optimization. When the progress, cost, or quality of the project deviates during construction, the BIM collaborative management platform sends early-warning information to all participants and then starts the steering collaborative optimization simulation module to optimize the objectives. (3) Real-time control of the process. Through real-time monitoring of on-site construction, construction personnel are guided to track the dynamic construction information in real time.
5 Epilogue
As engineering projects grow larger and more complex, the construction stage exhibits varied construction activities, tight schedules, numerous stakeholders, and complex cooperation, so support for information sharing and collaborative work is urgently needed. This paper introduces steerable computing and a coevolutionary algorithm into the BIM application to realize collaborative management in the construction stage. A time-cost-quality collaborative optimization model based on IFC is established, and a multi-objective collaborative optimization algorithm is designed. The method provides a real-time visualization and control system for simulation and optimization during construction; users can adjust the relevant parameters of the BIM model according to the real-time visual interface.

Acknowledgments. Construction system science and technology project of the Jiangsu Department of Housing and Urban-Rural Development: Research and implementation of an information management system for construction engineering laboratories based on MATLAB (No. 2019ZD047).
References 1. Succar, B., Sher, W., Williams, A.: An integrated approach to BIM competency assessment, acquisition and application. Autom. Constr. 35(11), 174–189 (2013) 2. Li, L., Zhenqing, Y., et al.: Application value of BIM in project management in the construction stage. Build. Technol. 8, 698–700 (2016) 3. Xu, L.: Application of BIM Technology in complex project construction. Nanchang University (2015) 4. Wang, X., Ren, Y., Yang, Q.: International development trend of BIM – based on website content analysis. J. Eng. Manag. 4, 6–11 (2012)
Style Design and Color Analysis of Cloud Shoulder Li Wang(&) Dalian Polytechnic University, Dalian 116000, China [email protected]
Abstract. The cloud shoulder is an important part of traditional national costume art, and its colors carry national characteristics and cultural heritage. This paper proposes an inheritance technology for cloud shoulder color. First, with the help of MATLAB, the composition proportions of hue, brightness, and purity of the cloud shoulder are analyzed, and its color characteristics are extracted. Then, in a Processing-based pattern generation algorithm, these color features are used to color the pattern, and the colored patterns are applied in various fields of modern design. This supports the digital generation and inheritance of patterns based on color composition proportions and provides, from a technological perspective, a new method for the inheritance and innovative application of traditional clothing.

Keywords: Cloud shoulder · Color features · Color inheritance · Pattern design
1 Introduction
In Document No. 10 of 2014, "Several Opinions of the State Council on Promoting the Integration and Development of Cultural Creativity and Design Services with Related Industries", the State Council put forward the basic principles of "cultural heritage, scientific and technological support" and made the development of culture based on digital technology and content the second key task. This shows the urgent need, at the national level, to combine traditional culture with advanced digital technology. The cloud shoulder, like a bright pearl in Chinese Han costume art, was the main carrier of needlework decoration from the Sui and Tang Dynasties to the Republic of China. Most cloud shoulders adopt "cloud pattern", "Ruyi", and other decorative motifs, with gorgeous colors and elegant shapes; they cover the front and back of the shoulders like clouds, hence the name "cloud shoulder". Past research has always explored form and color together, seldom examining the feasibility and effectiveness of inheriting culture through the single factor of "color", and seldom applying the color-feature-analysis techniques of the image processing field. This paper therefore uses color feature analysis to explore the possibility of inheriting the cloud shoulder from the color perspective alone [1, 2]. The cloud shoulder was chosen as the research object of color feature analysis for three reasons: first, the traditional meaning of the form attached to © The Editor(s) (if applicable) and The Author(s), under exclusive license to Springer Nature Singapore Pte Ltd. 2021 M. Atiquzzaman et al. (Eds.): BDCPS 2020, AISC 1303, pp. 1598–1603, 2021. https://doi.org/10.1007/978-981-33-4572-0_234
Style Design and Color Analysis of Cloud Shoulder
1599
cloud shoulder, which limits its inheritance in modern design; this paper therefore separates form from color and studies color inheritance specifically. Second, color, as a symbolic element, has become an important carrier of culture, and traditional clothing uses color to stimulate people's emotional experience [3].
2 Method
Research idea: with quantitative analysis and algorithmic generation, the color characteristics of the cloud shoulder are inherited and applied innovatively. Specific steps: 1) data analysis: use numerical statistics to analyze the color distribution of the cloud shoulder and obtain its characteristic colors; 2) pattern generation: based on the known color distribution, use a color assignment algorithm to generate newly colored patterns; 3) design application: use the generated color patterns innovatively in various fields [4].

2.1 Obtaining the Colors and Proportions of the Cloud Shoulder
To obtain the color composition proportions and color values of the cloud shoulder, a computer program is needed to carry out numerical color analysis. In this paper, the colors and color proportions of the cloud shoulder are obtained on the MATLAB platform. Specific steps: 1) Read the cloud shoulder image in MATLAB and get the size of the image. 2) Convert the cloud shoulder image from RGB color space to HSV color space. Because HSV color space is closer to how people perceive color, it more easily expresses the formal elements of clothing color (hue, brightness, purity, etc.) and is convenient for classifying and counting color-element data. 3) Compute the histogram of the hue (H) attribute of the image. 4) Filter the histogram data: remove the background color of the cloud shoulder image and the noise colors with small proportions. This takes two steps: first, filter out the background-color bins, which are easy to identify because the background is a single color with large contrast against the cloud shoulder pattern; second, select the top n color bins, whose total must exceed 80% of all color pixels except the background, to ensure effective color inheritance. 5) Compute the frequency histogram of the data, then draw a pie chart of the histogram. 6) Use the same method to count the S and V attributes of each pixel color in the image, convert the statistics from HSV color space back to RGB color space, color the pie chart in proportion, and finally display the RGB pie chart on the screen, as shown in Fig. 1.
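Steps 2) to 4) above can be sketched outside MATLAB as well. The following Python sketch is illustrative only: the pixel data, bin count, and the function name `dominant_hues` are hypothetical and are not the paper's MATLAB code. It bins pixel hues and keeps the smallest set of leading bins covering at least 80% of the pixels:

```python
import colorsys
from collections import Counter

def dominant_hues(pixels, bins=36, coverage=0.8):
    """Bin pixel hues and return the smallest set of top bins whose
    combined share reaches `coverage` of all pixels, plus that share."""
    counts = Counter()
    for r, g, b in pixels:
        h, s, v = colorsys.rgb_to_hsv(r / 255, g / 255, b / 255)
        counts[int(h * bins) % bins] += 1   # hue bin of this pixel
    total = sum(counts.values())
    chosen, covered = [], 0
    for bin_idx, n in counts.most_common():  # bins by descending frequency
        chosen.append(bin_idx)
        covered += n
        if covered / total >= coverage:
            break
    return chosen, covered / total

# Hypothetical "cloud shoulder" pixels: mostly red, some blue, a little green.
pixels = [(200, 30, 30)] * 70 + [(30, 30, 200)] * 20 + [(30, 200, 30)] * 10
top_bins, share = dominant_hues(pixels)
```

With a real image, the pixel list would come from an image library, and the background-color bins would be filtered out before the coverage step.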
1600
L. Wang
Fig. 1. Cloud shoulder color composition ratio
3 The Generation of Pattern Coloring

After analyzing the color features of the cloud shoulder, the author uses digital technology to study how to inherit these color features in new patterns. The principle is as follows: according to its proportion, each color is assigned a specific sub-range of 0 to 1, and uniformly distributed random values are then used to color the elements of a generated pattern. First, the probability of each color value in the pattern is obtained; then a color is selected randomly according to its probability. The code of this method is:
10 var colorArr:Array = [0xe1cdbe, 0xb8507d, 0x111425, 0x7997e, 0x7a8260, 0x23182];
20 var colorRateArr:Array = [0.3, 0.45, 0.91, 0.95, 0.97, 1];
30 var temp:Number = Math.random();
40 var color:uint = 0;
50 if (temp < colorRateArr[0]) color = colorArr[0];
60 else if (temp < colorRateArr[1]) color = colorArr[1];
70 else if (temp < colorRateArr[2]) color = colorArr[2];
80 else if (temp < colorRateArr[3]) color = colorArr[3];
90 else if (temp < colorRateArr[4]) color = colorArr[4];
100 else if (temp < colorRateArr[5]) color = colorArr[5];

Here colorArr:Array contains the six cloud shoulder colors expressed as hex color codes; hex codes are used mainly because a single value is more convenient than the three values of RGB. colorRateArr:Array in line 20 contains six values representing the cumulative proportions of the six colors above (mapped to 0–1); for example, the range of 0xe1cdbe is 0–0.3 and the range of 0xb8507d is 0.3–0.45. Line 30 generates a random number in 0–1 from a uniform probability distribution. Line 40 initializes the variable color, which saves the randomly selected color. Lines 50 to 100 compare the generated random number with the values in colorRateArr:Array: when it is less than the cumulative value of a color, the corresponding color in colorArr:Array is selected. Finally, the value in the variable color is the selected color, which is used to color the elements of the pattern. With this color assignment method, colors can be assigned to the elements of a pattern of any form (the elements may be of many kinds; in this paper they are circles drawn by code), and inherit the color scale
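The cumulative-threshold lookup in lines 30 to 100 generalizes to any number of colors. A Python sketch of the same idea follows; the function name and the injectable `rng` parameter are illustrative additions, not part of the paper's code:

```python
import random

COLORS = [0xe1cdbe, 0xb8507d, 0x111425, 0x7997e, 0x7a8260, 0x23182]
CUM_RATES = [0.3, 0.45, 0.91, 0.95, 0.97, 1.0]  # cumulative shares, as in line 20

def pick_color(rng=random.random):
    """Return a color with probability equal to its share of the pattern:
    the first color for r in [0, 0.3), the second for [0.3, 0.45), and so on."""
    r = rng()
    for color, threshold in zip(COLORS, CUM_RATES):
        if r < threshold:
            return color
    return COLORS[-1]  # fallback for the edge case r == 1.0
```

For long color lists, `bisect.bisect_right(CUM_RATES, r)` would replace the linear scan.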
relationship of the cloud shoulder. This process can quickly generate modern geometric patterns from the quantified cloud shoulder colors. Such patterns break through the limitations of the traditional implied cloud shoulder patterns, so cloud shoulder colors can be applied more widely and quickly. An original pattern is generated by the above method, and new pattern designs are then produced by the lattice filter, mosaic filter, stretch-twist filter, and twist-wave filter algorithms, or by combinations of several algorithms.

3.1 Dynamic Pattern Generation
Because color is inherited well in the generated patterns, it behaves like a gene: as the patterns grow, the color impression remains relatively constant. Therefore, using the Processing software, we can link variables to pattern changes and generate dynamic patterns as the variables change. Processing is a computer language developed by Casey Reas and Ben Fry of the MIT Media Lab in the United States. It mainly serves cross-domain groups between science and art and is used for developing multimedia art and presenting large-scale data visualizations. As shown in Fig. 2, with the help of digital generative art, the color inheritance method can be used in a variety of creative graphic patterns. As shown in Fig. 3, the whole process of pattern generation is displayed dynamically, so that the inheritance of color produces an interesting and pleasant experience during pattern generation.
Fig. 2. Patterns ensure differentiation while maintaining recognition
4 Cloud Shoulder Image Background Separation

First, the average L*a*b* color of the image background is determined by measurement (the background color is denoted q, with color value (Lc, ac, bc)); the main body of the cloud shoulder is then separated according to this background color. In CIE L*a*b* color space, the Euclidean distance between each pixel of the cloud shoulder image and the background color is calculated by formula (1):
Fig. 3. Interactive pattern generation
$$d(p, q) = \sqrt{(L_i - L_c)^2 + (a_i - a_c)^2 + (b_i - b_c)^2} \qquad (1)$$
where p is a pixel in the image with color value (Li, ai, bi). As the color changes from yellow to blue, the distance value d increases gradually, and the contrast between the main body of the cloud shoulder and the background in L*a*b* values increases. It can be seen that the Euclidean distance d can serve as an effective basis for segmenting object and background in the tested image. Second, based on the obtained distance values d, the threshold for automatic image segmentation is determined with the Otsu threshold segmentation algorithm, separating the main body of the cloud shoulder (white pixels) from the background.

4.1 Cluster Cloud Shoulder Color
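The separation step above (distance by formula (1), then Otsu thresholding on the distances) can be sketched as follows. This is an illustrative Python sketch: the L*a*b* values, bin count, and function names are hypothetical, and the histogram-based Otsu implementation is a generic one, not the paper's code:

```python
def lab_distance(p, q):
    """Euclidean distance between two L*a*b* colors, as in formula (1)."""
    return sum((a - b) ** 2 for a, b in zip(p, q)) ** 0.5

def otsu_threshold(values, bins=64):
    """Otsu's method: choose the cut maximizing between-class variance."""
    lo, hi = min(values), max(values)
    width = (hi - lo) / bins or 1.0
    hist = [0] * bins
    for v in values:
        hist[min(int((v - lo) / width), bins - 1)] += 1
    total = len(values)
    centers = [lo + (i + 0.5) * width for i in range(bins)]
    sum_all = sum(c * h for c, h in zip(centers, hist))
    best_t, best_var, w0, sum0 = lo, -1.0, 0, 0.0
    for i in range(bins - 1):
        w0 += hist[i]
        sum0 += centers[i] * hist[i]
        w1 = total - w0
        if w0 == 0 or w1 == 0:
            continue
        mu0, mu1 = sum0 / w0, (sum_all - sum0) / w1
        var = w0 * w1 * (mu0 - mu1) ** 2    # between-class variance
        if var > best_var:
            best_var, best_t = var, lo + (i + 1) * width
    return best_t

# Hypothetical pixels: background near (90, 0, 5), subject near (41, 29, -18).
bg = (90.0, 0.0, 5.0)
pixels = [(90, 1, 4)] * 50 + [(41, 29, -18)] * 50
dists = [lab_distance(p, bg) for p in pixels]
t = otsu_threshold(dists)
subject = [p for p, d in zip(pixels, dists) if d > t]
```

Pixels whose distance to the background exceeds the Otsu threshold are kept as the cloud shoulder body.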
The mean shift algorithm proposed by Comaniciu et al. has been widely used in computer vision fields such as image processing because of its low computational cost, simple procedure, and easy implementation, while it retains the main information of the tested image. Suppose the center point of a search window is c, the bandwidth of the kernel function K(x) is h, and N is the number of sampling points x_i (i = 1, 2, …, N, x_i ∈ X). The kernel function K(x) is used to estimate the probability density at point x, as shown in formula (2):

$$p(x) = \frac{1}{N} \sum_{i=1}^{N} K(x - x_i) \qquad (2)$$
The kernel function K(x − x_i) is usually composed of a single-valued function or a Gaussian function, where the Gaussian function is shown in formula (3):

$$K(x - x_i) = c \exp\left(-\frac{1}{2}\left\|\frac{x - x_i}{h}\right\|^2\right) \qquad (3)$$
where c is the peak value of the corresponding Gaussian curve. The mean shift points toward the direction of the densest sample points: the mean shift vector m(x) moves to where the sample density changes most relative to point x, thus following the gradient direction of the density change. The mean shift vector m(x) is then
$$m(x) = \frac{\sum_{i=1}^{N} x_i\, g\left(\left\|\frac{x - x_i}{h}\right\|^2\right)}{\sum_{i=1}^{N} g\left(\left\|\frac{x - x_i}{h}\right\|^2\right)} - x \qquad (4)$$
In the formula, g(x) = −K′(x), where K(x) is the kernel function; image mean shift clustering analysis can thus be carried out based on formula (4). In CIE L*a*b* color space, the mean shift clustering algorithm is used to segment the pixels of the cloud shoulder, and the main colors of the cloud shoulder image are then extracted from the clustering results. Previous studies have shown that the clustering bandwidth h is an important parameter of the mean shift iteration, affecting both the quality of the clustering results and the segmentation time. In this experiment, the bandwidth h is provisionally set to 0.05, and the mean color of each cluster of the cloud shoulder image is used as the label of its clustering category.
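A minimal one-dimensional mean shift iteration following formulas (2) to (4) can be sketched as below. This is illustrative only: the sample points and bandwidth are hypothetical (the paper clusters L*a*b* pixel colors with h = 0.05), and the update x ← x + m(x) is written directly as the kernel-weighted mean of the samples:

```python
import math

def mean_shift(points, x, h=1.0, iters=50, tol=1e-6):
    """Iterate x <- x + m(x) with a Gaussian kernel: each step moves x
    to the kernel-weighted mean of the samples, climbing the density."""
    for _ in range(iters):
        weights = [math.exp(-0.5 * ((x - xi) / h) ** 2) for xi in points]
        new_x = sum(w * xi for w, xi in zip(weights, points)) / sum(weights)
        if abs(new_x - x) < tol:
            break
        x = new_x
    return x

# Two 1-D clusters; different start points converge to their cluster modes.
points = [0.9, 1.0, 1.1, 4.9, 5.0, 5.1]
m1 = mean_shift(points, 0.5, h=0.5)
m2 = mean_shift(points, 5.5, h=0.5)
```

Started near either cluster, the window drifts to that cluster's center, which is exactly how the algorithm groups pixels into color categories.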
5 Conclusions

Through the measurement and analysis of the color characteristics of the cloud shoulder, quantitative color values are assigned to various patterns, achieving the goal of inheriting the traditional cloud shoulder garment in the field of color. The method proposed in this paper can be used not only for the color inheritance and innovation of the traditional clothing component, the cloud shoulder, but also for the targeted inheritance of the colors of other cultural heritage. Moreover, whereas the inheritance of traditional clothing has in the past usually considered form and color together, this study shows that it is feasible to analyze and inherit the single element of "color" through technical means. In follow-up research, we will focus on the accuracy of color feature extraction and consider introducing theoretical knowledge from color cognitive psychology into the feature extraction process.
References
1. Pan, D.: The symbol of the color of national costume. Res. Natl. Art 2, 36–43 (2002)
2. Lin, S., Hanrahan, P.: Modeling how people extract color themes from images. In: Proceedings of the SIGCHI Conference on Human Factors in Computing Systems (2013)
3. Wang, B., Yu, Y., Wong, T., et al.: Data-driven image color theme enhancement. ACM Trans. Graph. 29, 146 (2010)
4. Jiang, B.: The Chinese Dress Culture, p. 21. Guangdong People's Press, Guangzhou (2009)
The Information Strategy of University Education and Teaching Management in the Era of Cloud Computing and Big Data

Jianhua Chen and Huili Dou(&)

Jiangsu University of Science and Technology, Zhenjiang 212003, Jiangsu, China
[email protected], [email protected]
Abstract. This paper analyzes the informatization strategy of university education and teaching management in the era of cloud computing and big data. It first analyzes the current situation of university education and teaching management informatization in this era, then analyzes and designs an information management system based on a collaborative filtering algorithm and realizes its parameter setting, and finally explains and discusses, from several aspects, the implementation measures of university education and teaching management informatization in the era of cloud computing and big data, providing reference material for related research.

Keywords: Cloud computing · Big data · College education · Teaching management · Collaborative filtering algorithm
1 Introduction

With the development of economic globalization, science and technology have been developing and innovating with each passing day. At the same time, distributed recommendation algorithms have become a new direction in recommendation algorithm research. To a certain extent, this puts forward stricter requirements for the information management of education and teaching in colleges and universities. However, restricted by past educational thinking and methods, the existing education and teaching activities in colleges and universities show disadvantages in classroom teaching and personnel training, and many problems remain. In the information age, the emergence of cloud computing and big data has accelerated the pace of information-based education and teaching management in China's colleges and universities. Most education and teaching management in colleges and universities has been further improved to achieve the informatization goals of education mode, teaching resource development, teacher team development, and student management, highlighting the management effect and benefiting the sustainable progress and development of colleges and universities [1]. Therefore, in the era of cloud computing and big data, how to manage education and teaching information has become a hot issue in society. It is the only way to realize
© The Editor(s) (if applicable) and The Author(s), under exclusive license to Springer Nature Singapore Pte Ltd. 2021 M. Atiquzzaman et al. (Eds.): BDCPS 2020, AISC 1303, pp. 1604–1609, 2021. https://doi.org/10.1007/978-981-33-4572-0_235
the sustainable and healthy development of higher education to promote the informatization and scientization of teaching management.
2 The Current Situation of University Education and Teaching Management Informatization in the Era of Cloud Computing and Big Data

1. The basic conditions for the informatization of education and teaching management in colleges and universities. First of all, it should be made clear that the informatization of university education and teaching management is neither a superficial application of cloud computing and big data technology nor limited to the collection and preservation of data. In line with the trend of education reform, the modern education system is under construction, and the country is improving the informatization of education and teaching management in order to develop into a powerful human resource country. University leaders should attach importance to the objective needs of the future development of their institutions, regard the collection, collation, preservation, and use of the data generated in university operations as core work, and create an information-sharing mode within the university. On this basis, they should scientifically allocate existing education and teaching resources, tap the strong points of students, ensure their healthy and happy growth, and skillfully combine the economic development of the new era with the informatization of education and teaching in colleges and universities so that excellent practical talents are cultivated.
2. Characteristics of information management in higher education and teaching. (1) Fragmentation. Cloud computing is mainly an Internet-based mode of service provision and delivery, which usually supplies dynamic and virtualized resources over the Internet.
Big data refers to data collections that are captured and managed with software tools within a specific time; they must be handled with a new processing mode to give full play to the decision-making power and insight of big data software, and the clever combined use of cloud computing and big data technology can meet the needs of education and teaching management. In the actual process of education and teaching management in colleges and universities, the data generated are therefore somewhat fragmented. Without scientific information processing methods, the development of education and teaching informatization in colleges and universities will inevitably be affected, information management costs will increase, and the management efficiency of education and teaching will fall. (2) Multidimensionality. The education and teaching management carried out in colleges and universities presents multidimensional characteristics in the information age. Because of the diversity of student groups, there are differences in the management and operation modes of colleges and universities, covering, for example, students' book borrowing and the actual use of student
cards. Such data show the actual learning and living conditions of students in a multidimensional way, so dynamic education and teaching management is necessary; it is also the development trend of education and teaching management in colleges and universities and the basis for the healthy growth of students. (3) Continuity. The existing colleges and universities in our country bear the dual responsibility of scientific research and of education and teaching, which gives their daily information management a certain continuity. To further develop education and teaching management informatization, university managers should acquire data in real time and continuously, monitor education and teaching management dynamically, and achieve the goal of information management [2].
3. Current drawbacks of education and teaching management informatization in colleges and universities. The teaching and management information systems that many universities are configuring fall into two types: (1) Colleges and universities combine the essential needs of education and teaching and commission software companies to create a digital system meeting their needs in scientific research, student management, and teacher training. However, such systems often lack personalization and pertinence and cannot support the education and teaching management actually carried out. (2) According to existing information technology methods and their own development needs, colleges and universities scientifically draw up a plan for using the system, to promote the matching between the software system and actual needs.
However, software development makes relatively strict demands on technical methods and on the quality of the personnel themselves. Because of this, the informatization of education and teaching management in colleges and universities still has weaknesses on the software side and cannot guarantee the practicability of the software. Also, restricted by past education and teaching management means, an effective information management system has not been established, which is not conducive to the healthy future development of colleges and universities.
3 The Specific Problems to Be Solved in the Informatization of Teaching Management in Colleges and Universities

1. The information construction mechanism is not comprehensive enough. In the informatization construction of education and teaching management, the top-level design affects the future development of informatization [3]. Scientific construction plays an important role in the informatization of education management, involving not only the existing rules and regulations of colleges and universities but also the integration of human resources and other conditions with the informatization construction at this stage. At present, the organizational structure of
educational and teaching management informatization in colleges and universities is not balanced enough. The modern education center is often responsible for implementing educational informatization; some management organizations set up in colleges and universities are responsible for much of the construction work, which is popularized and promoted by the technical center. The network center handles information infrastructure construction, while the information center handles the practical application of the information construction. Some universities with advanced information technology set up corresponding construction institutions, including an information construction office. However, in the context of the big data era, the responsibility orientation of these institutions has changed somewhat: in the practical promotion of information construction, functions cross one another, which leads to data redundancy and hinders the effective development of education and teaching management informatization.
2. The level of information technology is low. With the in-depth development of information technology, various network technologies and information construction equipment exist in every university and have been popularized to a certain extent. Many universities have built high-standard multimedia classrooms and laboratories with installed monitoring, providing the conditions for the information management of scientific research and of education and teaching. However, some software systems developed by colleges and universities are only developed according to the needs of their own departments.
The system platforms used by teachers, students, and other user groups are not stable, the user experience of teachers and students is poor, the level of information technology is low, and the gap with current mainstream Internet software products is wide, which is not conducive to upgrading and maintaining the software.
4 Collaborative Filtering Algorithm

The collaborative filtering algorithm is widely used in management system platforms; based mainly on past behavior records, it recommends to user groups items that match their behavior preferences. From the historical-preference I-U matrix automatically constructed by the algorithm, the behavior choices of the n neighborhood users of item i are deduced, and the similarity based on the user group and all behaviors of the item is computed with the cosine coefficient and other algorithms. The similarity calculation used in this paper is as follows [4, 5]:

$$S(I_i, I_j) = \frac{\sum_{k \in U_{ij}} (P_{ik} - \bar{P}_I)(P_{jk} - \bar{P}_J)}{\sqrt{\sum_{k \in U_{ij}} (P_{ik} - \bar{P}_I)^2 \sum_{k \in U_{ij}} (P_{jk} - \bar{P}_J)^2}} \qquad (1)$$
Among them, S(I_i, I_j) indicates the similarity between items I_i and I_j in the system platform, U_{ij} is the intersection of the users with historical behavior preferences for I_i and for I_j, and \bar{P}_I represents the average preference for the item. The user group's preference prediction formula for the final recommendation result of item I_i is as follows:

$$P_{im} = \bar{P}_I + \frac{\sum_{k \in I} S(i, k)\,(P_{jk} - \bar{P}_J)}{\sum_{k \in I} S(i, k)} \qquad (2)$$
Filtering is then carried out according to the deduced prediction preference values, and finally the optimal choice is formed.
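A toy version of formulas (1) and (2) can be sketched in Python as follows. This is a simplified illustration: the preference matrix, function names, and the reduction of formula (2) to a list of (neighbor, similarity) pairs are all hypothetical, not the paper's system:

```python
def item_similarity(pa, pb):
    """Pearson-style similarity over users who rated both items, formula (1)."""
    common = [u for u in pa if u in pb]
    if not common:
        return 0.0
    ma = sum(pa[u] for u in common) / len(common)
    mb = sum(pb[u] for u in common) / len(common)
    num = sum((pa[u] - ma) * (pb[u] - mb) for u in common)
    den = (sum((pa[u] - ma) ** 2 for u in common)
           * sum((pb[u] - mb) ** 2 for u in common)) ** 0.5
    return num / den if den else 0.0

def predict(target, neighbors):
    """Formula (2): mean preference of the target item plus the
    similarity-weighted deviations of its neighbors' preferences."""
    base = sum(target.values()) / len(target)
    num = den = 0.0
    for prefs, sim in neighbors:
        mean_n = sum(prefs.values()) / len(prefs)
        for u, p in prefs.items():
            num += sim * (p - mean_n)
            den += sim
    return base + (num / den if den else 0.0)

# Toy I-U matrix: item -> {user: preference value}
i1 = {"u1": 5.0, "u2": 3.0, "u3": 4.0}
i2 = {"u1": 4.0, "u2": 2.0, "u3": 3.0}
sim = item_similarity(i1, i2)
```

Since i2's preferences are i1's shifted down by one, the two items are perfectly correlated and the similarity is 1.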
5 Cloud Computing and the Implementation Measures of University Education and Teaching Management Informatization in the Era of Big Data

1. Improve the network environment and management mechanism in colleges and universities. University leaders should actively improve the internal network environment, improve the effect of university network applications, reduce as far as possible the probability of network resources going unused, extend the sources of education and teaching management data, configure the network reasonably, optimize the network operating environment, and thereby reflect the effectiveness of education and teaching management informatization. Standardized training of technical personnel, the establishment of a network security management system, and timely control of viruses and hackers in the network show the value of the cloud computing and big data era for education and teaching management. In essence, the informatization construction of education and teaching in colleges and universities is a huge project: it needs the support of technical institutions on the one hand and friendly cooperation between each management institution and the university on the other. Colleges and universities can organize an informatization network environment management group and mobilize the enthusiasm of leaders, teachers, and students so that they actively participate in informatization management construction; or they can create an information construction and management unit that comprehensively serves the information mechanism and resource integration and strengthens the quality of teaching management informatization in the form of a hierarchical network.
2. Integrate the information infrastructure of higher education and teaching. From the perspective of cloud computing and big data, university leaders should build on the existing level of hardware and software, increase investment, and improve the existing education and teaching information management equipment. They should always pay attention to the information needs of education and teaching, find out the disadvantages of information management, and
deal with them by scientific means to upgrade the informatization of education and teaching management. In using cloud computing and big data technology, colleges and universities should ensure user security and create an efficient mechanism for using information technology. Based on the practical functions of the two technologies, technologists should evaluate the risks of education and teaching management informatization, scan and eliminate data with security risks, strengthen security certification, and create development space for the scientific application of cloud computing and big data technology.
6 Conclusions

To sum up, against the background of the cloud computing and big data era, the information management of education and teaching carried out by colleges and universities has very important practical significance and value. Colleges and universities should fully understand the effect of the cloud computing and big data era on education and teaching management, formulate effective education and teaching information management programs and mechanisms, and integrate information technology into education and teaching management. To realize the sustainable and healthy development of higher education, we should promote the informatization and scientization of higher education teaching management and constantly improve the informatization level of education and teaching management.
References
1. Wang, S., Ju, W., Wang, Y.: On the impact of big data and cloud computing technology on the construction of university student management informatization. China Manage. Informatization (2018)
2. Li, X.: On the information management of university archives in the era of big data. Office Bus. (23), 60 (2017)
3. Qin, W.: Information construction of Internet of Things based on cloud computing network environment and big data. Laser J. (5) (2018)
4. Rong, H., Huo, S., Hu, C., et al.: Collaborative filtering recommendation algorithm based on user similarity. J. Commun. (2), 16–24 (2014)
5. Li, Z.: Discussing cloud computing and big data to promote the integration of financial management information in colleges and universities. Adm. Assets Finan. 09, 34–35 (2018)
Real Estate Investment Estimation Based on BP Neural Network

Yuhong Chen1, Baojian Cui2, and Xiaochun Sun3(&)

1 Inner Mongolia University of Technology, Hohhot 010051, Inner Mongolia, China
[email protected]
2 Inner Mongolia Business & Trade Vocational College, Hohhot 010051, Inner Mongolia, China
[email protected]
3 Inner Mongolia University of Finance and Economics, Hohhot 010070, Inner Mongolia, China
[email protected]
Abstract. Investment estimation is an important link in the feasibility study stage of residential projects. In this paper, a genetic algorithm is used to improve a BP neural network to estimate the construction and installation cost in residential construction project investment. Taking the project data of a construction company as an example, a genetic BP neural network model is constructed. Testing against actual data shows the error is within 10%, which meets actual investment estimation needs.

Keywords: BP neural network · Genetic algorithm · Housing construction project introduction
1 The Basic Principle of BP Neural Network

A neural network is a new method of information processing: a complex network system formed by the extensive interconnection of a large number of simple neurons, proposed on the basis of modern neuroscience research results [2]. It is a large-scale, parallel-connection mechanism simulating the structure of the human brain, with adaptive modeling and learning functions. Among the various neural network models, the BP neural network model has good self-learning and self-association functions. The standard BP neural network model consists of three kinds of neuron layers: the bottom layer is called the input layer, the middle layer is the hidden layer, and the top layer is the output layer. Neurons in adjacent layers are fully connected, while neurons within the same layer have no connections. The learning process of the BP algorithm is composed of forward propagation and backpropagation. In forward propagation, input information is transmitted and processed from the input layer to the hidden layer, and the state of each layer only affects the state of the next layer [1]. If the desired output cannot be obtained at the output layer, the error signal is returned along the original connection path; by modifying the values of the connection weights between layers, the error signal will

© The Editor(s) (if applicable) and The Author(s), under exclusive license to Springer Nature Singapore Pte Ltd. 2021
M. Atiquzzaman et al. (Eds.): BDCPS 2020, AISC 1303, pp. 1610–1615, 2021. https://doi.org/10.1007/978-981-33-4572-0_236
be transmitted back toward the input layer one layer at a time; then, through the forward propagation process again, the repeated alternation of these two processes reduces the error until the requirements are met. See Fig. 1 for the specific structure [2].
Fig. 1. General topology of BP network
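The two alternating passes described above can be sketched for a tiny network as follows. This is an illustrative Python sketch (a 2-input, 3-hidden, 1-output sigmoid network with squared-error loss and hypothetical training values), not the paper's estimation model:

```python
import math, random

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

def train_step(x, target, w_ih, w_ho, lr=0.5):
    """One forward pass plus one backpropagation weight update."""
    # Forward propagation: input -> hidden -> output.
    h = [sigmoid(sum(wi * xi for wi, xi in zip(row, x))) for row in w_ih]
    y = sigmoid(sum(wo * hi for wo, hi in zip(w_ho, h)))
    # Backpropagation: the error signal flows output -> hidden -> input weights.
    delta_o = (y - target) * y * (1 - y)
    for j, hj in enumerate(h):
        delta_h = delta_o * w_ho[j] * hj * (1 - hj)
        w_ho[j] -= lr * delta_o * hj
        for i, xi in enumerate(x):
            w_ih[j][i] -= lr * delta_h * xi
    return y

random.seed(0)
w_ih = [[random.uniform(-1, 1) for _ in range(2)] for _ in range(3)]
w_ho = [random.uniform(-1, 1) for _ in range(3)]
errs = []
for _ in range(2000):
    y = train_step([1.0, 0.5], 0.2, w_ih, w_ho)
    errs.append(abs(y - 0.2))
```

Repeating the two passes drives the output toward the target, mirroring how the error shrinks until the requirements are met.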
2 Limitations of the BP Algorithm

Although the BP algorithm has been widely used in neural networks, it has limitations and shortcomings. The main problems are as follows: (1) The BP algorithm is an error backpropagation algorithm that follows the direction of gradient descent of the error function. Its process is to modify the weights and thresholds of the neural network layer by layer, from the output layer through the middle layer to the input layer. The algorithm is essentially a local search algorithm and lacks global search ability. (2) The graph of the global error function E of a BP neural network is a multidimensional surface, like a bowl whose bottom is the minimum point. However, because the surface of the bowl is uneven, the BP algorithm may sink into a small valley (i.e., a local minimum) during training; from such a point, movement in every direction increases the error, so training cannot jump out of the local minimum [3].
3 Characteristics of the Genetic Algorithm

The genetic algorithm (GA) originated from computer simulation of biological systems. In the 1960s, Professor Holland and his students at the University of Michigan in the United States, inspired by biological simulation techniques, created the genetic algorithm: an adaptive probabilistic optimization technique based on the mechanisms of biological genetics and evolution, suitable for the optimization of complex systems.
1612
Y. Chen et al.
The genetic algorithm maps the solution space of the problem to a genetic space; that is, every possible solution is encoded as a vector (called a chromosome or an individual), and every element of the chromosome is called a gene. At the beginning of the algorithm, some chromosomes are randomly generated and their fitness is calculated. Then, according to fitness, genetic operations such as selection, crossover, and mutation are carried out on the chromosomes; those with low fitness are removed and those with high fitness are kept, yielding a new population. In this way, the genetic algorithm iterates repeatedly and evolves towards better solutions until it meets a predetermined optimization index and obtains the optimal solution to the problem. The flow chart of the genetic algorithm is shown in Fig. 2.
Fig. 2. Operation flow of the genetic algorithm
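The loop of Fig. 2 can be sketched on a stand-in problem. The following toy Python example runs the cycle described above — evaluate fitness, remove low-fitness chromosomes, apply crossover and mutation, stop at a predetermined optimization index. The population size, rates, and the "onemax" fitness function are all illustrative assumptions.

```python
import random

def fitness(chrom):
    # stand-in objective: maximize the number of 1-genes ("onemax")
    return sum(chrom)

random.seed(1)
# randomly generate the initial chromosomes
pop = [[random.randint(0, 1) for _ in range(20)] for _ in range(30)]

for generation in range(60):
    pop.sort(key=fitness, reverse=True)
    if fitness(pop[0]) == 20:          # predetermined optimization index
        break
    survivors = pop[:15]               # remove low-fitness chromosomes
    children = []
    while len(children) < 15:
        a, b = random.sample(survivors, 2)
        cut = random.randint(1, 19)
        child = a[:cut] + b[cut:]      # single-point crossover
        if random.random() < 0.1:      # mutation with small probability
            i = random.randrange(20)
            child[i] = 1 - child[i]
        children.append(child)
    pop = survivors + children         # new population

best = max(pop, key=fitness)
```

Because the fittest chromosomes always survive, the best fitness never decreases from one generation to the next, which is why the loop converges toward the optimum.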
4 Design of the Genetic BP Hybrid Algorithm

Because the genetic algorithm has global optimization ability while the BP algorithm can perform an accurate local search, a hybrid genetic BP algorithm designed around the characteristics of the two algorithms can be used to train the weights and thresholds of a neural network. The design flow of the genetic BP hybrid algorithm is shown in Fig. 3. It consists of two main steps: (1) optimize the initial weights and thresholds with the genetic algorithm, locating a promising search region in the solution space; (2) search that region with the BP algorithm to obtain the final weights and thresholds. The BP algorithm easily drives the neural network into a local minimum, chiefly because its initial training parameters are randomly
Fig. 3. Design flow of genetic BP hybrid algorithm
given. Training the neural network with a hybrid of the genetic algorithm and the BP algorithm can overcome this problem.
5 Application of the Genetic BP Neural Network to Investment Estimation of Housing Construction Projects

Investment estimation and analysis of residential construction projects rests on the similarity between such projects. For a project to be estimated, one first analyzes its building type and project characteristics; then finds, among many similar completed projects, the several most similar to the proposed project; then uses the cost data of these similar projects as the raw data for reasoning; and finally obtains the investment estimate and other relevant data for the proposed project. Taking the investment cost estimation of a residential project of a construction company in Beijing as an example, this paper collects data on construction projects completed in recent years, selects 14 typical projects as the learning sample set, and establishes a genetic BP neural network investment estimation model. (1) Quantitative description of project characteristic factors. Project characteristics are the important factors that represent the project's features and reflect its main cost composition. Their selection should refer to statistics and analysis of existing project data and be determined with expert experience. (2) Establishment of the investment estimation model based on the genetic BP neural network. The model adopts a three-layer BP network; the sigmoid function is selected as the activation function of the hidden layer nodes, and a linear function is
selected as the activation function of the output layer nodes. The model has 6 input units, representing the feature vector of the project: building area, structure type, number of floors, floor height, number of households, and decoration type. The output is the construction and installation cost per square meter. The initial weights and thresholds are optimized by the genetic algorithm. Because the BP neural network is fully connected, with 15 hidden nodes the number of network weights is 105 (6 × 15 + 15 × 1) and the number of thresholds is 16 (15 + 1). The weights and thresholds of the neural network are encoded as follows. Let the number of input nodes be $i$, the number of hidden layer nodes $j$, and the number of output nodes $k$.

Weight matrix between the input layer and the hidden layer:

$$W = \begin{pmatrix} w_{11} & w_{12} & \cdots & w_{1i} \\ w_{21} & w_{22} & \cdots & w_{2i} \\ \vdots & \vdots & \ddots & \vdots \\ w_{j1} & w_{j2} & \cdots & w_{ji} \end{pmatrix}$$

Weight matrix between the hidden layer and the output layer:

$$V = \begin{pmatrix} v_{11} & v_{12} & \cdots & v_{1j} \\ v_{21} & v_{22} & \cdots & v_{2j} \\ \vdots & \vdots & \ddots & \vdots \\ v_{k1} & v_{k2} & \cdots & v_{kj} \end{pmatrix}$$

Threshold vector of the hidden layer:

$$b = (b_1, b_2, \ldots, b_j)^{T}$$

Threshold vector of the output layer:

$$t = (t_1, t_2, \ldots, t_k)^{T}$$

In the process of searching, the genetic algorithm uses no external information; it relies only on the fitness function. Individuals with high fitness are more likely to be inherited by the next generation, and individuals with low fitness are less likely. Therefore, the choice of fitness function greatly influences whether the genetic algorithm finally converges to the optimal solution and how fast it converges. Because an individual in this paper is a combination of
weights and thresholds, the merit of an individual can be judged only through those weights and thresholds, by computing the error E between the actual output of the network and the real value. If E is large, the individual cannot accurately reflect the relationship between input and output and may be eliminated; that is, its fitness is low. Conversely, its fitness is higher. Therefore, in this paper the individual fitness function of the genetic algorithm is defined as $F = 1/E$, where

$$E = \sum_{k=1}^{n} \sum_{j=1}^{p} (y_{kj} - o_{kj})^2$$

($n$ is the number of training samples, $p$ is the number of output nodes, and $y_{kj} - o_{kj}$ is the error of the $k$-th sample with respect to the $j$-th output unit).

(3) Analysis of test results. The converged network is used to test the data of the 13th and 14th groups. The estimated construction and installation costs are 3307.1 and 2079.4 yuan per square meter, respectively, and the relative error between the actual and predicted values is less than 10%. The test results show that the overall error is small and can meet the investment estimation needs of the feasibility study stage of residential construction projects. This indicates that the model generalizes well and the estimation model is successful.
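To make the encoding and the fitness definition concrete, the following Python sketch flattens the 6-15-1 network's weights and thresholds into a single chromosome of length 105 + 16 = 121 and evaluates $F = 1/E$. The function names, the sigmoid/linear layer pairing, and the small guard constant in the denominator are illustrative assumptions.

```python
import numpy as np

I, J, K = 6, 15, 1   # input, hidden, and output node counts from the paper

def decode(chrom):
    """Split a flat chromosome back into W (hidden x input), V, b, t."""
    idx = 0
    W = chrom[idx:idx + J * I].reshape(J, I); idx += J * I
    V = chrom[idx:idx + K * J].reshape(K, J); idx += K * J
    b = chrom[idx:idx + J]; idx += J
    t = chrom[idx:idx + K]
    return W, V, b, t

def forward(chrom, X):
    """Sigmoid hidden layer, linear output layer, as in the model."""
    W, V, b, t = decode(chrom)
    h = 1.0 / (1.0 + np.exp(-(X @ W.T + b)))
    return h @ V.T + t

def ga_fitness(chrom, X, Y):
    """F = 1/E with E the summed squared output error."""
    E = np.sum((Y - forward(chrom, X)) ** 2)
    return 1.0 / (E + 1e-12)   # guard term avoids division by zero
```

A GA would evolve such 121-element chromosomes directly, ranking individuals by `ga_fitness` exactly as the text describes.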
6 Conclusions

The nonlinear mapping ability of the BP neural network and its capacity to approximate arbitrary functions have gradually earned it attention in economic modeling research. Improving the BP neural network with a genetic algorithm avoids falling into local minima while keeping a high convergence speed. This paper uses a genetic BP neural network to automatically extract the regular relationship between housing project characteristics and cost estimates from a large amount of past estimation data for housing construction projects, and establishes a genetic BP neural network model for investment estimation of housing construction projects.
References

1. Su, X.: Master MATLAB 6.0 and its Engineering Application. Science Press, Beijing (2002)
2. Shuang, C.: Neural Network Theory and Application with the MATLAB Toolbox. China University of Science and Technology Press, Beijing (2003)
3. Lei, Y., Zhang, S.: MATLAB Genetic Algorithm Toolbox and its Application. Xi'an University of Electronic Science and Technology Press, Xi'an (2005)
An Analysis of the Construction of Teaching Evaluation Model Under the Framework of Web: Taking Educational Psychology as an Example Yazhuo Fu(&) Xi’an Peihua University, Xi’an 710125, Shaanxi, China [email protected]
Abstract. The construction and application of the task-driven teaching mode, with periodic task evaluation as a supplement, forms a direct evaluation of the teaching effect of educational psychology in colleges and universities and provides objective, reliable information on that effect. The core of the overall task evaluation is to analyze the results of psychology teaching in colleges and universities so as to effectively establish the influencing factors.

Keywords: Educational psychology in colleges and universities · Task-driven · Teaching effect · Application analysis
1 Introduction

According to the different teaching stages, as teaching activities deepen, the difficulty and complexity of teaching tasks gradually increase, and the embodiment of the teaching effect needs a specific evaluation method as its carrier to evaluate phased teaching results. In the teaching of educational psychology in colleges and universities, the task-driven teaching mode takes phased task evaluation as an important component of the model, so as to show the definiteness of the teaching task, form an effective evaluation of its completion, and reflect an active, comprehensive, phased, and systematic evaluation of the teaching organization process and teaching development methods. This supplements the thinking on teaching development for educational psychology courses in colleges and universities, better reflects the authenticity of teaching tasks, fully displays the driving function of the teaching mode, and provides a complete source of evaluation information about the actual teaching situation [1–3].
© The Editor(s) (if applicable) and The Author(s), under exclusive license to Springer Nature Singapore Pte Ltd. 2021 M. Atiquzzaman et al. (Eds.): BDCPS 2020, AISC 1303, pp. 1616–1621, 2021. https://doi.org/10.1007/978-981-33-4572-0_237
2 Take the Overall Task Evaluation as the Core, and Analyze the Teaching Results of Psychology in Colleges and Universities

Based on the integrity of teaching tasks, an objective evaluation of the teaching activities of the psychology discipline in higher education should be formed for its teaching achievements. First, the authenticity of teaching tasks, the objectivity of teaching difficulty, and the repetitiveness of the teaching process should be made clear. The authenticity of teaching tasks reflects that, in the teaching process, teaching pressure is high and so is the effect of forming and transforming students' psychology. The objectivity of teaching difficulty indicates that forming students' correct psychology is a complex process and the teaching task is heavy. The repetitiveness of the teaching process points out that teaching should be gradual and the periodicity of teaching time should be strictly controlled [3, 4]. A comprehensive evaluation is then carried out across teaching task, teaching difficulty, and teaching process, using the basic nature of subject teaching to fully reflect the teaching results: to analyze specifically how far the teaching task has been completed, to evaluate teaching difficulty accurately, to assess the value of the teaching process, and to conclude whether the completion of the teaching task is consistent with the teaching goal. In this way the basic characteristics of the task-driven teaching mode are formed, help is provided for testing the teaching effect of the psychology discipline in higher education, the overall task-based evaluation objectives and standards are achieved, the determination of the overall development direction of educational psychology in higher education is constantly strengthened, and the integrated evaluation function of the teaching task is maximized [5–7].
3 Focusing on the Analysis of Objective Tasks, Grasp the Direction of Psychology Teaching in Colleges and Universities as a Whole

Taking teaching quality as the goal comprehensively strengthens satisfaction with psychology teaching results in colleges and universities. The quality of teaching determines the teaching results and has a direct effect on the completion of teaching tasks. From the perspective of teaching quality, applying the task-driven teaching mode to the psychology discipline in higher education puts forward specific requirements for teaching methods, teaching approaches, and the construction of the teaching atmosphere. It takes the coordination of students' cognitive psychology as the basic goal of teaching, fully explains the fundamental significance of students' mental health, and achieves the core purpose of teaching quality. The excellence of teaching results reflects, from the side, the completion of teaching tasks and whether the guidance of students' healthy psychology can meet the objectives' requirements. Following the objective principle of teaching tasks, the task-driven teaching mode effectively excavates its driving force to achieve the basic
purpose of students’ satisfaction, teachers’ satisfaction, and social satisfaction, and to provide a task basis for exploring the many teaching modes of educational psychology, such as inquiry-based learning and cooperative teaching. Based on the authenticity of tasks, this paper objectively expounds the necessity of research on the teaching methods, teaching models, and teaching approaches of the educational psychology discipline in colleges and universities, so as to promote its development (Fig. 1).
Fig. 1. Diversified development measures of Psychology Teaching
Psychology teaching research and teaching development can be unified with the psychological demands of college students, providing positive impetus for continuously improving satisfaction with teaching results. Based on long-term goals, the overall direction of psychology teaching in colleges and universities can be analyzed. Setting long-term development goals guides the overall trend of the subject's teaching and makes the teaching purpose and overall teaching plan more specific, which is significant for sustainable development. Starting from the difficulty of the teaching task of the psychology discipline in higher education, the cognitive range of correct mental health for students of educational psychology is hard to measure effectively, and the difficulty coefficient of teaching is relatively high. Combining these two aspects, teaching means centered on exploratory and situational teaching make the integration of teaching content comprehensive and deep, and provide the premise for continuously enhancing the psychological adaptability of students of educational psychology in colleges and universities. By constantly broadening the perspective of psychological research and continuously infiltrating educational psychology, students' mental health, and the prevention of mental diseases into college education, the learning process can achieve the basic teaching purpose grounded in students' psychological cognition.
This is the core of long-term goal setting, and it is also the key to enabling educational psychology in colleges and universities to form sustainable, cyclical development. Comprehensively investigating students' psychological factors in the teaching process effectively drives the deep excavation of the authenticity of teaching tasks and keeps the direction of psychology teaching highly clear. At the same time, it also plays a leading role in research on teaching content and teaching methods.
4 The Analysis of the Key Points and Difficulties, and the Embodiment of the Teaching Ability and Value of Psychology in Colleges and Universities

With the cultivation of emotional value as the key point, college psychology helps students set emotional targets correctly. Cultivating the emotional value of college educational psychology is the key point of educational psychology teaching and forms an effective safeguard for students' mental health. The goal of emotion is to form an accurate understanding of self-psychology in educational psychology and to lay a solid foundation for students to express correct psychological feelings, which plays a decisive role in the overall process and trend of educational psychology teaching. The task-driven teaching mode, through the complexity of teaching difficulty, comprehensively improves students' emotional values; it takes the purpose of teaching tasks as an opportunity to effectively decompose them, actively analyzes students' emotional values, and, through experiential forms of teaching organization, lets students clearly understand what emotional psychology is, so that emotional value is effectively and continuously established. This purifies students' psychological development, can finally realize the application of the task-driven teaching mode of educational psychology, and gradually eases the establishment of students' emotional goals, giving the fullest play to the application of the task-driven teaching mode of educational psychology in colleges and universities. Actively weakening the influence of psychological stereotypes on students' psychological patterns addresses an instinct of each individual's psychological development, fostered by daily psychological habits, with the transformation of psychological orientation as the difficulty.
According to the general law of the formation of psychological stereotypes, and starting from psychological orientation, the task-based teaching mode of the psychology discipline in colleges and universities actively weakens students' psychological stereotypes, promotes the more accurate formation of students' social psychology, value psychology, and goal psychology, and provides impetus for forming students' correct social development psychology and social cognitive psychology. Psychological orientation is a teaching difficulty in educational psychology in colleges and universities and should be effectively addressed in the task-driven teaching mode. With task-based difficulty as the driving means, it can form the
recurrent effect of the teaching cycle, so that students' psychological stereotypes are constantly suppressed. Combined with research-based and inquiry-based forms of teaching organization, it effectively inhibits students' psychological stereotypes while promoting correct cognitive psychology in the teaching process, keeping value orientation flexible, and positively influencing the change of their psychological-cognitive direction, thereby further enhancing the practical value of the task-based education model in teaching activities. With the cultivation of value psychology as the key, the function of college psychology in accurately guiding students' social value psychology under the task-driven teaching mode is mainly to excavate deep evidence for the teaching value, to positively promote the continuous development of students' social value psychology, and to carry out specific research on the role of educational psychology in the transformation of social consciousness psychology, so as to guarantee the accuracy of the formation of students' social value psychology. The task-driven teaching mode addresses the complexity of the formation of cognitive psychology and the obstacles in its transformation. It is designed to cultivate students' value psychology, guide students' values and outlook on life scientifically in results-oriented teaching, and encourage students to actively regulate the formation of their social development psychology. Through the correct formation of social development values, students can deeply understand the social development function of educational psychology, continuously deepen their understanding of mental health, and further enhance their ability to judge mental health.
In this way, while the key points of educational psychology teaching in colleges and universities are constantly consolidated, an effective breakthrough in the difficult points of teaching is also formed, and the formation of students' psychological and emotional factors in the teaching process is correctly cultivated. With the establishment of goal psychology as a supplement, colleges and universities help students fully understand that the key to establishing the teaching direction of psychology is to achieve goal psychology, so that teaching activities carry a strong sense of belonging and the corresponding psychological implications for students. The task-driven teaching mode focuses on the authenticity of teaching tasks and promotes the formation of teaching programs and teaching strategies. The establishment of goal psychology is to maximize the roles of students, teachers, and environmental factors in the teaching process according to the actual situation of teaching objectives and tasks. Teaching means and forms of teaching organization help students form a correct cognition of mental health and can effectively guide students to achieve the goals of the curriculum. This is the characteristic of the effective combination of psychology teaching in higher education; it is the key to carrying out effective teaching activities; and it is the concrete expression of scientifically optimizing the content arrangement, teaching settings, and other aspects of educational psychology. It positively influences the establishment of the overall goal of the development of educational psychology, acts directly on the teaching effect, and is the key to the development of educational psychology. A clear, scientific basis for students' life development goals lays a solid foundation for expanding the advancement of teaching concepts and the diversification
of teaching methods. The teaching characteristics of educational psychology in colleges and universities are to effectively stimulate the formation of students' goal psychology, establish students' emotional and value goals, and comprehensively cultivate students' ability to judge mental health effectively, so that students continuously deepen their cognition of social psychology and value psychology. The task-based teaching mode mainly combines the above requirements to cultivate students' abilities and consciousness in an all-round way, highlights the driving role of the authenticity of tasks in educational psychology in colleges and universities, and provides external impetus for the continuous strengthening of the teaching effect, so that its application value can be further exerted and the teaching mode can positively promote the development of educational psychology in colleges and universities.
References

1. Bo, J.: The dilemma and way out of the construction of the psychology curriculum in vocational education. Vocat. Educ. Ind. Tech. Educ. 2014(32), 80–83 (2014)
2. Can, L.: New thinking on the development of educational psychology in the postmodern period. Heilongjiang High. Educ. Res. 2015(3), 114–116 (2015)
3. Wu, J.: Approaches to effective teaching methods and design: Paul Kirschner, an internationally renowned professor of educational psychology. Open Educ. Res. 5, 4–11 (2013)
4. Chen, Y., Qin, A., Wu, J.: Theory of contemporary educational-psychological learning styles and models. J. Inner Mongolia Normal Univ. Educ. Sci. Edition 2, 38–40 (2013)
5. Bai, J.: Quality problems of college teaching materials and solutions: Li Xue as an example. Hebei Xue LJ 2014(6), 238–241 (2014)
6. Tang, Y.: The advantages and development of "micro class": based on the view of educational psychology. J. South China Normal Univ. Soc. Sci. Edition 6, 84 (2014)
7. Mei, Y.: Research on the development of educational psychology of self-management. J. Inner Mongolia Normal Univ. Educ. Sci. Edition 4, 54–56 (2013)
Design and Implementation of Automatic Evaluation System in College English Writing Teaching Based on ASP.Net Guo Jianliang(&) Nanchang Institute of Technology, Nanchang 330044, China [email protected]
Abstract. The development and application of an automatic evaluation system for College English writing teaching is conducive to the construction of online courses and the reform of teaching methods and means. This paper designs the framework and functions of an automatic evaluation system for College English writing teaching, establishes the system database, and finally realizes the system using ASP.NET, SQL Server, IIS, CSS, jQuery, and other technologies. The system mainly implements the upload of teachers' courseware and teaching documents, students' independent learning, after-class communication, homework submission, question answering, and testing functions, and can serve as a reference for the construction of open teaching platforms in schools.

Keywords: Automatic evaluation system for College English writing teaching · ASP.NET · Open teaching
1 Introduction

At present, most automatic evaluation systems for College English writing teaching are school-oriented, with powerful features but inflexible configuration. The goal of this system is to build a network-based teaching platform for the teachers and students of a department. Through this system, teachers can put curriculum resources (including courseware and course videos) on the platform, assign and grade homework, publish announcements, and answer questions. Students can complete their learning through the system: watch teaching videos online, download and upload homework, and leave messages for teachers; they can learn online and escape the limitations of traditional classroom teaching. The system is easy to deploy and flexible to use. It realizes the modern teaching idea of taking students as the main body and teachers as the guide, changes the traditional teacher-centered model of teachers speaking while students listen and memorize, and strengthens the information exchange between teachers and students and among students. To realize online teaching and improve teaching efficiency, the teaching website should give full play to its role and significance and promote the construction and development of
© The Editor(s) (if applicable) and The Author(s), under exclusive license to Springer Nature Singapore Pte Ltd. 2021 M. Atiquzzaman et al. (Eds.): BDCPS 2020, AISC 1303, pp. 1622–1626, 2021. https://doi.org/10.1007/978-981-33-4572-0_238
curriculum teaching, providing teachers and students with an automatic evaluation system for College English writing teaching based on the B/S mode [1, 2].
2 Architecture and Function of the System

2.1 System Architecture
In College English writing teaching, the automatic evaluation system needs to realize online learning and different deployment modes for teachers and students. Therefore, the system architecture adopts B/S mode. The whole architecture is divided into three layers: the user interface layer, the business logic layer, and the data layer. As shown in Fig. 1.
Fig. 1. System structure
The first layer is the user interface layer. This layer is the interface between the user and the whole system; the browser renders the user program as a page for human-computer interaction. The information entered by the user on the page is submitted to the background database through the logic layer. After the background database responds, the result is fed back to the user through the logic layer and displayed on the page. The users of this system are mainly divided into administrators, teachers, and students [3]. Administrators can carry out background management, modify or add teacher members and courses, and issue notices. Teachers can view all courses taught, introduce relevant course content, upload courseware materials, grade students' homework, reply to students' messages, publish announcements about the course, and so on. Students can view the course introduction, browse the courseware, watch teaching videos, download assignments assigned by the teacher, submit assignments, and view their scores. The second layer is the business logic layer. It is the middle layer of the three-tier structure, connecting the layers above and below it. This layer mainly realizes all
business logic functions of the whole system and responds to and processes the data fed back from the user interface layer; it is the core of the whole system. The third layer is the data layer. This system uses a SQL Server database. The data layer completes the interaction between the system and the database, such as data query, update, insert, and delete. All the data used in the system are stored here.

2.2 Functions of the System
According to the purpose and structure of the system, the function of the automatic evaluation system in College English writing teaching is divided into two modules: the foreground display module and the background management module. As shown in Fig. 2. The front display module mainly includes course introduction, teaching team browsing, teaching resources display, assignment management, troubleshooting, and other columns. The background management module mainly includes course introduction management, course resource management, user management, and other management functions.
Fig. 2. Function diagram of automatic evaluation system in College English writing teaching
The foreground display system provides the basic course introduction, inquiry and browsing of the teaching team, display of teaching resources, assignment management, and troubleshooting. In the foreground display module, the course introduction presents the history, content, syllabus, and teaching methods of the course. Teaching team browsing introduces the teachers' personal backgrounds so that students gain a general understanding of them. The teaching resources display is used to consult courseware, teaching plans, exercise databases, teaching videos, and so on. Assignment management offers different functions to different users: students can use it to download, submit, and view excellent assignments, while teachers can use it to assign, grade, and select excellent homework. Troubleshooting provides a
platform for answering questions. It is used for students to ask questions and teachers to answer questions, which promotes the communication between teachers and students. The background management system is only open to administrators and teachers to realize the management of the whole teaching system.
3 Construction of the System Database
The database of the automatic evaluation system for College English writing teaching is the storage space for the teaching content and plays an important role in the construction of the teaching system. Considering data storage and other factors, the database used in the system is Microsoft SQL Server 2008. Microsoft SQL Server is a medium-sized server-side database suitable for large-scale data applications, with strong capabilities in processing efficiency for large data volumes and in the flexibility and scalability of back-end development. The main tables in the system database include the curriculum table, with fields such as EC_ID (curriculum number), EC_name (curriculum name), EC_year (opening time), EC_class (opening class), and EC_JC (teaching materials used); the teacher table, with main fields t_Id (teacher number), t_name (teacher name), t_sex (teacher gender), t_ZC (teacher title), etc.; and the student information table, with main fields u_ID (student number), name (student name), sex (gender), CLass_ID (class), etc. The relationships between the main tables are shown in Fig. 3. In addition to the student, teacher, and course tables, PPT files and video files also need to be stored in the system. PPT files are uploaded by teachers from the background management system, stored as files, and then converted into HTML format through a defined conversion function so that they can be viewed on the pages of the foreground display system. Teaching videos are stored directly in the system; for now only RMVB format is supported, and videos are played through a RealPlayer playback box.
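The table layout just described can be mocked up as follows. This is an illustrative sketch in SQLite rather than the paper's actual SQL Server 2008 schema: the column types, the sample rows, and the t_Id link from the curriculum table to the teacher table are assumptions made only so the relationships can be exercised.

```python
import sqlite3

# In-memory stand-in for the paper's SQL Server tables.
# Field names (EC_ID, t_Id, u_ID, ...) follow the paper; types are assumed.
conn = sqlite3.connect(":memory:")
cur = conn.cursor()

cur.executescript("""
CREATE TABLE teacher (
    t_Id   TEXT PRIMARY KEY,   -- teacher number
    t_name TEXT NOT NULL,      -- teacher name
    t_sex  TEXT,               -- teacher gender
    t_ZC   TEXT                -- teacher title
);
CREATE TABLE curriculum (
    EC_ID    TEXT PRIMARY KEY, -- curriculum number
    EC_name  TEXT NOT NULL,    -- curriculum name
    EC_year  TEXT,             -- opening time
    EC_class TEXT,             -- opening class
    EC_JC    TEXT,             -- teaching materials used
    t_Id     TEXT REFERENCES teacher(t_Id)  -- assumed link to the teacher
);
CREATE TABLE student (
    u_ID     TEXT PRIMARY KEY, -- student number
    name     TEXT NOT NULL,    -- student name
    sex      TEXT,             -- gender
    CLass_ID TEXT              -- class
);
""")

cur.execute("INSERT INTO teacher VALUES ('T01', 'Li Ming', 'M', 'Lecturer')")
cur.execute("INSERT INTO curriculum VALUES "
            "('EC01', 'College English Writing', '2020', 'Class 1', "
            "'New Horizon', 'T01')")

# A typical query across the relationship: which teacher gives which course.
cur.execute("""
SELECT c.EC_name, t.t_name
FROM curriculum c JOIN teacher t ON c.t_Id = t.t_Id
""")
print(cur.fetchall())   # [('College English Writing', 'Li Ming')]
```

The join above is the kind of lookup the foreground pages would issue when listing a course together with its teacher.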
4 Implementation of the System
According to the overall functional framework, the system adopts the currently popular B/S three-tier architecture, which divides the system into a data access layer (dao), a business logic layer (biz), and a presentation layer (UI). The automatic evaluation system for College English writing teaching consists of the foreground user display system and the background management system. It is characterized by centralized database management and a network-based user base: there are management users (teachers or system administrators) and public users (students) on the Internet. Therefore, in the process of system integration, a loose integration mode oriented to database sharing is adopted.
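The three-tier split (UI / biz / dao) can be illustrated with a minimal sketch. All class and method names below are invented for illustration and are not taken from the system's actual code; the point is only that each layer talks exclusively to the layer beneath it.

```python
class AssignmentDao:
    """Data access layer (dao): all storage reads and writes live here."""
    def __init__(self):
        self._store = {}            # stand-in for the SQL Server tables

    def save(self, student_id, text):
        self._store.setdefault(student_id, []).append(text)

    def find(self, student_id):
        return self._store.get(student_id, [])


class AssignmentBiz:
    """Business logic layer (biz): validation and rules, no storage details."""
    def __init__(self, dao):
        self._dao = dao

    def submit(self, student_id, text):
        if not text.strip():
            raise ValueError("empty assignment")
        self._dao.save(student_id, text)
        return len(self._dao.find(student_id))


class AssignmentUi:
    """Presentation layer (UI): formats results for the browser."""
    def __init__(self, biz):
        self._biz = biz

    def handle_submit(self, student_id, text):
        n = self._biz.submit(student_id, text)
        return f"Student {student_id} now has {n} submission(s)."


ui = AssignmentUi(AssignmentBiz(AssignmentDao()))
print(ui.handle_submit("2020001", "My essay on smart cities."))
# Student 2020001 now has 1 submission(s).
```

Because the UI layer never touches storage directly, the dao could be swapped from an in-memory dictionary to the real database without changing the upper layers, which is the main benefit the three-tier design aims at.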
Fig. 3. Design diagram of the database
5 Conclusion
The automatic evaluation system for College English writing teaching provides course introduction, uploading of courseware, videos, and other teaching resources, students' independent learning, homework management, troubleshooting, teacher-student interaction, and other functions. The framework and solution for an asp.net-based automatic evaluation system for College English writing teaching provided in this paper can serve as a reference for schools that are about to establish such a system.
References
1. Yang, Y.: To implement quality education, we must reform the uniform teaching mode. In: Liu, X. (ed.) Theory and Practice of Quality Education in China, pp. 121–122. Reform Press, Beijing (2000)
2. Cai, H.: Analysis and design of a network teaching system based on asp.net. Dalian University of Technology, Dalian (2005)
3. Jiang, D., Tao, C., Shen, P.: Design of Tsinghua University campus network teaching system. China Data Communication Network 5, 5–7 (2000)
Big Data Service of Financial Law Based on Cloud Computing Yaqian Li(&) Guangzhou City Construction College, Guangzhou 510925, Guangdong, China [email protected]
Abstract. With the explosive growth of data volume in the field of financial law, how to fully tap the value of big data and how to handle big data computing tasks in a timely manner, so as to provide users with high-value, high-efficiency services, has become a problem requiring in-depth study. Drawing on empirical research and a large body of financial data, this paper describes the impact of the legal environment on financial development, their interaction, and the actual effect of the financial legal environment on financial development under different legal systems. In the era of big data, the processing models and application technologies of financial legal analysis services face new challenges; further study of service models, composition methods, and big data processing technology is urgently needed to meet requirements for service value and timeliness. This paper first designs the model, processing flow, and system framework of a financial big data analysis service, then studies the composition method of multi-perspective learning and the key technology of task scheduling. On this basis, a big data analysis service system for securities is realized, which can be fully used in financial supervision, financial law enforcement, and judicial and financial legal services to improve the financial legal environment.
Keywords: Big data analysis service system · Cloud computing · Data protection · Financial law
1 Introduction
A market economy is a rule-of-law economy, and financial development requires a sound legal environment. A good legal environment is a necessary condition for financial development, and legal system construction is the soul of financial market construction. A sound legal system can regulate the operation of financial institutions, improve their internal control, prevent and resolve financial risks, optimize the financial structure, strengthen financial functions, promote innovation by financial institutions and new financial business, prevent vicious financial competition, and enhance the stability and adaptability of the financial ecology. It can also reduce the transaction costs of financial activities, improve the efficiency of financial transactions, and promote the growth of transactions [1]. The conditions for financial development can generally be understood as an organic whole formed by the external political, economic, legal, social, and cultural environments on which financial organizations depend for survival, competition, and development in the course of their mutual connection and dynamic evolution; among these, the legal environment is one of the key factors for the success of financial development. In a complete sense, the financial legal environment mainly includes five aspects: the legal system for financial subjects, the legal system for financial market operation, the legal system for financial security, the legal system for credit, and judicial, law enforcement, and legal services. The first three constitute the external institutional environment on which the financial sector relies for survival and development, which to a certain extent determines the sustainability and effectiveness of financial development.

© The Editor(s) (if applicable) and The Author(s), under exclusive license to Springer Nature Singapore Pte Ltd. 2021
M. Atiquzzaman et al. (Eds.): BDCPS 2020, AISC 1303, pp. 1627–1632, 2021. https://doi.org/10.1007/978-981-33-4572-0_239
2 China's Financial Legal Environment
From the perspective of deepening market-oriented financial reform and improving the efficiency of the market for financial legal big data services, how to further improve the legal environment and give full play to its role in financial development is an urgent problem. Liberalization of financial legal big data services is an important feature of financial markets in developed countries. However, the influx of many financial institutions also brings in large amounts of profit-seeking hot money, which increases the risk of the financial system. To maintain financial and economic stability and development, many countries have formulated corresponding legal systems for financial market access [2]. On the one hand, such systems improve the transparency of market examination and approval and reduce administrative intervention; on the other hand, they screen financial market participants and reduce the risks of market opening. The ultimate goal of improving and popularizing the big data service environment of financial law is to establish rules of conduct for participants in financial activities and guarantee the healthy development of the financial ecological environment. To a large extent, the legal service environment determines the core competitiveness of a city's, or even a country's, economic and financial development. For China, which is in the process of financial marketization, the legal big data service environment is even more important than other software and hardware environments. Some studies argue that Singapore's urban competitive advantage lies in its well-developed legal service environment, while the legal environment built up by Hong Kong over decades is a core advantage that Beijing, Shanghai, Shenzhen, and other domestic financial cities cannot match in the short term; this gap is the biggest obstacle to the sustainable and stable development of China's financial market.
Therefore, we should invest corresponding legislative resources to improve China's financial legal environment in four respects. First, the law should establish market access, exit, and fair competition mechanisms for financial organizations, allowing the fittest to survive and maintaining the sound development and dynamic balance of the financial market. Second, the law should establish a self-regulation mechanism for the financial ecological environment that prevents and resolves financial risks, protects investors, and enhances the stability and adaptability of the financial ecology by regulating financial supervision, the self-discipline of financial organizations, and financial innovation. Third, legislation tailored to different types of financial organizations, financial products, and financial activities should create development space for various financial
organizations and financial activities, maintain the diversity of the financial ecological environment, and encourage financial innovation. Finally, strengthening the protection and confirmation of property rights, investors' rights and interests, and the legal relations of financial products through judicial, law enforcement, and related legal services, and cracking down on financial violations, can optimize the economic operating environment of the financial ecology. Laws can also, to a large extent, guide and promote the establishment and development of a positive financial culture; this is particularly prominent in building a good credit environment, a good internal culture within financial institutions, and sound patterns of financial activity.
3 Research and Design of a Financial Legal Big Data Service Scheme Based on Cloud Computing
A big data platform in the cloud era must not only support PB-level hardware systems with high cost-performance and high scalability, but also store massive structured, semi-structured, and even unstructured data at the ZB level. At the same time, it must be able to mine the value of these data at high speed to create profit for enterprises and truly realize the idea that big data equals big value. Financial legal data analysis and service is one of the typical application areas. Taking financial securities trading as an example and using current mainstream big data technology, this paper constructs a big data analysis service application to assist securities trading, studies the combination of big data technology and the service model at the level of practice and application, and verifies the preceding research results [3].
3.1 Cloud Computing System Development Environment and Tools
1) Operating system: all functional modules and code of the system are written under the Windows XP and Linux operating systems.
2) Development environment: the design of the system covers two aspects. One is receiving data processing commands and performing data analysis operations on specific data structures; the other is displaying the user interface and validating the interactive data entered on it. The front-end development environment of the system is Qt Creator 3.1.0 and MFC, and the back-end development environment is based on Microsoft Visual Studio 2010. Qt is a cross-platform development framework suitable for C++ GUI applications, providing application developers with all the functions required for a GUI. Qt has three implementation strategies: API mapping, API simulation, and GUI simulation. MFC encapsulates the Windows API in the form of C++ classes, and its ability to automatically generate a framework reduces the workload of developers; its main classes include encapsulations of built-in Windows components and controls and a large number of Windows handle wrapper classes. Visual Studio 2010 integrates application server, database, and C++ development and is an easy-to-use back-end development environment.
3) Development framework: the whole platform follows the MVC framework, consisting of the model, the view, and the controller, which map the input, processing, and output functions of the same logical graphical user interface. The model is responsible for data representation and business rules; the view displays the data processing process in the application; the controller accepts user requests, calls the model, and selects the view to display the analysis results.
4) The application developers of the financial legal big data analysis service access system services through the application API, and the service layer translates and executes the data analysis processing flow. The main services of the system are built on three basic service layers: the management layer, the HDFS access layer, and the persistence layer. The controller calls the service layer interface and responds to user requests. The system lets users submit multiple tasks in a multi-user, multi-task manner, and multiple tasks can be executed in the background.
5) Servers: during development the system ran on a cluster of four servers, two with Xeon E5-2620 CPUs and 64 GB of memory and two with Xeon E5-2650 CPUs and 128 GB of memory, deployed with big data tools such as Hadoop 2.6 and Spark 1.3.
6) Database: the HDFS distributed file system and the Redis in-memory database. HDFS is suitable for deployment on low-cost machines, offers high fault tolerance, is well suited to applications that process large data sets, and improves data access throughput. Redis is a key-value database with in-memory storage that supports insertion and deletion over the network and has a persistent log function.
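The multi-user, multi-task background execution described in item 4) can be sketched with a thread pool: each user queues several analysis tasks, which run in the background while the controller stays responsive. The task names and data below are invented for illustration and do not come from the paper's system.

```python
from concurrent.futures import ThreadPoolExecutor

def analysis_task(user, name, data):
    # Stand-in for a real data-analysis job: here just an average.
    return (user, name, sum(data) / len(data))

# Several users submit several tasks; the pool executes them in the background.
with ThreadPoolExecutor(max_workers=4) as pool:
    futures = [
        pool.submit(analysis_task, "user1", "avg_price", [10, 20, 30]),
        pool.submit(analysis_task, "user1", "avg_volume", [100, 200]),
        pool.submit(analysis_task, "user2", "avg_price", [5, 15]),
    ]
    # Collecting results in submission order keeps per-user bookkeeping simple.
    results = [f.result() for f in futures]

print(results)
# [('user1', 'avg_price', 20.0), ('user1', 'avg_volume', 150.0), ('user2', 'avg_price', 10.0)]
```

In the real system the pool would be replaced by the cluster's scheduler, but the submit-then-collect pattern is the same.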
4 Financial Legal Big Data Service
Definition of big data analysis service: a big data analysis service is a functional entity that discovers potential value through big data analysis. Based on processing and analyzing big data, it presents data analysis results to users so that they can support various decisions. The whole process takes big data as input and outputs services that encapsulate the analysis results. The process of big data analysis usually adopts a store-first, analyze-later approach: big data processing technology and storage tools collect and preprocess a wide range of heterogeneous data sources, which are then stored and retrieved according to certain rules. At the same time, realizing the high value of big data requires appropriate mining algorithms and domain-specific computing methods to complete the analysis. Finally, the end user obtains the analysis results intuitively and clearly through visualization. In short, a big data analysis service takes the results obtained with big data analysis capabilities and value mining technology, and the service provider encapsulates them as services offered to users. Its processing object is data: it analyzes and computes data according to the relevant algorithms and serves the analysis results when users call it. It can be divided into two service modes: the online big data analysis service mode and the offline big data analysis service mode.
Because of the new characteristics of financial legal big data analysis services, namely cube (volume) enhancement, stronger timeliness requirements, and increased data mining difficulty, a four-layer legal big data analysis service model is proposed. It mainly comprises the user requirements layer, the service layer, the task layer, and the device layer, together with the mapping between computing tasks and device resources. The model can generate a preliminary service composition according to the user's needs and feed back the analysis results in time, which is sufficient to cope with the diversification, complexity, and personalization of current user needs and ultimately provide accurate analysis solutions. The big data analysis service model is shown in Fig. 1.
Fig. 1. Big data analysis service model
1) User requirements layer: responsible for collecting the service requirements of various users. Different user requirements form a user requirements library {UR_1, UR_2, ..., UR_k}, which represents the service functions the service provider offers in the whole analysis service.
2) Service layer: the core part of the whole model. It can provide not only a single atomic service but also a composition of analysis services to meet the high-value needs of users. The set of analysis service compositions can be expressed as {S_1, S_2, ..., S_q}, where a service S_q can be composed of multiple atomic services {OS_q1, OS_q2, ..., OS_qt}.
3) Computing task layer: a key step in the implementation of services. A service corresponds to a complex computing task set {T_1, T_2, ..., T_n}, and each task T_n can be divided into a group of single, independent sub-tasks {T_n1, T_n2, ..., T_nl}, which form a directed acyclic graph for processing and execution, covering, for example, the storage of big data, its offline processing, and its online computing.
4) Device layer: the infrastructure resources for developing and deploying big data services, composed of multiple physical facilities interconnected in the network, which can be represented as {P_1, P_2, ..., P_m}. In the distributed computing environment, the device resource layer enables distributed data resource providers to find and match data resources that meet the application's requirements, and supplies the processing and computing functions of the resources, serving as the input side for raw data and the output side for analysis results on the cloud computing platform.
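The task-layer DAG and its mapping onto device-layer resources might be sketched as follows. The concrete sub-task names and the simple round-robin assignment policy are illustrative assumptions made for this sketch, not the paper's scheduling algorithm.

```python
from graphlib import TopologicalSorter

# Sub-task DAG for one service S_q: each task maps to the tasks it depends on,
# mirroring the computing task layer's directed acyclic graph.
subtasks = {
    "store":   set(),                  # storage of big data
    "offline": {"store"},              # offline processing of big data
    "online":  {"store"},              # online computing of big data
    "report":  {"offline", "online"},  # final analysis result for the user
}

devices = ["P1", "P2"]                 # device layer {P_1, ..., P_m}

# Topological order respects the dependencies; tasks are then assigned to
# devices round-robin as a stand-in for a real scheduling policy.
order = list(TopologicalSorter(subtasks).static_order())
assignment = {task: devices[i % len(devices)] for i, task in enumerate(order)}

print(order)       # "store" first, "report" last
print(assignment)  # every sub-task mapped onto some device
```

The mapping from tasks to devices is exactly the task-to-resource relationship the four-layer model describes; a production scheduler would replace round-robin with a policy aware of data locality and load.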
5 Conclusions
In the era of big data, the development of financial legal data analysis services faces many new challenges. This paper studies a service system that helps securities users make decisions and forecasts faster and better, meeting users' needs in terms of both service value and timeliness. To build high-value, highly time-effective service applications in the financial legal big data environment, this paper studies the financial big data analysis service: the information extracted and analyzed from financial legal big data can provide users with effective decision-making support. Aiming at maximum user satisfaction in both value and speed, the paper studies the basic service model, which integrates legal big data resources with big data analysis and processing and builds analysis services directly on the data without moving it out; users only need to submit analysis requests, and the model analyzes the data and returns value results. Because of the large number of users in the big data environment and the complexity of demand analysis, the service layer in the four-layer service model needs more detailed and comprehensive study at finer granularity, so that a complex service requirement can be met through multiple sub-services.
References
1. Han, J.: Research on some key technologies of big data service. Beijing University of Posts and Telecommunications (2013)
2. Zeng, W.: The idea of improving China's financial legal system. Henan Commercial College 2 (2003)
3. Qian, Y.: Market and rule of law. Comparison of Economic and Social Systems 3 (2000)
The Transformation of Traditional Enterprises to the Accounting Industry Based on Cloud Technology Xuelin Liu(&) Shandong Institute of Commerce and Technology, Licheng District, Jinan 312000, Shandong, China [email protected]
Abstract. Today, big data, cloud computing, and other technologies are widely used; data resources are receiving more and more attention from manufacturing enterprises, and the ability to mine, transform, and apply data affects enterprise performance to a certain extent. As important personnel in manufacturing enterprises, accounting workers must learn to adapt to the development of the times, adjust their ways of thinking and working to the needs of modern enterprises, and enhance their professional and innovation abilities, so as to better perform the accounting supervision function and realize the transformation of the industry.
Keywords: Cloud computing · Financial accounting · Management accounting · Accounting transformation
1 Introduction
With the continuous development of science and technology and the advance of economic globalization, data resources have gradually attracted the attention of manufacturing enterprises. The development, collection, and management of data resources have become an important strategic means for their operation and development. For the traditional accounting industry, which is closely tied to business management, the rise of cloud computing greatly expands the information that accountants collect and feed back, and the interaction between accountants and other departments becomes ever more prominent [1]. The traditional accounting industry thus faces unprecedented challenges. Therefore, financial personnel need to change their traditional working concepts and cognition, learn to collect, analyze, and use relevant data from the perspective of enterprise management, and provide timely and effective accounting information on the enterprise's financial situation, business decision-making, risk monitoring, and other aspects.
© The Editor(s) (if applicable) and The Author(s), under exclusive license to Springer Nature Singapore Pte Ltd. 2021 M. Atiquzzaman et al. (Eds.): BDCPS 2020, AISC 1303, pp. 1633–1638, 2021. https://doi.org/10.1007/978-981-33-4572-0_240
2 A Brief Overview of Cloud Computing Technology
2.1 Meaning
Cloud computing is a computing model in which the processing, storage, software, and other services of computers are provided mainly by a virtual resource pool through the Internet. These cloud computing resources can be provided on demand and accessed from any connected device and location. It is a new data processing mode based on distributed computing, parallel computing, and grid computing. By deployment form, it can be divided into four types: public cloud, private cloud, community cloud, and hybrid cloud.
2.2 Development History
The concept of cloud computing was first proposed by Google more than ten years ago. Over the past decade, global cloud computing has developed rapidly and matured in both technology and business models. China's cloud computing started relatively late, but its growth in recent years has been extremely rapid. In the "Internet+" era, the government attaches great importance to cloud computing: as early as 2010 it was listed among the strategic emerging industries for national key cultivation and development, and a series of guidance and planning policies were formulated. At present, cloud computing has been officially included in the 13th Five-Year Plan for national informatization.
3 The Situation of Traditional Accounting in the Background of Cloud Computing
3.1 Based on Data Collection and Accounting Measurement
Traditional financial accounting in manufacturing enterprises lacks the support of modern information technology and measures mainly in monetary form. Generally, only after a business cycle can financial statements be prepared, in monetary terms and according to the accounting elements, to provide a reference for the decisions of management, shareholders, creditors, and other stakeholders. In today's society, the volume of accounting information has grown greatly, and its rich content is no longer limited to monetary measurement. Some semi-structured and structured accounting information that traditional accounting methods cannot handle can be used effectively after processing through cloud computing and other technical analysis. Moreover, the rise of management accounting in the context of cloud computing effectively shortens the time from monetary measurement to financial statement preparation, saves the costs of accounting and financial analysis, and improves enterprise efficiency [2].
3.2 Based on Response Time and Content
The validity and rationality of financial information are important assessment indexes of the accounting function. Financial information should not only feed back individual data but also provide direction for business development and investment decision-making. To some extent, management accounting makes up for traditional financial accounting's weak capabilities in management prediction and guidance, so it will become the more practical accounting type in the era of big data. In addition, traditional financial accounting processes information slowly; for example, preparing financial statements takes a certain period, which makes accounting information lag behind and lose practical value, so its quality cannot be reasonably guaranteed. These problems can be solved under high-tech processing modes. For example, with cloud computing and other modern technologies, accounting information on the enterprise's financial situation, operating results, cash flow, and so on can be better evaluated and predicted, so a more comprehensive information system will become one of the core competitive advantages of the enterprise.
3.3 Based on the Functional Challenges Faced by Traditional Accountants
The basic functions of accounting are to reflect and supervise, while the duties of accounting personnel are to prepare and strictly implement financial plans, abide by relevant laws and standards, produce accounting statements on time, and do a good job in cash management and related settlement work. Under the cloud computing model, however, the field of management accounting is still developing. Accountants should not only handle traditional financial accounting well but also learn to use cloud computing, advanced communication technology, and similar tools to improve the accuracy and efficiency of their accounting and enhance their capacity for comprehensive analysis [3]. Moreover, the transformation from the accounting function to the management level requires accountants not only to understand basic accounting data but also to know how to extract valuable information from various structured and unstructured data in the era of big data, and to learn how to use this accounting information in the preparation and analysis of financial statements; this is the biggest problem facing accounting in manufacturing enterprises.
4 The Significance of Accounting Transformation of Traditional Manufacturing Enterprises in the Context of Cloud Computing
4.1 The Only Way for Manufacturing Enterprises to Develop Informatization
The transformation from traditional financial accounting to management accounting can realize the diversified development of enterprise accounting information because
the collection, processing, and preservation of financial information in the traditional financial accounting industry can no longer meet the information development needs of modern enterprises. Against the background of the big data era, high-tech technologies such as information technology, cloud computing, and big data have greatly improved the speed of information transmission and increased the amount and variety of information, and management accounting perfectly meets modern enterprises' needs for accounting information in this era. It not only retains the basic functions of financial accounting but also adds strong information- and data-oriented functions, providing more reliable, comprehensive, and diversified accounting information for the overall management of enterprises. The function of management accounting is to provide all kinds of useful decision-making information to internal managers for their decisions. This differs from the monetary measurement of traditional financial accounting and represents an improvement on traditional accounting, allowing the contribution of various data to relevant decisions to be better measured.
4.2 Overcoming the Limitations of Traditional Accounting in the Context of Cloud Computing
To improve the speed and quality of data processing, modern enterprises will try to combine Internet technology with financial accounting management, so that large-scale data operations can be completed in a short time and data processed with cloud storage technology can be saved promptly. This approach also handles financial data better: data accuracy improves significantly, and business information and financial information are organically combined. It requires accounting staff to have a solid computer foundation and sophisticated network information processing skills. On the one hand, the working methods of financial accounting will change as accounting information increases; on the other hand, accounting information in the cloud computing context is closely connected with all departments of the enterprise, and the scope of modern accounting will expand unprecedentedly. Since the traditional accounting industry cannot meet these needs, it will shift toward management accounting.
4.3 Meet the Increasing Demand for Accounting Services of Customers
In the context of cloud computing, information sharing has become an indispensable part of enterprises, and accounting undertakes the important task of providing decision-making information. Accounting information is indispensable not only to other departments of the enterprise; stakeholders such as customers will also continually seek detailed, personalized, and diverse accounting information, which changes the enterprise's accounting objectives. With the continuous influx of new big data and cloud computing technologies, the amount of accounting information is growing explosively, and stakeholders such as management, shareholders, creditors, and customers prefer high-quality accounting information, because it is not
The Transformation of Traditional Enterprises to the Accounting Industry
1637
only an important standard for enterprise managers to make decisions but also a comprehensive index to assess the quality of enterprise work [4].
5 Countermeasures of Manufacturing Enterprises and Accountants Under the Background of Cloud Computing

5.1 Improve the Comprehensive Quality of Accounting Staff in Enterprises
The rise of cloud computing and related technologies confronts traditional manufacturing accounting with greater challenges. Financial personnel need to change their previous functional cognition, learn to think from the perspective of enterprise management, and cultivate awareness of the overall situation. This requires them to learn more about management accounting in their daily work; to improve their comprehensive abilities in data integration, budgeting, cost analysis, decision support, and financial management; to fully understand cloud computing technology; to integrate accounting information from the perspective of enterprise management; to give full play to the advantages of management accounting; and to improve their work efficiency. At the same time, enterprises should provide training for accounting personnel as the needs of the enterprise and society evolve, so that staff keep abreast of trends in accounting and of advanced information processing technology, laying a solid foundation for the transformation of employees toward management accounting.

5.2 Promote the Information Construction Related to Enterprise Finance
Promoting informatization related to enterprise finance is a prerequisite for the transformation of traditional financial accounting to management accounting in the context of cloud computing, and it determines whether enterprises can maximize the value of accounting information. Financial informatization not only classifies and integrates accounting information more effectively and improves the efficiency of information use, but also provides technical support for analyzing the enterprise's financial situation, cost budgeting, decision support, and risk monitoring. In addition, to build a more scientific and complete data processing and management information system, enterprises should increase investment in financial innovation, encourage accountants to innovate boldly and research more deeply, and build an interactive platform for enterprise accounting information, so that staff can better analyze and use finance-related semi-structured and structured data.
X. Liu
6 Conclusions

In the data age, cloud computing technology is still improving, so traditional accountants must learn to predict and control in advance according to changes in enterprises' financial and non-financial data, in order to realize the integration of business and finance. At the same time, the function of management accounting will be greatly strengthened: traditional financial accounting personnel may participate in the strategic decision-making of the enterprise and support enterprise budgeting and performance appraisal. The transformation of accounting work also needs the help and support of the other business departments of manufacturing enterprises. If business-department personnel cannot communicate effectively with the financial department because their accounting knowledge lags behind, the work efficiency of the enterprise will not improve. The financial department of a manufacturing enterprise therefore needs to engage the business departments actively, infiltrate management accounting concepts into the work of the whole enterprise, and upgrade the financial and business concepts of the whole enterprise. All in all, with the extensive application of cloud computing technology, the transition from traditional financial accounting to management accounting has become a requirement for the sustainable development of manufacturing enterprises and the only way forward for accounting. Through the rational use of big data, enterprises can greatly improve their accounting-related business level, improve the internal management system, and raise the enterprise's comprehensive, information-based, and data-based management level, thereby reducing costs and improving overall work efficiency.
Therefore, with the popularization of cloud computing technology, the transformation from traditional financial accounting to management accounting has become a wise move for the development of manufacturing enterprises.
References

1. Chen, X.: Research on the transformation of financial accounting to management accounting in the era of big data. Accounting Learning (2018), no. 18
2. Baoying, H.: Analysis of the transformation from financial accounting to management accounting in the era of big data. Jilin Financial Research (2018), no. 05
3. Lu, Z.: Analysis of the transformation from financial accounting to management accounting in the era of big data. Enterprise Reform and Management (2018), no. 05
4. Wang, G., Liu, D., Li, S.: The construction of financial accounting transformation strategy in the era of big data. Economic Research Guide (2017), no. 36
Application of Artificial Intelligence Technology in Computer Network Technology

Tan Xiaofang(&), Fan Yun, and Fu Fancheng

School of Computer Information Engineering, Nanchang Institute of Technology, Nanchang 330044, China
[email protected], [email protected], [email protected]
Abstract. Because of the problem of uneven energy consumption in computer networks, artificial intelligence technology is introduced. Based on an analysis of energy balance in computer networks, artificial intelligence technology is applied to the study of energy balance. Experiments show that this method enables better cooperation between the nodes of the computer network and maximizes the network's life cycle.

Keywords: Artificial intelligence technology · Computer network · Energy consumption · Equilibrium
1 Introduction

In computer network technology there are many network nodes, which are limited in energy. Under continuous operation their energy consumption is severe, and the nodes are difficult to replace in time [1]. All of this dictates that the energy consumption of nodes should be minimized when designing a computer network. In this paper, artificial intelligence technology is introduced together with the energy consumption behavior of network nodes to build a model. The equilibrium point of the computer network is determined by the model, so that nodes with large residual energy bear more of the workload and the life of the network is greatly extended. According to the characteristics of the energy consumption behavior of the computer network, this paper establishes an energy consumption equilibrium model based on artificial intelligence technology. Finally, experiments show that introducing artificial intelligence technology into the computer network greatly simplifies the energy balance problem.
© The Editor(s) (if applicable) and The Author(s), under exclusive license to Springer Nature Singapore Pte Ltd. 2021
M. Atiquzzaman et al. (Eds.): BDCPS 2020, AISC 1303, pp. 1639–1645, 2021. https://doi.org/10.1007/978-981-33-4572-0_241

2 Application of Artificial Intelligence Technology in the Computer Network

In computer network technology, the energy consumption level of a node is not affected by its previous energy consumption but depends only on its current behavior and state [2, 3]. In this paper we need to build a multi-computer network coupling energy balance model that is highly consistent with the real environment while keeping the analysis of the problem as simple as possible. We therefore first state the basic assumptions of the model.

Hypothesis 1. Suppose the area is s1m. There are n sensor nodes in total in the area, all randomly distributed, and the transmission radius of each node is r.

Definition. C represents the artificial intelligence technology model, which can be represented by a five-tuple C = (A, U, R, W, P):

(1) A = {a_1, …, a_n} represents all routing nodes; a_i refers to the i-th routing node, 1 ≤ i ≤ N.
(2) U = {0, 1} represents the state of each node at each stage; cooperation and non-cooperation are represented by 1 and 0 respectively.
(3) R = {R_{a_1}, …, R_{a_n}} represents the revenue set of the nodes, where the revenue of node a_i is R_{a_i}, 1 ≤ i ≤ N.
(4) W = {W_{a_1}, …, W_{a_n}} is the energy set of all nodes; for any node a_i, the main energy parameters are W_{cost,a_i} ∈ W_{a_i} and W_{remain,a_i} ∈ W_{a_i}, where the former is the energy the node has consumed and the latter its remaining energy, 1 ≤ i ≤ N.
(5) P is the set of node forwarding probabilities; P_{a_i a_j} ∈ P is the forwarding probability of node a_i when forwarding information to node a_j, 1 ≤ i ≤ N.

In the initial condition, a node can inform its neighboring nodes of its information by broadcast, and global node information is obtained in this way [3]. Node broadcast consumes a certain amount of energy in this process, but since the energy required for broadcasting is small and is incurred only in the networking stage, the energy remaining after broadcasting can be regarded as the initial energy of the node.

Hypothesis 2. In the model construction, the vertical component of node positions is not considered.
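As a rough illustration of the five-tuple C = (A, U, R, W, P), the model state can be sketched as a plain Python structure. All field names and the random initialization below are illustrative assumptions for this sketch, not something the paper prescribes:

```python
import random
from dataclasses import dataclass, field

@dataclass
class NetworkModel:
    """Sketch of the five-tuple C = (A, U, R, W, P) described above."""
    n: int                                          # number of routing nodes (A is 0..n-1)
    r: float                                        # transmission radius
    states: dict = field(default_factory=dict)      # U: node -> 1 (cooperate) or 0 (not)
    revenue: dict = field(default_factory=dict)     # R: node -> accumulated revenue
    energy: dict = field(default_factory=dict)      # W: node -> {'cost': ..., 'remain': ...}
    forward_p: dict = field(default_factory=dict)   # P: (i, j) -> forwarding probability

    def init_random(self, initial_energy: float = 100.0) -> None:
        # After the networking broadcast, remaining energy counts as initial energy.
        for i in range(self.n):
            self.states[i] = 1                      # all nodes start cooperative
            self.revenue[i] = 0.0
            self.energy[i] = {'cost': 0.0, 'remain': initial_energy}
        for i in range(self.n):
            for j in range(self.n):
                if i != j:
                    self.forward_p[(i, j)] = random.random()

model = NetworkModel(n=5, r=10.0)
model.init_random()
```

A simulation step would then update `energy[i]['cost']` and `energy[i]['remain']` according to each node's state and forwarding probabilities.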
The node coordinates are represented by a two-dimensional vector: a node's coordinates are (x, y), (x_0, y_0) denotes the coordinates of the source node, and (x_E, y_E) denotes the coordinates of the destination node.

Theorem 1. Under Hypotheses 1 and 2, the number of qualifying nodes is m_q = q·((π − 2)/2)·r².

Proof. The area of the region enclosed by the arc can be obtained with the integral formula in combination with the picture (Fig. 1):

S′ = πr²/2 − S = ((π − 2)/2)·r²,  m_q = q·S′ = q·((π − 2)/2)·r²
Fig. 1. Communication range of nodes
The qualifying nodes shall satisfy

(x − x_0)² + (y − y_0)² ≤ (x_E − x_0)² + (y_E − y_0)²

and

(x − x_E)² + (y − y_E)² ≤ (x_E − x_0)² + (y_E − y_0)².

The set of nodes pointing to the destination node is

A_q = {(x, y) : (x − x_0)² + (y − y_0)² ≤ (x_E − x_0)² + (y_E − y_0)² and (x − x_E)² + (y − y_E)² ≤ (x_E − x_0)² + (y_E − y_0)²},

and m_q = |A_q| is the number of participating nodes satisfying the above two conditions. In the initial environment, A_i denotes all nodes in the communication range of node i; in the networking stage, nodes broadcast information to their neighbors. To analyze the network performance of the model more comprehensively, assume that in the initial condition the energies consumed by the nodes to transmit information are [W_cost1, W_cost2, …, W_costn], with W_cost1 ≤ W_cost2 ≤ … ≤ W_costn. Since the initial energy of each node is the same, after [W_cost1, W_cost2, …, W_costn] is consumed the residual energy of each node is the same, so the broadcast energy consumption can be ignored.
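The membership condition for A_q can be checked directly. The function name and the sample coordinates below are illustrative; the condition itself is the pair of inequalities above:

```python
def qualifying_nodes(nodes, src, dst):
    """Return A_q: nodes no farther from the source AND no farther from the
    destination than the source-destination distance (the lens-shaped region)."""
    x0, y0 = src
    xE, yE = dst
    d2 = (xE - x0) ** 2 + (yE - y0) ** 2   # squared source-destination distance
    return [
        (x, y) for (x, y) in nodes
        if (x - x0) ** 2 + (y - y0) ** 2 <= d2
        and (x - xE) ** 2 + (y - yE) ** 2 <= d2
    ]

# A node midway between source and destination qualifies;
# nodes behind the source or beyond the destination do not.
nodes = [(5.0, 0.0), (-5.0, 0.0), (12.0, 0.0)]
print(qualifying_nodes(nodes, src=(0.0, 0.0), dst=(10.0, 0.0)))  # [(5.0, 0.0)]
```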
3 The Application of Artificial Intelligence in Network Revenue and Energy Calculation

The residual energy of the nodes is given by

BW_remain = [W_remain1, W_remain2, …, W_remainN]^T.

For any node i, the expected energy matrix consumed by the first transmission is

E(W_costi) = P · BW_cost = [P_1·W_cost1, …, P_m·W_costm].
The expected matrix of residual energy after each transmission is

E′(W_remaini) = E(W_remaini) − E(W_costi).

Define the energy surplus matrix of the final node:

BW′_remain = [W_remain1, W_remain2, …, W_remainN]^T.

The equilibrium of the computer network exists in a random game: if the numbers of states and behaviors are finite, a Markov equilibrium exists in the random game.

Theorem 2. Let N be the number of sensors in the coupling model. If for ∀P there exists P* (* representing a specific element in the set) that maximizes the network lifetime, then the computer network equilibrium is reached, with P* = f(a*, b*), and BW_remain is a function of a, b. If e (0 ≤ e < 1) denotes a node's mortality rate and a denotes node i's successful transmission rate, then

A′ = 1 − e · ((N − S_11)/(N − S_11)) · ((S_C/4)/(S_C/4)) = 1 − e.

If the mortality rate is e, then the probability of successful transmission from the source node to the destination node over N_time hops can be expressed as

A′_0 = (A′)^{N_time} = (1 − e)^{N_time}.
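The two quantities above, the expected residual energy and the end-to-end success probability, are easy to compute. This sketch assumes independent hops and uses illustrative function names and sample values:

```python
def end_to_end_success(e: float, n_time: int) -> float:
    """Per-hop success rate is A' = 1 - e; assuming independent hops, the
    source-to-destination success over n_time hops is (1 - e) ** n_time."""
    assert 0 <= e < 1
    return (1 - e) ** n_time

def expected_residual(remain, forward_p, cost):
    """E'(W_remain,i) = E(W_remain,i) - E(W_cost,i), with the expected cost
    taken as the forwarding-probability-weighted transmission cost."""
    return [w - p * c for w, p, c in zip(remain, forward_p, cost)]

# 1% per-hop mortality over 10 hops, and two nodes' expected residual energy
print(round(end_to_end_success(0.01, 10), 4))                     # 0.9044
print(expected_residual([100.0, 80.0], [0.5, 0.2], [10.0, 10.0]))  # [95.0, 78.0]
```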
4 Experiment Simulation and Analysis

4.1 Experimental Environment
To verify the effectiveness of the proposed method, we carried out a corresponding experimental analysis, testing the random weighting algorithm and the ant colony algorithm for comparison. The data used in the simulation experiment and analysis are real data of online nodes in the Tor network. During the simulation there are 1584 nodes in the Tor network, including 420 human/exit nodes and 554 human/exit nodes.

4.2 Distortion Performance Simulation Test
There are 400 nodes in a 45 × 45 Tor network region. In this range, the source node n_0 with coordinates (x_0, y_0) is regarded as the origin. The figure describes the distortion comparison results among the proposed algorithm, the random weighting algorithm, and the ant colony algorithm (Fig. 2).
Analysis of the figure shows that, as the number of nodes gradually increases, the distortion of the proposed algorithm, the genetic algorithm, and the random weighting algorithm all increase. The distortion value obtained with the proposed algorithm is lower than that of the genetic algorithm and the random weighting algorithm, which shows that the proposed algorithm has the lowest distortion and best maintains the integrity of the information.
Fig. 2. Comparison results of three algorithms for distortion
4.3 Security Simulation Test
To test the security of the proposed algorithm, the genetic algorithm and the random weighted network algorithm are used for comparison, and the probability cumulative value F(X) is taken as the security evaluation index. The probability cumulative value is the cumulative security value of all previously selected healthy nodes; the higher it is, the higher the security. Figure 3 shows the probability cumulative value comparison results of the three methods. It can be seen that, for different numbers of attack nodes, the probability cumulative value of the proposed algorithm is consistently and significantly higher than that of the genetic algorithm and the random weighted network algorithm, which shows that, compared with the other two algorithms, using the proposed algorithm to select healthy communication nodes better ensures the security of communication.
1644
T. Xiaofang et al.
Fig. 3. Security comparison results of three algorithms
4.4 Simulation Test of Node Residual Energy
Node residual energy reflects the uniformity of communication energy consumption among nodes: the higher the residual energy, the less energy communication has consumed. Figure 4 describes how the node residual energy of the proposed algorithm, the genetic algorithm, and the random weighting algorithm changes over time.
Fig. 4. Change of node residual energy
As the network working time increases, the minimum residual energy of the genetic algorithm and the random weighted network algorithm is always lower than that of the proposed algorithm. On the whole, the residual energy curve of the genetic algorithm is higher than that of the random weighted network algorithm, while the node residual energy of the proposed algorithm is significantly higher than that of both the genetic algorithm and the random weighted network algorithm. This means that, compared with the other two algorithms, the proposed algorithm consumes the energy of each node more evenly.
5 Conclusion

Because the energy consumption of the nodes in a computer network is uneven, this paper introduces artificial intelligence technology to solve this problem and realize research on energy consumption. With the development of artificial intelligence technology and the increasing demand for its application in computer network technology, the application of artificial intelligence in computer networks will become more and more extensive, which plays an important role in promoting the safety management and system rating of computer networks.

Acknowledgments. Science and Technology Research Project of Jiangxi Provincial Department of Education, project name: The Application of Artificial Intelligence in Computer Network Technology (Subject No.: GJJ180993).
References

1. Broda, M., Frank, A.: Learning beyond the screen: assessing the impact of reflective artificial intelligence technology on the development of emergent literacy skills. Plant Physiology (2015)
2. Bernardi, A., Sintek, M.: Combining artificial intelligence, database technology, and hypermedia for intelligent fault recording. Clin. Exp. Metas. 32(4), 383–391 (2015)
3. Chaudhri, V.K., Gunning, D., Lane, H.: Intelligent learning technologies part 2: applications of artificial intelligence to contemporary and emerging educational challenges. AI Magazine 34(4), 10–12 (2013)
Application and Research of Information Technology in Art Teaching

Haiyin Wang(&)

Chonnam National University, Gwangju 571199, South Korea
Abstract. Informatization is the general trend of world economic and social development, and the integration of information technology with other disciplines has become a new highlight of school education and teaching reform in 21st-century China. The design and development of network learning systems have become a hot field of education research, and "student-centered" adaptive learning systems are gradually replacing "teacher (system) centered" network learning systems. Information technology provides abundant information and diverse means for art teaching, and opens deeper and broader possibilities for its teaching content, teaching methods, and learning methods. Art teachers must therefore combine advanced educational ideas and methods with information technology to better achieve the basic purpose of art teaching: in the process of art learning, audio, light, electricity, and other visual presentations are used to stimulate and attract students' attention and motivate them to learn art. Finally, a user model ontology is established, and the user model is improved through data mining technology.

Keywords: Adaptive learning system · Information technology · User model · Art teaching
1 Introduction

The process of information technology education is a process of students' hands-on practice. "Student-centered" adaptive learning systems have attracted attention in education circles at home and abroad and are gradually replacing the previous "teacher-centered" architecture, in which students had to adapt themselves to the system's online learning arrangement; on this basis we should continue to explore and practice individualized teaching within network teaching [1]. In completing their work and studying with a computer, students need to use their brains, their imagination, and hands-on practice. Developing an information technology course is an important way to cultivate students' innovative spirit and practical ability. If the information technology course follows a teach-learn-test mode, it will shackle students' innovative consciousness; in the current new curriculum reform, we should pay special attention to cultivating students' innovation ability [2, 3].
© The Editor(s) (if applicable) and The Author(s), under exclusive license to Springer Nature Singapore Pte Ltd. 2021 M. Atiquzzaman et al. (Eds.): BDCPS 2020, AISC 1303, pp. 1646–1651, 2021. https://doi.org/10.1007/978-981-33-4572-0_242
2 Efficient Use of Information Technology to Expand the Content of Teaching Materials

Teachers should vigorously develop and use information technology in the classroom, because art is not only a plastic art but also a visual art, and much of the content of art textbooks needs to be presented through intuitive picture material. Multimedia can combine text, graphics, images, video, animation, sound, and other information-carrying media; process and control them comprehensively through a computer; organically combine the elements of the various media on screen; and complete a series of online interactive operations. It produces a lively effect and helps improve students' interest in learning and classroom teaching efficiency. The network resources of information technology provide a powerful guarantee and knowledge reserve, in both resources and forms, for art education and teaching; as a supplement to and expansion of the content of teaching materials, they effectively realize the input of information [4].

2.1 Teachers Should Make Full Use of the Advantages of Information Technology and Turn Words into Ideas
In teaching, teachers often encounter knowledge difficulties that students find hard to understand, and these are very hard to overcome with monotonous preaching and traditional teaching methods alone. Computer-aided multimedia teaching helps students understand: in the lesson "Artistic Conception of Landscape Painting", for example, the artistic conception cannot be made clear by the teacher's words alone. It requires sound, light, electricity, shape, color, and many other channels to transmit the teaching information, with a stronger sense of reality and expression; varied scenario settings make students feel as if they were in the real world. Teachers can even bring extracurricular sketching into the classroom, solidify the rich, three-dimensional real world on a single plane, and at the same time reflect reality more accurately. These are advantages other teaching methods cannot offer, and they are the unique advantage of using information technology in the art class compared with other disciplines. Teachers should guide students to exchange views, share information, and observe, compare, analyze, and evaluate one another's work through online teaching resources, which benefits students' understanding of problems and their mastery and application of knowledge, and cultivates a character of mutual assistance. Students should be encouraged to use Internet resources to consult rich art information, broaden their horizons, and display and exchange their artworks, so that the knowledge of the art classroom is extended and expanded on the network and students' aesthetic appreciation ability improves.
In a word, the integration of art teaching and information technology education makes art classroom teaching refreshing and achieves a breakthrough: it turns the difficult into the easy, the complicated into the simple, and the abstract into the intuitive.
3 Adaptive Learning System and Its User Model

An adaptive learning system is a learning system that provides learning support suited to individual characteristics during individual learning. It is essentially a personalized learning support system that presents a user view adapted to the user's personal characteristics; this personalized learning view includes not only personalized resources but also personalized learning processes and strategies. This requires building a user model for each user containing the user's knowledge level, goals, preferences, and other information. In application, the model is used to narrow the information space the user browses and to display the most interesting information and links to the user. With research abroad on network teaching effects and learning styles, network-based personalized learning software systems for learners have appeared one after another, but most of their student models are cognitive models, covering students' knowledge level, emotional factors, learning motivation, and so on. The cognitive model is important but not complete: the student model should also include an interest model, that is, according to the student's cognitive characteristics, simulate the student's interests and preferences, store the student's interests, purposes, tasks, and other information, and provide targeted services accordingly. To identify students and establish a variable user model, it is necessary to monitor every step of students' operations, record their feedback to the system, and infer their cognitive structure and interests from low-level operation behavior, providing a basis for modifying the interest model and stimulating students' enthusiasm for learning art.
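The user model described above (knowledge level, goals, preferences, plus an interest model inferred from logged low-level operations) might be sketched as a simple structure. Every field name here is an assumption made for illustration, not taken from any particular system:

```python
from dataclasses import dataclass, field

@dataclass
class StudentModel:
    """Illustrative user model for an adaptive learning system."""
    knowledge_level: dict = field(default_factory=dict)  # topic -> cognitive level (1..6)
    learning_style: str = "unknown"                      # e.g. set from a pretest scale
    interests: dict = field(default_factory=dict)        # ontology concept -> interest weight
    interaction_log: list = field(default_factory=list)  # low-level operations, for inference

    def record(self, event: str) -> None:
        # Every operation is logged so interests can be inferred from behavior later.
        self.interaction_log.append(event)

s = StudentModel()
s.record("opened: landscape-painting lesson")
s.interests["landscape painting"] = 0.8   # inferred or explicitly set weight
```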
4 Estimation of Students' Cognitive Art Level

The most obvious link in which students show their cognitive art level is painting practice, and this paper mainly analyzes students through this link. According to Bloom's cognitive theory, students' understanding of art knowledge is divided into six levels, namely knowledge, comprehension, application, analysis, synthesis, and evaluation, which deepen in order from 1 to 6. Test questions at a given level also vary in difficulty, and questions of different difficulty contribute differently to the analysis of students' cognitive art level. In addition, the speed at which students answer questions reflects their familiarity with art knowledge: answering a painting question within the normal time shows familiarity with the painting knowledge; answering between the normal time and the longest time shows unfamiliarity; and exceeding the longest time is taken to mean the knowledge is not mastered. Because a student's familiarity with a given piece of painting knowledge is fuzzy, a membership function is used to express familiarity; it is the S-function, the most commonly used membership function in fuzzy sets. Let a be the normal time to answer a question, c the maximum time, and b = (a + c)/2. The membership function F(t) of students' familiarity is defined as:

F(t) = 1 for t ≤ a;  F(t) = 1 − 2((t − a)/(c − a))² for a < t ≤ b;  F(t) = 2((t − c)/(c − a))² for b < t ≤ c;  F(t) = 0 for t > c.   (1)
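The familiarity function of Eq. (1) can be sketched directly, here reconstructed as the reversed S-shaped membership function the text describes (1 within the normal time, falling to 0 at the maximum time); the exact form in the original may differ:

```python
def familiarity(t: float, a: float, c: float) -> float:
    """Reversed S-shaped membership function: 1 within the normal time a,
    falling smoothly through b = (a + c) / 2, reaching 0 at the maximum time c."""
    b = (a + c) / 2
    if t <= a:
        return 1.0
    if t <= b:
        return 1 - 2 * ((t - a) / (c - a)) ** 2
    if t <= c:
        return 2 * ((t - c) / (c - a)) ** 2
    return 0.0

print(familiarity(5, a=10, c=30))   # 1.0  (within normal time)
print(familiarity(20, a=10, c=30))  # 0.5  (at the midpoint b)
print(familiarity(31, a=10, c=30))  # 0.0  (beyond maximum time)
```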
The basic principle of this idea is that when a user works on questions of a given level, the rising rule should be applied to levels below it and the falling rule to levels above it, where C = c^D · F(t), c is a constant with 0 < c < 1, and D ∈ {0.25, 0.5, 1, 2, 4}. From the formula C = c^D · F(t) it can be seen that when the student's time is in the normal range, F(t) = 1 and C = c^D, so the value of parameter C is largest; when the student's time falls outside the normal range, C becomes smaller and smaller as F(t) decreases; finally, when the student's time exceeds the maximum range, F(t) = 0 and C = 0, and the above rules no longer act. This accords with the actual situation: the longer a student takes, the less proficient he is in the content, and the smaller the change in the corresponding cognitive level should be. Through practice, we can infer a student's cognitive level on a given art knowledge point and then decide whether to recommend new art knowledge points for follow-up learning and dynamically recommend test questions to the user.

4.1 Extraction of Students' Interest in Art
In this system, the expression of middle school students' interest model for art knowledge is based on the idea of the vector space model, with ontology introduced into it: an expression method combining ontology and the vector space model is adopted, in which the keywords of the original method are replaced by concept weights in the ontology. We use an ontology to express the art fields students are interested in. These ontologies usually take the form of hierarchical concept trees, each node of which represents a class of student interest; there are various relationships between high-level and low-level classes, and the degree of interest in an ontology concept is given a certain weight. The weight can be a Boolean or a real value, indicating whether and how much a student is interested in a certain art concept. For the representation of text, the commonly used vector space model is still adopted. The biggest advantage of introducing ontology to represent the student model is that it realizes the reuse and sharing of art knowledge, including the sharing of ontology samples among students and the exchange and sharing of art knowledge with external ontologies; this method also alleviates the problems of synonymy and ambiguity of words in the vector space model.

Text feature representation. For a document set D = {d_1, d_2, …, d_n}, any document d_i is represented in the vector space model as V(d_i) = {(t_1, w_1), (t_2, w_2), …, (t_n, w_n)}, where n is the number of feature terms of document d_i, t_i is the i-th feature term of d_i, and w_i is the weight of t_i in d_i. Because the t_i can repeat and are ordered in the text, the representation is still difficult to analyze; to simplify it, the order of the t_i in the document is temporarily ignored and the t_i are required to be mutually distinct. In this case t_1, t_2, …, t_n form a coordinate system of dimension n, w_1, w_2, …, w_n are the corresponding coordinate values, and (w_1, w_2, …, w_n) is regarded as a vector in the n-dimensional space, called the vector representation of the text d_i. The difference between documents is thus transformed into the angle between their feature vectors, as shown in Fig. 1.
Fig. 1. Vector space model of text and similarity between texts
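The angle-based similarity between the term-weight vectors shown in Fig. 1 is usually computed as cosine similarity. The weights below are made-up examples for illustration:

```python
import math

def cosine_similarity(v1, v2):
    """Angle-based similarity between two term-weight vectors in the vector
    space model: near 1 for similar documents, 0 for orthogonal ones."""
    dot = sum(a * b for a, b in zip(v1, v2))
    n1 = math.sqrt(sum(a * a for a in v1))
    n2 = math.sqrt(sum(b * b for b in v2))
    return dot / (n1 * n2) if n1 and n2 else 0.0

# Three documents over the same three feature terms
d1 = [0.8, 0.1, 0.0]   # mostly about the first concept
d2 = [0.7, 0.2, 0.1]   # similar weighting
d3 = [0.0, 0.0, 1.0]   # about an unrelated concept
print(round(cosine_similarity(d1, d2), 3))  # close to 1
print(cosine_similarity(d1, d3))            # 0.0
```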
The student model in the system mainly focuses on students' art learning style, cognitive painting level, and art interest preference. For students' art knowledge, the lead model is mainly used; the Solomon learning style scale is used as a pretest of students' learning style; and for the estimation of the art cognitive level, the idea of fuzzy sets is used, which is highly operable.
5 Conclusions The emergence of the Internet has greatly shortened the time and cycle of knowledge and information dissemination. The network has become an important way for people to obtain information and communicate with the outside world. Our art teaching should make full use of the network, obtain the latest art education resources, develop new teaching contents, explore new teaching methods, and carry out the exchange of students’ works and teachers’ teaching achievements between our school and domestic and international. Therefore, network-based art teaching mode needs to be further explored and studied. The curriculum reform advocates independent learning puts forward an efficient classroom teaching model, and emphasizes the role of students’
Application and Research of Information Technology in Art Teaching
1651
preview before class, while the network creates the conditions for students to adopt new learning methods, so that they can explore and discover art-related problems in new situations, develop the abilities of exploration and discovery, and form the ability to solve problems comprehensively. In research-based learning within a network-based adaptive learning system, teachers should provide students with network navigation and technical services and design challenging research tasks, so that students can achieve their goals through effort. Second, the main purpose of research-based learning based on a network adaptive learning system is to cultivate students' abilities to collect data, analyze data, and draw conclusions, as well as to express ideas and exchange results, providing a way full of fun and confidence for students to understand, create, and express themselves. It can be said that the network brings infinite possibilities to art appreciation teaching.
The System Construction of Computer in the Transformation of Old Urban Areas in China in the Future Chaodeng Yang(&) Zhejiang Agricultural Business College, Shaoxing 312000, Zhejiang, China [email protected]
Abstract. There is not only the Lilong of text and image, but also the Lilong of reality. The contrast between memory and reality forms a huge tension, which fuels a lasting debate over the protection and renewal of Lilong. In the texts, the Lilong neighborhood is harmonious and its marketplace culture is prosperous and inclusive. The written record and the memory of residents influence each other, which to a certain extent constructs the cultural image of Lilong and becomes the carrier of its protection. In reality, however, Lilong consists of low-rise houses in prime locations inhabited by crowded immigrant communities, where old houses coexist with an elderly population. The future of Lilong needs to be resolved between memory and reality, realizing the transformation from residential function to cultural function in a diverse and shared way.

Keywords: Lilong culture · Development · Location · The remaining lanes · Cultural value
1 A Memory of Lilong – Lilong in the Text

In Shanghai, mention of Lilong often evokes home and neighborhood, and points directly to the softest place in people's hearts. Lilong carries the laughter of childhood, the watch of parents, and the warmth of the neighborhood. Lilong used to be the most important dwelling place for Shanghai residents: by the 1990s, more than half of Shanghai residents still lived in Lilong. With the commercialization of housing and the expansion of Shanghai, the proportion of residents living in Lilong has fallen below 10%. For many Shanghai citizens, Lilong life has become the past, but memories, descriptions, and imaginings of Lilong are constantly produced. There are many narratives about Lilong in literature, film, and the media. Beyond the material and spatial environment of Lilong, there are at least three classic images of Lilong: the harmony of the neighborhood, the prosperity of marketplace culture, and the inclusiveness of urban culture.
© The Editor(s) (if applicable) and The Author(s), under exclusive license to Springer Nature Singapore Pte Ltd. 2021 M. Atiquzzaman et al. (Eds.): BDCPS 2020, AISC 1303, pp. 1652–1657, 2021. https://doi.org/10.1007/978-981-33-4572-0_243
The System Construction of Computer in the Transformation
1653
2 Harmonious Neighborhood

Lilong is the creation of Shanghai's exploration of urban residential forms. After the opening of the Shanghai port, many landlords and rich merchants came to Shanghai in the early period, retaining the habits of traditional rural life. Based on the quadrangle courtyard pattern, they absorbed the form of Western townhouses and developed the early Shikumen lanes, which could meet the needs of traditional family life, use the courtyard to stay close to nature, and adapt to the requirements of intensive urban construction. As more and more people entered Shanghai, the design of the Lilong house expanded from a single bay to multiple bays, and the scale of Lilong grew as well [2]. The crowded living conditions in Lilong make it difficult to separate public space from private space, and the communication and sharing in public space increase the mutual assistance of residents. The familiarity and mutual aid of the traditional rural neighborhood are what today's urban living lacks and longs for, and the warmth and mutual assistance of the lane neighborhood are considered classics of the traditional neighborhood relationship. The demolition of Lilong often provokes nostalgia, much of it reminiscence of traditional neighborhood relations. The Central News Film Studio once filmed the nationally famous documentary "Neighborhood" in Jiaxing Alley, a well-known Shikumen lane residence in Shanghai, recording the mutual understanding and harmonious coexistence of its residents. After the Spring Festival of February 12, 2002, the century-old alley was to be demolished; on February 12 of that year, the first day of the first lunar month, Wenhui Daily reported on the Lilong neighborhood in a front-page headline under the title "Forever New Year's Eve Dinner".
3 The Prosperity of Marketplace Culture

Lilong not only has a residential function but also contains the small shops and stalls needed in daily life, such as the pan-fried bun stall, the tiger stove (hot-water shop), the hardware store, the barbershop, the tailor shop, and the sewing and grocery store; a large lane may even contain a lane factory, and citizens run their daily lives in this cramped space. "I miss the special atmosphere unique to the old alley halls of Shanghai. I think it will arouse many Shanghai people's childhood memories. I can't do without the light of the paper and tobacco shop." It is hard to pass a simple judgment of "good" or "bad" here; it can only be described as "marketplace" [4]. Xia Yan's works "Under the Eaves of Shanghai" and "The Lights of Thousands of Homes" reflect the poverty of the people in the 1930s and 1940s [5]. Even in hardship people had their pleasures. Eating "lane meals", that is, putting the table out in the lane to eat, was very common, but when to put the table out and when not to was a delicate matter: "I have seen a family who usually liked to eat in the lane, and their sense of discretion was very good: if the dishes were very poor, they ate in the room; if the dishes were passable, they ate in the alley; and if the dishes were extremely rich, it was not simply a matter of eating in the alley; it was necessary to let everyone in the alley know that they were eating there at that moment. Then they would warmly invite everyone who came near the dinner table, saying "how about a
1654
C. Yang
small dish?" Some old neighbors are not afraid of being called cheeky; they take a sip of the beer poured for them, sit down, eat and talk with the host's family, and sometimes five or six people come in a row, crowding together amid laughter and chatter" [6]. This kind of description echoes Cheng Naishan's account of the "marketplace": because residents of diverse and complex cultural backgrounds shared the same space and interacted constantly, this citizen culture could hardly become lax or degenerate, nor could it form a rarefied elite atmosphere; lane culture thus naturally became a secular yet harmonious civic culture, which constitutes the main aspect of Shanghai civic culture [7].
4 All-Inclusive

The characteristic of Lilong architecture is that it is "neither Chinese nor Western, half Chinese and half Western, and both Chinese and Western". Lane life is a combination of tradition and the West, and which way to adopt was mostly decided from a practical point of view. The residents of Lilong included social elites of all kinds, small traders, immigrants from all over the country, and foreign nationals from all over the world. "Seventy-two tenants" presents the life of the bottom-rung residents of Shanghai's Lilong. The pavilion room is the room with the poorest conditions in a Lilong house; in the 1920s and 1930s it was the residence of Shanghai literati and writers. The pavilion room became an adjunct of Shanghai's literary life and later an epithet for writers, as in "scholars of the pavilion room" and "writers from the pavilion room". Ba Jin, Zhou Libo, Yu Dafu, Liang Shiqiu, and other famous writers once lived and wrote in pavilion rooms [8]. To protect themselves, the rich built their mansions for rent at the end of the alley rather than on the street; such a mansion was not small in scale and was exquisite inside, but its appearance resembled the surrounding lane houses [1]. Many foreign nationals also lived in Shanghai's Lilong, and living close together for a long time, relationships adjusted very naturally. From the May 30 Movement in 1925 to the end of the Anti-Japanese War in 1945, Shanghai saw no particularly significant incident of residents' resistance against Westerners, which can at least partly be explained by the peaceful coexistence, mutual understanding, and familiarity of foreign nationals and Shanghai residents in Lilong [9]. The combination of Chinese and Western architecture, the mixed residential, commercial, and industrial functions of Lilong, and the coming and going of writers, financiers, employees, small traders, prostitutes, and other people all show its great tolerance.
The harmony of the neighborhood, the prosperity of marketplace culture, and the inclusiveness of urban culture in Lilong exist not only in all kinds of literary works but also in the memory of citizens. In the process of mutual influence and reinforcement between memory and literary works, the cultural spirit and value of Lilong are increasingly highlighted. Memory and the Lilong of the text, to a certain extent, become the carrier of Lilong protection.
5 The Reality of Lilong Versus the Lilong of the Text

5.1 A Low House in a Prime Location
At present, most of the lanes preserved in Shanghai are located in prime areas of the city, surrounded by high-rise buildings. Compared with the modern buildings with their clear windows towering into the clouds, the lanes appear increasingly low, and the dense high-rises around them show how precious every inch of this land is. Measured by residential function alone, the lowness of Lilong is undoubtedly inefficient; if villas with a similarly low plot ratio were built in the city center, their price would certainly be sky-high. For historical reasons, most houses in Lilong carry only a right of use, and the residents pay very low rent. Although the houses are rented, many residents have lived there since birth, and the government has long acquiesced in their right to live there. Considering factors such as demolition compensation, no one will voluntarily give up the right to rent. The low lanes in prime areas have long carried the residential function; if the government wants to change this, it must come up with a high compensation scheme, and when evaluation shows the compensation cannot be afforded, the low lanes have to maintain the status quo and rely on scattered market forces for natural renewal. More capital has been invested in the surrounding prime areas, which become cleaner, more comfortable, and more modern, while the lanes accumulate the traces of time and lack functions and capital injection. The contrast between the two is increasingly obvious, sometimes to dramatic and shocking effect.
6 A Crowded Immigrant Community

When I was investigating many lanes in Shanghai, residents would from time to time gather around me and ask eagerly: "Are you here for the demolition? Can your investigation help us get demolished sooner?" "Can you help us move away as soon as possible?" "Look how many of us are crowded into a small loft of a few square meters. It has been almost 70 years since liberation, and our living conditions are still those of pre-liberation Shanghai. Have you ever seen such poor living conditions anywhere else?" Looking at the actual living conditions of the residents in Lilong, I deeply felt their eagerness to move away: the houses are old; several people live in a few square meters of narrow space with poor ventilation and lighting; there are no bathroom facilities, and toilets, where installed at all, are simple and makeshift; and then there are the narrow, steep stairs, dark corridors, and public kitchens piled with sundries. Because the residential function of Lilong is disconnected from modern life, most residents with the ability and means have moved out and rented out the vacated rooms. Most of the tenants are migrant workers from other parts of the country: some are families, some are couples, and some are groups sharing a rental. These immigrants have spacious houses in their hometowns and come to Shanghai to work; they pay more attention to income and have low requirements for living conditions. The reason immigrants choose Lilong is its location, convenient transportation, and cheap rent.
In some lanes, immigrants account for more than half of the residents. To save rent, tenants make do with the smallest possible area, and to rent out more, landlords divide the rooms again and again; it is common to find several households and more than a dozen residents behind a single doorway. Needless to say, Lilong is now the part of Shanghai with the smallest per capita living area, the most crowded housing, and a large immigrant population.
7 The Future Culture of Lilong: Diversity and Sharing

The culture of a city needs to be inherited, and the functions of a city also need to be renewed. The lanes of the past had many levels: some were of high construction quality and cultural and historical value, others of ordinary construction quality and low historical value. Different types of lanes call for different approaches to demolition, renewal, and protection. For lanes of low historical and cultural value and unsatisfactory living conditions, the pace of demolition should be sped up and efforts increased. The lane in memory is beautiful, but the real predicament of some lanes is also obvious; it is unfair, in the name of cultural protection, to make residents live in an extremely poor environment. Although the demolition of any historic building or block always brings some people nostalgia and reluctance, for the residents of Lilong who have long looked forward to it, demolition is something they have hoped for over many years. "Fallen petals are not heartless things; turned into spring mud, they nurture the flowers." The demolition and reconstruction of areas unsuitable for urban development and residents' living wins space for the new development of the city and creates the conditions for Lilong residents to improve their living conditions. Of course, the demolition of these lanes raises problems such as the balance of interests, the manner of implementation, archival records, and public opinion. The government should therefore take greater responsibility, remove the various obstacles, and steadily promote the demolition of the lanes that should be demolished. The material space of Lilong can be demolished, but the archives, memory, and culture of Lilong can still be inherited and protected.
For the demolished lanes, the history and culture of Lilong should be recorded and passed on through oral history, video, and archives. As for the preserved lanes, protection can be achieved by transforming them from a residential function to a cultural function in diverse and shared ways. Why transform the function? Lilong was designed for living, and the memory of Lilong is bound up with living, home, and neighborhood. However, if the remaining Lilong mainly serves residential demand, it will not be able to carry Lilong's mission in the new stage of the city's development. The value of the preserved lanes lies in their location, architectural features, and cultural value, all of which are scarce resources. If the housing quality is improved through renovation, new functions suited to the city's current development can be injected, such as cultural and creative industries, historical and cultural exhibition, featured catering, and performance. These functions are needed in the new stage of the city's development, and continuing them in the old city can achieve a win-win effect. Once the material space of Lilong contains new functions, it can produce new urban memory, and the image of Lilong becomes richer and can be passed on. Cultural and creative industries rooted in Lilong will have distinctive characteristics: regional and non-replicable. The success of Xintiandi and Tianzifang lies precisely in the transformation from residential function to cultural function. Some comment that in Xintiandi the shape of the lanes remains but their function is completely changed, and that because of its particularity and huge investment it cannot serve as a reference for the protection of other lanes; Tianzifang has retained a residential function, but the quality of the preserved housing is not high and is greatly affected by commerce. Judged by the retention of Lilong's residential function, these two cases will undoubtedly be criticized, but from the perspective of exploring changes in Lilong's function, both are successful cases. In today's Shanghai, the residential function of Lilong is no longer so important; what matters more is how to tap the cultural value of Lilong. Historically, Lilong was characterized by a prosperous and inclusive marketplace culture; today, the renewal of Lilong should proceed with a diverse, open, and shared mind. The mode of lane renewal should be diversified and shared. The transformation of the Lilong function can be compared to cultural re-creation: culture has regional characteristics, there are many forms of Lilong in Shanghai, and its renewal and protection modes should likewise be diverse.
8 Conclusions

In this paper, a simulation model of a vector control system based on MATLAB/Simulink is established, which highlights the advantages of MATLAB software: it is intuitive, convenient, and simple, and requires no programming. Through the simulation of the traction and braking conditions of the CRH3 high-speed EMU, the simulation results, which are close to the real situation, verify the feasibility and effectiveness of vector control in the field of AC asynchronous motor control and show that vector control has good dynamic and static characteristics in this field.
References
1. Luo, X., Wu, J.: Shanghai Lane. Shanghai People's Art Publishing House, Shanghai (1997)
2. Lu, H.: Beyond Neon Lights: Daily Life at the Beginning of the 20th Century, pp. 135–145. Shanghai Ancient Books Publishing House, Shanghai (2004)
3. Zhang, X.: Lane Hall Nostalgia, pp. 185–186. Baihua Literature and Art Press, Tianjin (2002)
4. Cheng, N.: Shanghai Fashion, p. 201. Shanghai Dictionary Press, Shanghai (2005)
5. Luo, S.: Modern Shanghai: Urban Society and Life, pp. 60–70. Zhonghua Book Company, Beijing (2006)
6. Guan, J.: An Unforgettable Plot of a Lane in Laochenguang, pp. 126–127. Shanghai Dictionary Press, Shanghai (2005)
7. Luo, X., Wu, J.: Shanghai Lane, pp. 142–143. Shanghai People's Art Publishing House, Shanghai (1997)
8. Li, O.: Shanghai Modern, pp. 39–43. Life Reading Xinzhi Sanlian Bookstore Shanghai Branch, Shanghai (2008)
9. Lu, H.: Beyond Neon Lights: Daily Life at the Beginning of the 20th Century, pp. 284–285. Shanghai Ancient Books Publishing House, Shanghai (2004)
Application of Cloud Computing Virtual Technology in Badminton Teaching in Distance Education Feng Xin(&) Northwestern Polytechnical University MingDe College, Xi’an 710124, Shaanxi, China [email protected]
Abstract. With the rapid development of Internet technology, distance education has become an important part of education in China, but traditional distance education technology remains imperfect and has become a bottleneck restricting the development of network education. In view of the current situation and existing problems of distance education, this study combines cloud computing technology with a distance education badminton teaching platform and designs such a platform based on cloud computing technology. Simulation tests and comparison with a traditional distance education platform show that the platform has great advantages in user information security, resource storage and sharing, network service quality, and interaction, and has great research potential.

Keywords: Cloud computing · Distance education · Virtualization technology · Resource sharing · Badminton teaching
1 Introduction

With the continuous improvement of the competitive level of badminton in China, the sport is booming in national fitness, and more and more badminton clubs are being established in cities. Many universities and vocational colleges in China have also taken this opportunity to carry out badminton teaching in school physical education. Badminton teaching has become an important part of school physical education and is deeply loved by students. To carry out badminton teaching better, this paper studies the application of multimedia teaching in badminton, hoping to provide a reference for promoting the reform of badminton teaching. Compared with traditional teaching, multimedia teaching has strong advantages and fits the characteristics and objectives of school badminton teaching. It can not only improve students' interest in learning and stimulate their subjective initiative, but also help students establish a correct image of each action, quickly grasp the essentials of technical actions, and optimize the teaching effect; it also promotes the change of teachers' teaching ideas and the modernization of teaching methods, catering to the trend of modern teaching development.
© The Editor(s) (if applicable) and The Author(s), under exclusive license to Springer Nature Singapore Pte Ltd. 2021 M. Atiquzzaman et al. (Eds.): BDCPS 2020, AISC 1303, pp. 1658–1665, 2021. https://doi.org/10.1007/978-981-33-4572-0_244
Application of Cloud Computing Virtual Technology in Badminton Teaching
1659
2 Cloud Computing Technology

2.1 Introduction to Cloud Computing Technology
Cloud computing can be defined as a computing technology that provides rapid access to shared resources through the Internet [3]. Using cloud computing, people can quickly apply for and release shared resources according to business volume and pay only for the shared resources they use, improving the service quality of resources and reducing costs.

2.2 Characteristics of Cloud Computing Technology
According to this definition, the characteristics of cloud computing technology can be summarized as follows:

(1) Rented resources: cloud computing provides users with rental services for network, storage, computing, and other basic resources, and users do not need to maintain these resources themselves.
(2) Resource sharing: resources are managed in a shared manner and served to different users through virtualization technology; users can freely manage and allocate network resources.
(3) On-demand services: resource storage, computing, and application services are allocated automatically according to users' needs, without additional management intervention.
(4) Metered billing: the system monitors each user's resource and service usage and then charges quantitatively.
(5) Elastic service: the service provided changes with the user's needs, saving the user's costs.
(6) Generalized interfaces: users can obtain cloud computing services easily from computers, mobile phones, and other terminal devices.

2.3 Advantages of Cloud Computing Technology
Cloud computing technology is based on the Internet. It combines existing hardware and software facilities organically, constructs a new network service platform, calls and shares all resources, and provides users with suitable intelligent services. Compared with other network services, its advantages mainly lie in the following:

(1) Flexible access to resources: cloud computing takes parallel computing as its core technology, calculates the task volume on demand, mobilizes computing resources, and then provides complete data processing services.
(2) Unified management: compared with other network services, cloud computing management is unified; all resources are mobilized through the cloud platform under a unified method of managing computing volume.
1660
F. Xin
(3) Higher utilization of network facilities: through virtualization technology, cloud computing greatly improves the utilization of network facilities.
(4) Lower cost of network use: cloud computing facilities are simple to configure, and the computing platform is easy to build and maintain; charging follows the network services provided, so users pay on demand and do not need to buy expensive equipment.
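The pay-on-demand charging mentioned in point (4) can be sketched with a tiny metering helper: the bill is the sum, over all metered resources, of monitored usage times a unit rate. The resource names and rates below are invented for illustration and do not come from any real provider:

```python
def metered_bill(usage, rates):
    """Quantitative charging: sum of monitored usage * unit rate per resource."""
    return sum(usage[resource] * rates[resource] for resource in usage)

# Hypothetical unit rates: storage per GB-month, compute per CPU-hour,
# bandwidth per GB transferred.
rates = {"storage_gb": 0.02, "cpu_hours": 0.05, "bandwidth_gb": 0.01}

# Hypothetical monitored usage for one user in one billing period.
usage = {"storage_gb": 500, "cpu_hours": 120, "bandwidth_gb": 300}

print(metered_bill(usage, rates))  # 500*0.02 + 120*0.05 + 300*0.01, i.e. about 19.0
```

Because the user is billed only for metered usage, idle capacity costs nothing, which is exactly the cost advantage claimed over buying dedicated equipment.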
3 Badminton Teaching in Distance Education

3.1 Current Situation of Distance Education
In recent years, with the rapid development of Internet technology, distance education has gradually come into view. It combines computer technology, communication technology, Internet technology, and multimedia technology, using text, images, video, and other media as the main teaching methods. Compared with traditional classroom teaching, distance education has the following characteristics:

(1) Teaching is not limited by time and space. With the support of Internet technology, teaching in distance education is no longer limited by time and space. Students can learn through the distance education platform anytime and anywhere, breaking through the single mode of traditional teaching, so that more people can experience the charm of distance education.
(2) More education resources are shared. The traditional education model limits the teaching environment to the classroom, and excellent teaching resources remain confined to the campus. After a distance education platform is established, more education resources can be uploaded to it, so that others can find the teaching resources they need by logging in. Distance education can give full play to the advantages of the Internet and realize nationwide resource sharing.
(3) Teaching exchange is convenient. Distance education increases teaching exchange between teachers, which is conducive to teaching reform; it can also stimulate students' enthusiasm for learning as they search for interesting knowledge on the platform, which fully embodies the purpose of teaching.
(4) Teaching resources are centralized. The distance education platform focuses on multimedia files and makes full use of communication and storage technology to concentrate teaching resources efficiently on the platform; students can log in to read and watch them.

3.2 Problems of the Distance Education Platform
The development of distance education has become a research hotspot, but the current distance education platforms still have some problems:
(1) The standards of distance education are not uniform: no institution specifies standards for distance education, so the various platforms cannot communicate effectively and are not compatible with one another.
(2) Poor resource sharing: due to regional differences, cultural differences, and technical constraints, it is difficult to share all teaching resources across distance education platforms.
(3) Poor interaction: distance education should not only provide learning and shared teaching resources but also pay attention to communication among students and between teachers and students; the existing platforms do not pay enough attention to this.
(4) Poor pertinence: at this stage, the distance education platform cannot arrange courses for each student's individual foundation and cannot teach in a targeted way as teachers do.
(5) Low level of development: the development of distance education software lacks long-term planning, with repeated development and few high-quality products.

4 Distance Education Platform Based on Cloud Computing Technology

Distance education technology has broken the limitations of region, time, and space; its advantages are the high utilization of shared resources, diversified teaching methods, students' autonomous learning, and intelligent teaching management. Most distance education platforms adopt the B/S mode, which has the advantages of good technical support, convenient access, and easy management. The distance education platform based on cloud computing technology is mainly composed of the logical structure design, overall structure design, core module design, scheduling mechanism design, and data security design.
4 Logical Mechanism Design of the Platform

The logical design of the distance education platform based on cloud computing technology is as follows: all distance education learning centers are combined into a "cloud", and the platform intelligently selects the best path to transmit data resources; when one of the servers fails, the platform can seek services from other servers. Efficient resource sharing is realized among the modules, and resource access is optimized through an algorithm; users only need to register an account on the platform to enjoy all resource sharing services. This design utilizes all resources in the "cloud", provides high-quality and efficient shared services, and offers different types of services for different users, improving the applicability of the platform. The logical mechanism design is shown in Fig. 1.
Fig. 1. Logical structure of distance education platform based on Cloud Computing Technology
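The failover behaviour described above, where the platform turns to another server in the "cloud" when one fails, can be sketched as follows; the learning-centre names and the health-check callback are hypothetical, and path preference is modelled simply as list order:

```python
def pick_server(servers, is_alive):
    """Return the first healthy server, skipping any that have failed.

    `servers` is ordered by preference (e.g. best transmission path
    first); `is_alive` is a health-check callback.
    """
    for server in servers:
        if is_alive(server):
            return server
    raise RuntimeError("no learning-centre server available")

# Hypothetical learning-centre servers, preferred transmission path first.
servers = ["centre-a", "centre-b", "centre-c"]
down = {"centre-a"}  # simulate a failure of the preferred server

print(pick_server(servers, lambda s: s not in down))  # falls back to centre-b
```

A real platform would refresh the preference order from measured network quality, but the fallback logic stays the same: the user's request is served as long as any centre in the cloud is reachable.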
4.1 Overall Structure Design of the Platform
The overall structure of the distance education platform based on cloud computing technology is mainly composed of the infrastructure layer, the application layer, and the service layer. The infrastructure layer concentrates the resources of the distance education platform and uses software, hardware, and virtualization technology to ensure normal operation; it provides storage and computing support for the other layers and is the cornerstone of the whole platform. The application layer is the core of the platform; it mainly includes the access and control module, the management module, the data search and extraction module, and the electronic signature module, and it provides a functional interface for users. The service layer includes distance education services such as database services and web system services.

Particle swarm optimization (PSO) originated from the study of the foraging behavior of bird flocks: the tracking behavior observed during flight can change the direction of the flock, maintain a certain integrity of the flock, and keep a certain optimal distance between individuals. The basic algorithm is:

v_{ij}^{(k+1)} = v_{ij}^{(k)} + c_1 r_1 (pbest_{ij} - x_{ij}^{(k)}) + c_2 r_2 (gbest_{ij} - x_{ij}^{(k)})    (1)

x_{ij}^{(k+1)} = x_{ij}^{(k)} + v_{ij}^{(k+1)}    (2)
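Assuming a standard global-best PSO (with an inertia weight w added for numerical stability; the equations above correspond to w = 1, and the parameter values and sphere objective are illustrative, not from the paper), the update rules can be sketched as:

```python
import random

random.seed(0)  # reproducible run

def pso(f, dim, n_particles=20, iters=100, w=0.7, c1=1.5, c2=1.5):
    # Random initial positions, zero initial velocities
    pos = [[random.uniform(-5, 5) for _ in range(dim)] for _ in range(n_particles)]
    vel = [[0.0] * dim for _ in range(n_particles)]
    pbest = [p[:] for p in pos]                 # individual best positions
    gbest = min(pbest, key=f)[:]                # global best position
    for _ in range(iters):
        for i in range(n_particles):
            for j in range(dim):
                r1, r2 = random.random(), random.random()
                # Eq. (1): velocity update toward pbest and gbest
                vel[i][j] = (w * vel[i][j]
                             + c1 * r1 * (pbest[i][j] - pos[i][j])
                             + c2 * r2 * (gbest[j] - pos[i][j]))
                # Eq. (2): position update
                pos[i][j] += vel[i][j]
            if f(pos[i]) < f(pbest[i]):
                pbest[i] = pos[i][:]
                if f(pbest[i]) < f(gbest):
                    gbest = pbest[i][:]
    return gbest

# Minimize the sphere function as a toy objective; the result lands near the origin
best = pso(lambda x: sum(v * v for v in x), dim=2)
print(best)
```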
In the above expressions, x_{ij} represents the position of the current particle, v_{ij} represents its instantaneous velocity, pbest and gbest represent the individual and global optimal solutions, c_1 and c_2 are learning factors, and r_1 and r_2 are random numbers in [0, 1]. To make the data easier to identify, the particle position x represents the particle itself, and the current particle velocity v represents the "flying" direction and distance.

4.2 Core Module Design of the Platform
The core module design of the distance education platform based on cloud computing technology is mainly composed of the management module, the access and control module, the data search and extraction module, and the electronic signature module. Each module matches a certain tenant ID, through which the tenant requests services from these
Application of Cloud Computing Virtual Technology in Badminton Teaching
modules. The management module configures system parameters and electronic document parameters for each tenant according to its ID. The access and control module divides users into three categories, manager, tenant, and user, and performs resource management through authorization according to the tenant ID.

4.3 Platform Scheduling Mechanism Design
According to the resource types of the distance education platform, the platform scheduling mechanism can be divided into three layers: infrastructure as a service (IaaS), platform as a service (PaaS), and software as a service (SaaS). IaaS provides the most basic infrastructure, such as processors, memory, and storage; PaaS, built on top of IaaS, provides a service-oriented storage environment for users' needs; SaaS sits on top of PaaS. The structural framework is shown in Fig. 2. Through web services, users can use the cloud computing services provided by the infrastructure layer and access the applications provided by the software layer.
Fig. 2. Cloud computing service framework
The distance education platform based on cloud computing technology also follows the B/S structure and uses SaaS services provided through the web. The SaaS service of the distance education platform can be divided into the interface layer, the balance layer, the application layer, and the database layer. Users log in through the browser to access the interface layer; the balance layer provides system resources suitable for the users; the application layer provides configuration, usage, and security services; and the database layer ensures the safe and efficient use of system data. Based on this SaaS service process, the scheduling mechanism of the distance education platform based on cloud computing technology is shown in Fig. 3. The rule engine determines the working mode of the workflow engine according to the business rules, and the workflow engine loads and executes new processes.
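The rule-engine/workflow-engine dispatch described above might be sketched as follows; all names and rules here are illustrative assumptions, not from the paper:

```python
# Business rules map an incoming event to the process the workflow
# engine should load (Fig. 3 dispatch, in miniature).
RULES = {
    "new_user": "registration_flow",
    "resource_request": "resource_sharing_flow",
}

class WorkflowEngine:
    def load_and_execute(self, process):
        # A real engine would load a process definition and run it;
        # here we just report what would be executed.
        return f"executing {process}"

def rule_engine(event, engine):
    # The rule engine picks the working mode; the workflow engine runs it.
    process = RULES.get(event, "default_flow")
    return engine.load_and_execute(process)

print(rule_engine("new_user", WorkflowEngine()))
```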
Fig. 3. Dispatching mechanism of distance education platform
4.4 The Data Security Design of the Platform
The database maintenance and data management of the distance education platform based on cloud computing technology are handled uniformly by the cloud computing operator. Users can only access the data and cannot modify or manage it, which ensures the security of the platform data. To protect users' information, it is necessary to encrypt it; in this study, the sensitive information of the user is protected by changing the storage mode and the SaaS application. As shown in Fig. 4, to transfer data from the original server A to the new server B, the data is first downloaded from server A to user C's own storage space, and then the user copies the files from his own storage space to the new server B. As a result, server A loses the right to obtain user C's sensitive information, and server B likewise does not gain access to it. Loading different DLL files here is the key to ensuring users' information security.
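The user-mediated migration in Fig. 4 can be sketched as follows. This is an illustrative sketch: the class and function names are invented, and the encryption and DLL-loading details (which keep the servers from reading the content) are omitted:

```python
# Sketch of the improved data migration: data never moves directly
# between servers, but is routed through the user's own storage space.

class Server:
    def __init__(self, name):
        self.name = name
        self.store = {}  # user_id -> data blobs

    def download(self, user_id):
        # The user pulls their data; the server then drops its own copy,
        # losing the right to obtain it afterwards.
        return self.store.pop(user_id, [])

    def upload(self, user_id, data):
        self.store[user_id] = data

def migrate_via_user(src, dst, user_id):
    """Step 1: user downloads from the source; step 2: user copies to the destination."""
    user_space = src.download(user_id)
    dst.upload(user_id, user_space)
    return user_space

a, b = Server("A"), Server("B")
a.upload("user_c", ["blob1", "blob2"])
migrate_via_user(a, b, "user_c")
print(a.store.get("user_c"), b.store["user_c"])
```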
Fig. 4. Improved data migration diagram
The function modules of the distance education platform based on cloud computing technology are shown in Fig. 5. The system management module is the control module of the whole platform: it monitors the operation status of each module, manages user registration information, access status, and resource storage and sharing, and uses the equalizer to achieve efficient use of resources.
Fig. 5. Function module of a distance education platform system
5 Conclusion

Distance teaching can improve teachers' teaching ability, help teachers master teaching skills, and choose teaching content reasonably; it promotes the development of multimedia teaching in badminton teaching, gives full play to the advantages of distance teaching, improves students' sports culture knowledge, spreads badminton culture, and enables students to learn badminton in an all-round way. Combining distance teaching with traditional physical education teaching methods and using the two methods reasonably improves badminton teaching methods and promotes the development of modern means of physical education teaching.
The Realization of the Apriori Algorithm in University Management

Zhang Ruyong(&) and Song Limei

Shandong Institute of Commerce and Technology, Jinan 250103, Shandong, China
[email protected], [email protected]
Abstract. In this paper, the Apriori association rule mining algorithm is studied in depth, and the algorithm is realized in MATLAB. To avoid blind search in the mining process and improve the efficiency of frequent itemset search, an improved Apriori algorithm is studied, together with its application in teaching management. The proportion of data mining applied and researched in the teaching management systems of colleges and universities is still low, especially in the management of students' scores: because there are many students, the amount of score data that needs to be processed is large.

Keywords: Apriori algorithm · Data mining · Management
1 Introduction

The purpose of association rules is to find relationships between items in a data set, which is also known as market basket analysis; the most famous example is the story of "diapers and beer". Association rules have a wide range of applications. In commercial sales, they can be used for cross-selling to obtain more revenue; in the insurance business, an unusual combination of claims may indicate fraud that needs further investigation; in medical treatment, possible treatment combinations can be found; in banking, customers can be analyzed and services of interest recommended. The Apriori algorithm is the most classical algorithm for association rules [1]. There are many information systems in university management. These information systems provide convenience for normal business flows, but what role does the data generated by these business systems play in management? Can we find corresponding connections that improve daily management work and support decisions? This kind of thinking is still relatively scarce in university management. The purpose of this system is to analyze the correlations in the data produced by the teaching system, draw conclusions, and submit them to the management as a basis for decision support [2, 3].
© The Editor(s) (if applicable) and The Author(s), under exclusive license to Springer Nature Singapore Pte Ltd. 2021 M. Atiquzzaman et al. (Eds.): BDCPS 2020, AISC 1303, pp. 1666–1671, 2021. https://doi.org/10.1007/978-981-33-4572-0_245
2 Background

The overall design idea of this system is to apply the improved Apriori algorithm to information mining in university management. A university management system usually contains a large amount of data. First, the data is collected, then organized and filtered, and stored in the database. Then the improved Apriori algorithm is used to extract associations from these data, forming rough results; the results are analyzed and adjusted, and the final results form decision support presented on the computer. Therefore, the system is roughly divided into a data acquisition module, a data sorting and filtering module, a data processing module, and a result correction and presentation module. The application structure design of the improved Apriori algorithm in university management is shown in Fig. 1.
Fig. 1. The application structure design of the improved Apriori algorithm in University Management
The system workflow is described as follows: 1) Data acquisition and preprocessing. This module analyzes and collects the large amount of data in university management, extracts the data that is helpful to the system, and adjusts its format so that it can be used in the system. 2) Data filtering and cleaning. The main purpose of this module is to make a preliminary analysis of the extracted data; after the analysis, the useful part of the data is kept and the unused part is deleted. Data cleaning mainly completes two tasks, cleaning and feature subset selection: it removes erroneous and redundant data and centrally detects the useful data sets. Data preprocessing can improve the quality of the detection data. After selecting feature attributes from the preprocessed records, mining and detection are carried out using association analysis, sequence analysis, classification, and clustering algorithms. 3) Data analysis with the improved Apriori algorithm. The improved Apriori algorithm is used to mine associations in the filtered and cleaned data and obtain results for our use. 4) Data correction. Due to deviations between the collected data and the algorithm analysis, the results may contain small deviations. These deviations are corrected and results are obtained a second time; if the results are still not satisfactory, then
Z. Ruyong and S. Limei
the collected data are corrected, and the desired results are finally obtained. 5) Formal results and presentation. After the data is analyzed, the final results are stored in the warehouse and presented to users and management, providing a strong basis for decision support.
3 Function Module Design

The main function modules of the system are described below. 1) Data collection and preprocessing module: this module consists of data collection and data preprocessing. The data mainly come from the collections generated by each teaching business system in daily university management and are extracted from the databases. It is necessary to analyze the data structures of the business systems and transform them into the data structures of this system. The database system mainly uses SQL Server under Windows. Because the functions of university management systems are complex and their vendors vary greatly, the databases of different business systems may be Oracle, SQL Server, MySQL, or even desktop or other types of databases, so it is not easy to achieve convenient and flexible data collection and preprocessing. Here, ODBC heterogeneous database interworking is used to map the data structures from the different databases and then import or extract them into the current database; the workload to realize this module is relatively large. 2) Data filtering and cleaning module: this module takes the extracted data as the data source and analyzes them. Because different features require different data, the main purpose of the module is to filter out data that are not suitable for the feature, find the data we need, and remove redundant data and data with weak features; finally, the results are stored in the intermediate database, and the user data are classified to achieve the effect of data cleaning. 3) Data analysis module: data analysis is the main function module of the system.
Its task is to take the data in the intermediate database as the data source, and then use the improved Apriori algorithm to analyze these data once or several times, to obtain the hidden relevance in the data, to provide services for our decision-making.
4 The Realization of the Data Analysis Module

The system analysis module is the key content of the system; its main function is to realize the relevance analysis of the data in university management. The main task is to use the improved Apriori algorithm to extract and analyze the data and form the decision basis [4].

4.1 Implementation Process
The data analysis process is shown in Fig. 2.
The pseudo-code implementation process of data analysis using the improved Apriori algorithm is described as follows:

1) initialize the intermediate database connection and check the availability of the data;
2) analyze the database;
3) get the minimum support threshold min_sup;
4) perform the first scan to obtain the frequent 1-itemsets and construct a 0–1 matrix;
5) generate the candidate 2-itemsets;
6) join and prune to get the next candidate set;
7) repeat steps 5 and 6 until the candidate set is empty; the algorithm is complete.
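The steps above can be sketched in Python (illustrative only; the paper's implementation is in MATLAB, and the transactions here are toy data):

```python
from itertools import combinations

def apriori(transactions, min_sup):
    # First scan: count 1-itemsets and keep the frequent ones
    counts = {}
    for t in transactions:
        for item in t:
            key = frozenset([item])
            counts[key] = counts.get(key, 0) + 1
    freq = {s for s, c in counts.items() if c >= min_sup}
    result = set(freq)
    k = 2
    while freq:
        # Join frequent (k-1)-itemsets, then prune candidates whose
        # (k-1)-subsets are not all frequent
        candidates = {a | b for a in freq for b in freq if len(a | b) == k}
        candidates = {c for c in candidates
                      if all(frozenset(s) in freq for s in combinations(c, k - 1))}
        # Count support of the surviving candidates
        freq = {c for c in candidates
                if sum(c <= t for t in transactions) >= min_sup}
        result |= freq
        k += 1
    return result

# Toy transactions: each set lists the courses a student did well in
tx = [{"C", "math"}, {"C", "math", "ds"}, {"math", "ds"}, {"C", "ds"}]
print(sorted(tuple(sorted(s)) for s in apriori(tx, 2)))
```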
Fig. 2. Data analysis process.
4.2 Modeling Process
To describe the application process of the Apriori algorithm more clearly, data mining is carried out on the scores of the first five semesters of a major in a college, to demonstrate the modeling process of the algorithm.

1. Data acquisition, preprocessing, and cleaning: first, collect the data and eliminate noise and inconsistent data. For records with missing scores, the average score of the subject is used; for records with multiple scores, the first score is used.
2. Data integration and data filtering: if there are multiple data sources, they must be combined first. In this example, a single data source is used; the scores of each semester only need to be collected into one file (*.xls).
3. Data selection: extract the data related to the analysis task from the database. For this experiment, the credits of subjects and the student IDs and names do not affect the analysis. Our information management major is a comprehensive major, and its courses can be roughly divided into basic courses, management courses, and computer courses; this experiment is mainly aimed at analyzing computer courses, so only the results of such courses are retained in the data source.
4. Data transformation and construction of the 0–1 matrix: transform the data into a form suitable for mining. The Apriori algorithm is a Boolean association rule algorithm, so continuous student scores must be changed into discrete Boolean data (0, 1). Because scoring standards differ among subjects, the conversion method of this experiment is: 1 if the score is greater than the average score of the subject; 0 otherwise.
5. Data mining: use intelligent methods to extract data patterns. The steps are: 1) import the data source (information management score.xls) into the stream; 2) set the type of the data source to discrete, with the direction set to both; 3) import the Apriori model, set the antecedent and consequent to all items, the minimum support to 35%, the minimum confidence to 85%, and the maximum number of antecedents to 3.
6. Model assessment and data revision: according to a certain measure of interest, identify the really interesting patterns. Although some courses appear strongly related, such as the introduction to e-commerce and the principles of computer organization, there is no very close relationship between them: one concerns network applications, the other hardware. The same holds for project management and VB.NET.
7. Reconstruct the knowledge representation: the initial mining results are represented by a mesh graph.
8. Analysis of the results: C programming, discrete mathematics, data structure, and many other courses are related, so they should be basic courses and offered earlier. Java, the introduction to e-commerce, C++, and other courses are less connected; Java and C++ are relatively advanced courses, while the introduction to e-commerce is application-oriented, so they should be offered later. However, the practical basis of the computer course has little connection with other
The Realization of the Apriori Algorithm in University Management
1671
disciplines, which indicates that learning it will not affect the learning of other courses. This may be because, with the popularization of computers, everyone has mastered its content before taking this course. Therefore, it may be considered not to offer this course in the future, to save resources.
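Step 4 above (the 0–1 matrix construction) can be sketched as follows; the course names and scores are made-up illustration data:

```python
# Each course's scores for four students (illustrative data)
scores = {
    "C programming":  [85, 60, 72, 91],
    "data structure": [55, 88, 70, 64],
}

def to_boolean_matrix(scores):
    matrix = {}
    for course, vals in scores.items():
        avg = sum(vals) / len(vals)
        # 1 if the score is greater than the course average, else 0
        matrix[course] = [1 if v > avg else 0 for v in vals]
    return matrix

print(to_boolean_matrix(scores))
# {'C programming': [1, 0, 0, 1], 'data structure': [0, 1, 1, 0]}
```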
5 Conclusions

Through the design and implementation in this chapter, the program is divided into four parts: data collection, data filtering and cleaning, data analysis, and result presentation, and each part is described in detail. At the end of the chapter, the MATLAB code implementation is given. Through the analysis, we know that the improved Apriori algorithm can effectively find the association rules existing in the dataset and the association rules of the largest itemset.

Acknowledgments. This research is a part of the results of the Chinese National Education Science Planning Project (DIA160317) of the Ministry of Education of the PRC.
References
1. Zhang, C., Zhang, S.C.: Association rule mining: models and algorithms. Springer LNCS 2307, 33–39 (2002)
2. Agrawal, R., Shafer, J.C.: Parallel mining of association rules. IEEE Trans. Knowl. Data Eng. 8(6), 962–969 (1996)
3. Agrawal, R., Imielinski, T., Swami, A.: Mining association rules between sets of items in large databases. In: Proceedings of the ACM SIGMOD Conference on Management of Data, pp. 207–216 (1993)
Study on the Export of BP Neural Network Model to China Based on Seasonal Adjustment

Ding Qi(&)

Rizhao Polytechnic, Rizhao 276826, China
[email protected]
Abstract. In this paper, the export volume, the real exchange rate, China's GDP, America's IPI, and their seasonal variables are used as the determinants. Three methods, BP neural network, ARIMA, and AR-GARCH, are used to model and predict the export volume of China to the United States. An error index is selected, and the simulation and prediction results of the three models are compared with the real values. The results show that all three models are satisfactory; although there are some differences in simulation and prediction ability, the ARIMA model has obvious advantages. This paper analyzes the causes of these results and puts forward suggestions for improving China's exports based on the models.

Keywords: BP neural network · ARIMA · AR-GARCH · Export forecast
1 Introduction Export trade is one of the driving forces for China’s rapid economic growth. As the largest export trade partner of China, the United States has a huge impact on China’s economic development. However, influenced by the global economic crisis and the constant appreciation of RMB against the US dollar, China’s export growth began to slow down or even show negative growth. Therefore, it is very important to model the export of China to the United States, find out the influencing factors quantitatively, and predict and take measures to improve the export volume of China.
2 Basic Principles of the Three Models

2.1 The Basic Principle of the BP Neural Network
An artificial neural network is the simulation of a biological neural network system. Its information processing function is determined by the input and output characteristics (activation characteristics) of the network unit and the topological structure (connection mode of neurons). BP network is a multilayer feedforward network of error backpropagation, which is the most representative and widely used network in an artificial neural network. When training a BP network, we should take the same series of input and ideal output as the “sample” of training, and train the network according to a certain algorithm [2, 3]. © The Editor(s) (if applicable) and The Author(s), under exclusive license to Springer Nature Singapore Pte Ltd. 2021 M. Atiquzzaman et al. (Eds.): BDCPS 2020, AISC 1303, pp. 1672–1678, 2021. https://doi.org/10.1007/978-981-33-4572-0_246
When the training is completed, the model can be used to solve similar problems. A BP network needs a training set and a test set to evaluate its training results: the former is used to train the network until the network error meets the specified requirements, and the latter is used to evaluate the trained network's performance. See the empirical analysis for the specific steps.

2.2 The Basic Principle of the ARIMA Model
The ARIMA model, also known as the differenced autoregressive moving average model, is an extension of the ARMA(m, n) model. In ARIMA(m, d, n), AR stands for "autoregression" and m is the number of autoregressive terms; MA stands for "moving average" and n is the number of moving-average terms; d is the number of differences (the order) required to make the series stationary. After d differences, the ARIMA(m, d, n) model can be expressed as an ARMA model:

r_t = c + \sum_{i=1}^{m} \phi_i r_{t-i} + \sum_{j=1}^{n} \theta_j \varepsilon_{t-j} + \varepsilon_t    (1)

2.3 The Basic Principle of the AR-GARCH Model

To describe and predict the volatility clustering of economic time series, Engle proposed the famous autoregressive conditional heteroscedasticity (ARCH) model in 1982. Because the ARCH model is a short-memory process, to better describe financial market phenomena with long memory, Bollerslev generalized the ARCH model in 1986 and added lagged terms of the residual conditional variance, deriving the generalized autoregressive conditional heteroscedasticity (GARCH) model. AR-GARCH adds an autoregressive term to the mean equation of the GARCH model. The model can be expressed as:

r_t = c + \sum_{i=1}^{m} \phi_i r_{t-i} + \varepsilon_t    (2)

\sigma_t^2 = \omega + \sum_{i=1}^{p} \alpha_i \varepsilon_{t-i}^2 + \sum_{j=1}^{q} \beta_j \sigma_{t-j}^2    (3)
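As an illustration of the autoregressive mean part shared by Eqs. (1) and (2), a zero-mean AR(1) coefficient can be estimated by ordinary least squares. This is a sketch under simplifying assumptions (zero mean, one lag), not the paper's estimation procedure:

```python
# OLS estimate of phi in r_t = phi * r_{t-1} + e_t:
# phi = sum(r_{t-1} * r_t) / sum(r_{t-1}^2)
def fit_ar1(r):
    y = r[1:]      # r_t
    x = r[:-1]     # r_{t-1}
    phi = sum(a * b for a, b in zip(x, y)) / sum(a * a for a in x)
    residuals = [yt - phi * xt for xt, yt in zip(x, y)]
    return phi, residuals

series = [1.0, 0.5, 0.25, 0.125, 0.0625]   # exact AR(1) series with phi = 0.5
phi, res = fit_ar1(series)
print(round(phi, 3))  # 0.5
```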
3 Application and Empirical Analysis of the Model

3.1 Selection of General Variables
In the literature on exchange rates and exports, the most commonly used explanatory variables of the export function are: the lagged real export volume; the real exchange rate; the GDP of the exporting country, which measures its export capacity; and the GDP of the importing country or alternative variables (such as the industrial production index, IPI), which measure its import capacity.
D. Qi
In general, the actual export volume is lagged by one period; this paper likewise chooses China's export volume to the United States lagged by one period. The real exchange rate is calculated by the formula: real exchange rate = nominal exchange rate × foreign price / domestic price, where foreign and domestic prices are represented by the consumer price indexes (CPI) of the two countries. The GDP data of China and the United States are in fact only available quarterly, but the sample unit time interval to be predicted in this paper is the month (if the unit interval were the year, the data collection period would be short and the sample size small), so the author tries to replace GDP with monthly IPI data. Since China has had no IPI statistics since 2006, this paper can only average the quarterly data of China's GDP over three months to get monthly GDP data, while the GDP data of the United States is replaced by IPI data.

3.2 Data Source and Data Processing
This paper selects the monthly data from January 2005 to August 2008 as the sample data to model China's exports to the United States. The nominal exchange rate of RMB against the US dollar comes from the SAFE website; the data on China's export trade volume to the United States and China's consumer price index (CPI) come from the Ruisi database; the CPI data and industrial production index (IPI) of the United States come from the websites of the Bureau of Labor Statistics and the Federal Reserve. After adjustment, the base period is January 2005. When calculating the real exchange rate, it is necessary to calculate the ratio of the CPI indexes of China and the United States in each period of the sample. Since the CPI index of the United States is a chain index while the CPI index of China (adopted in this paper) is a year-on-year index, we first need to convert the CPI data of both countries into base indexes with January 2005 as the base.

3.3 Establishment of the BP Neural Network Model for Export Forecast
To establish the neural network model and complete training and learning, three stages are involved: the configuration stage, the training stage, and the output stage.

(1) Configuration stage. Selection of input nodes: six variables need to be considered, and these factors are taken as the input nodes of the BP model. The indicator data should be normalized before input; the normalization formula is X' = (x - x_min) / (x_max - x_min). Selection of the number of hidden layers and hidden-layer nodes: the number of hidden layers is one [3]. The number of hidden nodes is directly related to the numbers of input and output units; the formula is n_1 = \sqrt{n + m} + a, where m is the number of input neurons (6), n is the number of output neurons (1), and a is a constant between 1 and 10. After repeated debugging, the number of hidden-layer nodes is determined to be 10.
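The two formulas above can be sketched as follows (the rounding of n_1 and the value a = 7 are illustrative assumptions):

```python
import math

# Min-max normalization: X' = (x - x_min) / (x_max - x_min)
def min_max(xs):
    lo, hi = min(xs), max(xs)
    return [(x - lo) / (hi - lo) for x in xs]

# Rule-of-thumb hidden-node count: n1 = sqrt(n + m) + a, a in [1, 10]
def hidden_nodes(n_inputs, n_outputs, a):
    return round(math.sqrt(n_inputs + n_outputs) + a)

print(min_max([2.0, 4.0, 6.0]))   # [0.0, 0.5, 1.0]
print(hidden_nodes(6, 1, 7))      # 10, matching the paper's choice
```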
Output node: the variable to be predicted, China's export volume to the United States, also needs to be normalized.

(2) Training stage. In this stage, the training and learning of the samples are completed. The input information first propagates to the hidden-layer nodes and is then transmitted to the output nodes after the sigmoid activation function is applied. The expression of the sigmoid function is y = 1 / (1 + e^{-x/b}), where b changes adaptively according to the sample. The train function of MATLAB and the Levenberg-Marquardt rule are used to train the feedforward network. The mean absolute percentage error (MAPE) is used as the error standard of the test samples. As mentioned before, the BP model established in this paper has 6 input neurons, 10 hidden-layer neurons, and 1 output neuron; in the experiment, the learning step length is 0.06, the number of training iterations is 1000, and the acceptable error standard is e_0 = 0.001.

3.4 I/O Stage
This paper selects 44 months of data from January 2005 to August 2008 as sample data. However, there are only 32 valid samples because there are 12 lagged variables among the decision variables of the model. The export amount and other relevant data in the sample period are used as training samples and test samples respectively (the first 26 periods are training samples, which just meets the sample-size requirement of 3k + 8, where k is the number of explanatory variables, 6; the last six periods are used as test samples). After inputting the samples, the system learns according to the rule of minimizing the sum of squares of the error between the expected and actual outputs and adjusts the weight matrix and the threshold vector. After 20 rounds of learning and training, the error of the model is reduced to the required range, and the system stops learning. By running the MATLAB program 1000 times, the network with the best simulation degree (that is, the smallest error) is obtained as the final neural network model. The prediction results of the test samples are shown in Fig. 1.
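The sigmoid activation and the MAPE error standard used above can be sketched as follows (b = 1 is an illustrative choice; the paper adapts b to the sample):

```python
import math

# Sigmoid activation: y = 1 / (1 + e^(-x/b))
def sigmoid(x, b=1.0):
    return 1.0 / (1.0 + math.exp(-x / b))

# Mean absolute percentage error over actual/predicted pairs
def mape(actual, predicted):
    return sum(abs((a - p) / a) for a, p in zip(actual, predicted)) / len(actual)

print(sigmoid(0.0))                          # 0.5
print(mape([100.0, 200.0], [90.0, 210.0]))   # 0.075
```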
Fig. 1. Comparison between the normalized export predicted value and the actual value in the detection period of the neural network model
Fig. 2. Comparison between the predicted export value and the actual value after reduction in the detection period of the neural network model
It can be seen from Fig. 1 that the predicted result of the model for China's export to the United States is very close to the actual value, and the effect is ideal. From Fig. 2, it can be seen that the average error of the normalized prediction samples is 0.0753, and the average error of the restored predicted export amount is 0.0365; that is to say, theoretically, the difference between the export amount predicted by the neural network method and the real value will not exceed 3.65%. Using this neural network, we predict the export volume in September 2008: the predicted value is 22,967,000 US dollars. Compared with the real export volume of 24,683,579 US dollars, the predicted value is 6.821% less than the real value; the prediction is good, as shown in Fig. 3.
Fig. 3. The normalized prediction value in the detection period of the neural network model and the comparison between the predicted value and the actual value after reduction
3.5 Establishment of the ARIMA Model for Export Forecast
To estimate the ARIMA model, the unit root test (ADF test) is first used to test whether the original export series is stationary; the results show that the export series is stationary after first-order differencing. Using the above model to predict China's export volume to the United States in the sample period, this paper uses the static forecast in EViews. Figure 4 shows the comparison between the predicted value series (red line) and the real value series (blue line) of the export volume. It can be seen that the estimated ARMA model estimates the export value well: the red line is close to the blue line, and the red line lags slightly behind the blue line at the turning points, because the predicted value is obtained from lagged decision variables. In the sample period, the mean absolute error of the model is 5.50314%, which is larger than the 3.65% obtained when the neural network method is used on the test samples. From this index, the prediction result of the neural network method is better than that of the ARIMA model.
Fig. 4. Comparison between the real value and the predicted value of the export volume in the sample period of the ARIMA model
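As a concrete illustration of the error index compared above, the following sketch computes the mean absolute percentage error for two short series; the numbers are illustrative placeholders, not the paper's data.

```python
def mape(actual, predicted):
    """Mean absolute percentage error, in percent."""
    assert len(actual) == len(predicted)
    return 100.0 * sum(abs((a - p) / a) for a, p in zip(actual, predicted)) / len(actual)

# Illustrative monthly export figures (not the paper's data).
actual = [100.0, 110.0, 105.0, 120.0]
predicted = [98.0, 113.0, 101.0, 118.0]

print(round(mape(actual, predicted), 3))  # → 2.551
```

The same index applied to the in-sample fit of each model gives the 3.65% and 5.50% figures the text compares.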
4 Conclusion

In this paper, the export volume, real exchange rate, China's GDP, America's IPI, and their seasonal variables are used as the determinants. Three methods, BP neural network, ARIMA, and AR-GARCH, are used to model China's export volume to the United States, and the next period of data outside the sample interval is predicted. Then the mean absolute percentage error (MAPE) is used as the error index to compare the simulation results in the sample period and the prediction results in the prediction period with the real values. The results show that all three models can simulate and predict China's export volume to the United States well. Among them, the BP neural network model can
D. Qi
better fit the export volume in the detection period, while the ARIMA and AR-GARCH models give similar results and predict the export volume in the prediction period very well. This is contrary to the view in earlier literature that the neural network method is better than the time-series method. The reason is that this paper considers more seasonal factors than the previous literature: the established linear model explains the fluctuation of export volume very well, while the results of the neural network method show greater randomness and contingency.
References
1. Cui, J., Li, X.: Stock price prediction: comparison between GARCH model and BP neural network model. Statistics and Decision 6 (2004)
2. Hao, L., et al.: Research on the artificial neural network model of credit risk analysis of commercial banks. System Engineering Theory and Practice 5 (2001)
3. Wang, Y.: The application of ARIMA model in the prediction of China's export trade. Statistics and Decision 4 (2004)
Big Data Analysis of Tourism Information Under Intelligent Collaborative Management

Li Sheng1(&) and Weidong Liu2

1 Xi Jing University, Xi'an, Shaanxi, China
[email protected]
2 College of Computer Science, Inner Mongolia University, No. 235 Hohhot College Road, Hohhot, Inner Mongolia Autonomous Region, China
Abstract. Given the problems of low accuracy and long time delay in the traditional intelligent collaborative processing of tourism information, this paper proposes an intelligent collaborative method for tourism information based on big data analysis. It analyzes the intelligent collaborative system of tourism information under big data, analyzes the nonlinear time series of tourism information collaboration according to the results of time-series network data analysis, and fits the results with big data. Based on the analysis of the data laws, collaborative tourism characteristics such as tourism features, tourism similarity, and tourism filtering are calculated to improve the timeliness of the system. Experiments show that the intelligent collaborative method of tourism information based on big data analysis has strong anti-interference ability and timeliness, and its strategy for studying and judging the massive information of big data analysis is effective. It can classify tourism information quickly and accurately and extract the tourism products needed by users. The design of the system meets the needs of the tourism system for processing big data analysis information.

Keywords: Big data analysis · Tourism information · Intelligent collaboration · System design
1 Introduction

With the continuous update of network technology, big data analysis has gained more and more recognition, and a great deal of data information has been integrated into the Internet. In the past decade, the Internet has become closely related to people's life and work. People put a lot of time and energy into the Internet, and the information in big data analysis contains a great deal of tourism product information.
2 Construction of Interregional Coordination Mechanism for the Coordinated Development of Regional Tourism

(1) Establish a cross-administrative-region tourism organization and coordination organization and its operation mechanism.

© The Editor(s) (if applicable) and The Author(s), under exclusive license to Springer Nature Singapore Pte Ltd. 2021 M. Atiquzzaman et al. (Eds.): BDCPS 2020, AISC 1303, pp. 1679–1685, 2021. https://doi.org/10.1007/978-981-33-4572-0_247
L. Sheng and W. Liu
In the case of a segmented management system that focuses on local interests, a coordination mechanism with regulatory capacity is especially needed. Effective cooperation among different tourism areas requires a smooth contact mechanism and a reasonable organization and coordination mechanism [1]. It is suggested to establish a functional tourism organization and coordination organization specially responsible for research and planning, overall planning, contact and communication, guidance of implementation, information service, policy and regulation consultation, etc., to promote and guide all-round, multi-level, and high-efficiency comprehensive cooperation in interregional tourism, and to establish an effective operation mechanism for the tourism organization.

(2) Strengthen regional tourism planning and the construction of a tourism planning implementation system under the framework of macro development.

The coordinated development of regional tourism is an operation across administrative regions. For the development direction of the tourism development coordination region and its position and function in the division-of-labor and cooperation system, there must be a clear, guiding, and binding "regulatory" plan. This "regulatory" plan comes from scientific and reasonable regional tourism planning that goes beyond administrative boundaries. The key tourism areas are planned according to the conditions of coordinated development, and the implementation of tourism planning is ensured through specialized agencies.
3 Tourism Information Intelligent Collaborative System Based on Big Data Analysis

Big data analysis is the basic means of a modern big data information processing system. A big data analysis cloud storage and computing system can not only be used for data storage but can also conduct in-depth mining of and research on data. The tourism system based on big data analysis covers the automatic collection, analysis, and service of tourism information, providing accurate tourism support for users, and is constructed on the basis of big data analysis [2, 3]. Big data analysis is an important source of tourism information and the first step in obtaining it. How to analyze and study the information in big data analysis is the basis of the system design. For the intelligent collaborative method of big data tourism information, one of the main problems is to provide accurate tourism data and describe the needed data information through continuous iterative calculation [4–6].

3.1 Time Series Network Data Analysis
Assume that $F: R \to R^m$ is the data-information mapping in big data analysis. Under the given condition F, the information in big data analysis can be iterated and the problem algorithm to be solved can be kept in continuous iteration. Assuming that the nonlinear time series in big data analysis is $\{x(t), t = 1, 2, \ldots, N\}$, the collaborative parameters of the tourism information data can be obtained from the nonlinear coordinate-space reconstruction of the network data:

$$x(n) = [x(n), x(n - t), \ldots, x(n - (m - 1)t)]^{T} \quad (1)$$

where m is the collaborative embedding parameter of the tourism data information and t is the delay time of the network data analysis of tourism information. According to the linear time analysis in big data analysis, there is a smooth mapping of tourism information, as shown in formula (2):

$$X(n + T) = \psi[X(n)] \quad (2)$$
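The delay-coordinate reconstruction of formula (1) can be sketched as follows; the function name, toy series, and parameter values are illustrative assumptions, not taken from the paper.

```python
def delay_embed(series, m, tau):
    """Build delay vectors x(n) = [x(n), x(n - tau), ..., x(n - (m - 1) * tau)]."""
    vectors = []
    for n in range((m - 1) * tau, len(series)):
        vectors.append([series[n - i * tau] for i in range(m)])
    return vectors

# Toy series x(t) = t, with embedding dimension m = 3 and delay tau = 2.
emb = delay_embed(list(range(10)), m=3, tau=2)
print(emb[0])  # → [4, 2, 0]
```

Each reconstructed vector is a point in the m-dimensional coordinate space on which the smooth mapping of formula (2) acts.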
According to the different ways of synthesizing the various kinds of tourism information needed in big data analysis, the collaborative algorithm for studying and judging the nonlinear time series of tourism information can also predict the overall tourism information support differently [3].

3.2 Cooperative Analysis of a Large Tourism System with Nonlinear Time Series
For the big data of big data analysis and the nonlinear time-series calculation and analysis of tourism collaboration, all the big data must be fitted to analyze the laws of the data needed by tourism and to find the mapping laws of the network data. The characteristics of collaborative tourism are analyzed according to these data laws and calculated from the given tourism information, namely:

$$\sum_{t=0}^{N} \left[ X(t + 1) - \psi(X(t)) \right]^{2} \quad (3)$$
In big data analysis, the big data are coordinated in the form of tourism information, and the minimum value of the calculated time is $\psi: R \to R^m$. The coordinate value vector in the big data analysis is denoted $c_{fr}$: the center point of the f-th cluster of the tourism information coordination center in the g-th dimensional analysis attribute. $G_d$ represents the quadratic error sum of the tourism information data clusters. The optimal relationship between the tourism information collaborative objective function and the big data analysis design change vector is as follows:

$$\min G_d = \sum_{g=1}^{d} \sum_{f=1}^{m_g} \left( a_f^{(g)} - c_{fr} \right)^{2} \quad (4)$$

When all data and information objects in big data analysis have the attributes of tourism information data, each tourism feature collaboration center in the network data and information cluster is represented by the formula:

$$Z_g = \frac{\sum_{d_g \in r_g} d_g}{R_g}, \quad g \in [1, f] \quad (5)$$

where $R_g$ represents the number of tourism feature collaboration data of the g-th data information cluster in big data analysis. The tourism feature collaboration center repeatedly recalculates from the information data in big data analysis until the tourism collaboration no longer changes, at which point this tourism algorithm ends. When there
are fewer types of data and information in big data analysis, the higher tourism information data collaboration mode is used to analyze and judge all the network data and information globally. When there are many kinds of data and information in big data analysis, the calculation amount of this analysis-and-judgment method is huge. To facilitate tourism search and speed up the operation of the tourism information intelligent collaborative method, formula (6) is used for regression nonlinear calculation [4]:

$$x(t + 1) = \sum_{i=0}^{m-1} a_i x(t - i) = A \quad (6)$$
In the formula, A represents the unknown tourism category part of the tourism feature collaboration in the nonlinear calculation of data information in big data analysis. The coefficients $a_i$ (i = 0, 1, …, m − 1) can be obtained by analyzing the information sequence in big data analysis with nonlinear time, so that the data information of big data analysis takes the shortest time in part A of the tourism feature collaboration process, and the nonlinear time series of the data information study attains the smallest tourism feature error:

$$Q_d = \sum_{t=m}^{N} \left[ x(t) - \sum_{i} a_i x(t - i - 1) \right]^{2} \quad (7)$$
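A one-lag least-squares fit minimizing an error of the form (7), followed by a one-step prediction in the spirit of formula (9), can be sketched as follows; the helper names and the toy series are illustrative assumptions, not the paper's implementation.

```python
def fit_ar1(series):
    """Least-squares estimate of a in x(t+1) ≈ a * x(t),
    minimizing sum_t [x(t+1) - a * x(t)]^2 (a one-lag case of Eq. (7))."""
    num = sum(series[t + 1] * series[t] for t in range(len(series) - 1))
    den = sum(series[t] ** 2 for t in range(len(series) - 1))
    return num / den

def predict_next(series, a):
    """One-step-ahead prediction x(N + T) = a * x(N), Eq. (9) with m = 1 and no intercept."""
    return a * series[-1]

# Toy series following x(t+1) = 0.5 * x(t): the fit recovers a = 0.5 exactly.
xs = [16.0, 8.0, 4.0, 2.0, 1.0]
a = fit_ar1(xs)
print(a, predict_next(xs, a))  # → 0.5 0.5
```

With more lags, the same idea leads to the system of equations (8), whose solution gives the coefficient vector A(N) used below.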
To solve for the tourism information sequence in big data analysis, we can write:

$$\sum_{i=0}^{m-1} a_i \sum x(t - i) x(t - j) = \sum_{t=m}^{N} \left[ x(t) - \sum a_j x(t - j) \right]^{2}, \quad j = 0, \ldots, m - 1 \quad (8)$$
The analysis of nonlinear time series can produce different results for large-scale tourism feature collaboration. Tourism research collaboration is the simplest form of tourism information collaboration, with a tourism support degree close to 100%. The further information nonlinear time series of big data analysis is expressed as:

$$x(N + T) = a + \sum_{i=1}^{m} a_i x(N - (i - 1)t) = a + A(N)X(N) \quad (9)$$
where $A(N) = [a_1, a_2, \ldots, a_m]$. The big data analysis information is obtained by simplifying the nonlinear coefficient:

$$F(N) = \sum_{i=1}^{k} \left| x(N + T) - a - A(N)X(N) \right|^{2} \quad (10)$$
4 The Realization of Collaborative Processing of Tourism Information

The coordinated development of regional tourism discussed in this paper refers to a mode of regional tourism development in which each tourism area unit (subsystem) in the region cooperates and coexists, forms an efficient and highly ordered whole, and realizes the "integrated" operation of each tourism area unit in the region. The tourism regional system of coordinated development has a unified joint and cooperative tourism development goal and tourism plan; there is a high degree of coordination and integration between tourism regions, forming a unified regional tourism market, an optimized integration of tourism resources, an optimized combination of tourism products, and a rigorous and efficient organization, coordination, and operation mechanism. The internal tourism regions are equal and open to each other and, at the same time, open to the outside, so that the coordinated development of the regional tourism system forms a coordinated and unified tourism system, which is conducive not only to the development of the internal tourism area subsystems but also to docking and interaction with external tourism area systems.

The collaborative processing of tourism information based on big data analysis uses the MySQL database to organize, analyze, and calculate all the information in big data analysis. The system keeps tourism products timely, effective, and fresh by querying, analyzing, studying, and judging the information in big data analysis. The tourism system processes the information in all kinds of big data analysis and finally forms effective tourism products to show to users.
Therefore, the information in big data analysis must be collected, preprocessed, classified, studied, and judged in multiple links. The tourism information intelligent collaboration method is adopted to collect the big data analysis information, summarize all the collected data, transmit it to the collection module of the tourism system, and filter it. Data items larger than 100 MB are treated as video data, while files smaller than 1 KB can be deleted directly to reduce the computation time of the system. By setting a blacklist, the user information data of a specific IP segment can be filtered out directly as garbage information, further reducing computation time. Feature codes can be set for the various tourism features to classify the processed data and information directly.
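The filtering rules just described can be sketched as follows; the thresholds, field names, and blacklist entries are illustrative assumptions, not the paper's actual implementation.

```python
BLACKLISTED_PREFIXES = ("10.0.13.",)  # example IP segment treated as garbage

def classify_item(item):
    """Route a collected data item according to the filtering rules:
    drop tiny files, discard blacklisted IPs, flag large items as video."""
    if item["size_bytes"] < 1024:                      # smaller than 1 KB: delete
        return "deleted"
    if item["source_ip"].startswith(BLACKLISTED_PREFIXES):
        return "garbage"                               # blacklisted IP segment
    if item["size_bytes"] > 100 * 1024 * 1024:         # larger than 100 MB: video
        return "video"
    return "tourism_candidate"                         # kept for feature coding

items = [
    {"source_ip": "10.0.13.7",   "size_bytes": 5000},
    {"source_ip": "192.168.1.2", "size_bytes": 200},
    {"source_ip": "192.168.1.3", "size_bytes": 300 * 1024 * 1024},
    {"source_ip": "192.168.1.4", "size_bytes": 4096},
]
print([classify_item(i) for i in items])
# → ['garbage', 'deleted', 'video', 'tourism_candidate']
```

Only the remaining candidates would then be classified by feature code and passed on as tourism products.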
5 Analysis of Experimental Results

To verify the effectiveness of the proposed intelligent collaborative method for tourism information under big data analysis, this experiment evaluates the advantages of the proposed method by testing the accuracy and time delay of its information collaborative processing, and carries out the information collaborative processing delay experiment seven times under the same preconditions. The experimental results are shown in Fig. 1. It can be seen from the experimental results that the proposed method has a lower delay. In the repeated experiments, the delay of traditional methods is
Fig. 1. Comparison of information cooperative delay of different methods
about 80 ms to 100 ms, which cannot meet current information processing requirements, while the delay of the proposed method is 15 ms to 40 ms. This greatly improves the efficiency of the tourism information intelligent collaborative process and shows the effectiveness of the proposed method (Fig. 2).
Fig. 2. Error graph of tourism information intelligent collaboration under different methods
From the above experimental results, it can be seen that the proposed method has a lower error in the process of tourism information intelligent collaboration, which further proves its practicability.
6 Conclusions

Under the background of global economic integration and regional economic integration, cooperation has become the trend of world development, and regional tourism cooperation has shown a rapid development trend. The coordinated development of regional tourism is an advanced form of regional tourism cooperation, which conforms to the direction of regional tourism development and has a profound theoretical basis. It is of great significance to study the mode and path of the coordinated development of regional tourism. In future research, the theoretical basis of the coordinated development of regional tourism should be clarified, and the mechanism of the coordinated development of regional tourism should be established.
References 1. Xie, R., Shaohua: Collaborative information services in the tourism network information ecosystem. Modern Intell. 36(11), 71–75 (2016) 2. Wu, X., Chen, C., Liu, X., et al.: Integrated beacon indoor positioning cultural tourism virtual navigation system. Comput. Eng. 42(10), 6–11 (2016) 3. Xu, F., Li, S., Qi, X.: Reconstruction of the tourism system model in the context of big data. Tourism Sci. 30(1), 48–59 (2016) 4. Ji, P., Li, S., Lu, S., et al.: Personalized tourism route customization system based on Semantic Web. Comput. Eng. 42(10), 308–317 (2016) 5. Lu, G., Huang, X., Lu, S., et al.: The recommendation of multi-objective tourism routes based on Internet information. Comput. Eng. Sci. 38(1), 163–170 (2016) 6. Peng, X., Xie, Y., Dang, S.: Construction of spatial information service workflow for tourism planning. Surv. Mapping Sci. 41(12), 124–129 (2016)
Construction of Intelligent Management Platform for Scientific Research Instruments and Equipment Based on the Internet of Things

Dong An(&), Zhengping Gao, Xiaohui Yang, Yang Guo, and Yong Guan

Inner Mongolia Electric Power Science & Research Institute, Hohhot, China
[email protected], [email protected], [email protected], [email protected], [email protected]
Abstract. With the rapid development of scientific research capacity building and the state's increasing investment in scientific research infrastructure, the fixed-asset management mode based on financial management is far from suitable: faced with huge volumes of instrument and equipment asset information, function information, use records, and other data, and lacking a unified and timely instrument information sharing platform, it can no longer meet reality. This paper proposes an intelligent management platform for scientific research instruments and equipment based on Internet of Things technology to improve the degree of sharing and resource utilization efficiency of instruments and equipment within the company, effectively improve the fine management level of instruments and equipment, and better serve scientific research management.

Keywords: Internet of things · Scientific research · Instruments and equipment · Management
1 Introduction

Instruments and equipment are the necessary technical conditions in scientific research institutions. With the rapid development of scientific research capacity building and the state's increasing investment in scientific research infrastructure, the research institute has more and more instruments and equipment, which are increasingly sophisticated, comprehensive, and complex. At present, instruments and equipment are operated and maintained by each user department and do not receive good maintenance management. Moreover, management responsibility is not implemented together with the right of use, with the result that users have neither pressure nor incentive to maintain the instruments and equipment and improve their utilization rate. Although a certain proportion of operation and maintenance costs is included in the purchase funds, in most cases there is almost no, or not enough, follow-up operation and maintenance funding in actual use, resulting in the acquisition of instruments and

© The Editor(s) (if applicable) and The Author(s), under exclusive license to Springer Nature Singapore Pte Ltd. 2021 M. Atiquzzaman et al. (Eds.): BDCPS 2020, AISC 1303, pp. 1686–1690, 2021. https://doi.org/10.1007/978-981-33-4572-0_248
equipment that cannot be used well and effectively or receive good operation and maintenance [1, 2]. Combined with Internet of Things technology, this paper proposes to establish an intelligent management platform for scientific research instruments and equipment using networked sensors and network technology, to solve the problems of "who is using", "how is it used", and "where is it" in the management of instruments and equipment, and to monitor health status, usage status, and energy consumption, truly realizing the intelligence proposed by the intelligent institute. At the same time, a comprehensive resource-sharing laboratory and management mode for instruments and equipment is established, improving the degree of sharing and resource utilization efficiency of instruments and equipment within the company, effectively improving the fine management level of instruments and equipment, and better serving scientific research management.
2 Analysis of the Current Situation of Scientific Research Instruments and Equipment Management

For scientific research instrument and equipment management, faced with huge volumes of instrument and equipment asset information, function information, use records, and other data, and lacking a unified and timely instrument information sharing platform, the fixed-asset management mode based on financial management is far from suitable and cannot meet the current situation. This directly affects the response efficiency to, and the utilization rate of, the user needs of the Institute's departments; it greatly increases the difficulty of equipment inventory tracking, coordinated use, maintenance, and repair management; and it easily leads to repeated applications for asset procurement by departments, increasing procurement costs. Scientific instruments and equipment are an important indicator of technology development ability and scientific research strength, but how to give full play to the utilization rate and benefit of resources has always been an important index of equipment management.
3 Research on an Intelligent Management Platform for Scientific Research Instruments and Equipment

The Internet of Things (IoT) refers to the ubiquitous connection between things, or between things and people, through various information sensing technologies, identification technologies, positioning systems, and other equipment and technologies, collecting any required data information and then providing network access. Through this kind of connection, intelligent management of human-to-equipment or equipment-to-equipment interaction can be realized. IoT technology differs from traditional Internet technology: on top of Internet technology, it adds the collection and perception of information about things, so that things can establish connections with each other through the network, which greatly changes the management of things and facilitates people's lives.
D. An et al.
To realize the platform, it is first necessary to analyze the existing technology combining networked sensors and network technology as applied to the management of instruments and equipment, and to combine it with the management requirements of the power instruments and equipment of the actual departments; to effectively sort out the whole process and key processes of using different instruments between different departments, from application for procurement to instrument cleaning and scrapping; and to optimize the traditional instrument and equipment data storage model, so as to support the management of Internet of Things equipment and realize a digital model analysis of that management. Then, an indoor positioning system based on sensor technology is constructed to realize location and state perception of the instruments and equipment, combined with relevant optimized positioning algorithms and regional data transmission processing algorithms to reduce the noise of the data.

The specific implementation of the platform consists of three parts: the device end, the transmission network, and the remote management platform. The device end consists of the scientific research equipment, the instrument label, the probe, and the companion. The label contains all the digital information of the instrument; the probe detects the use of the equipment and completes status monitoring; the companion locates the equipment and realizes current monitoring of active equipment. The transmission network adopts a combination of a power line network and Ethernet. The advantage of the power line network is that it requires no wiring and is plug-and-play; the safety and reliability of data transmission are fully considered in the Ethernet data transmission. The remote management platform is the intelligent management platform for scientific research instruments and equipment.
Its function is mainly to realize the data processing and simulation of the state information level system. The key technologies needed for implementation will be expanded in the following chapters.
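The device-end information described above (label, probe, and companion readings) might be represented by a record like the following sketch; all field names are illustrative assumptions, not the platform's actual schema.

```python
from dataclasses import dataclass, asdict

@dataclass
class DeviceRecord:
    """One instrument's state as reported by its label, probe, and companion."""
    asset_id: str        # digital identity stored in the instrument label
    in_use: bool         # usage status detected by the probe
    location: str        # position reported by the companion tag
    power_draw_w: float  # energy-consumption reading

record = DeviceRecord(asset_id="INS-0001", in_use=True,
                      location="Lab-3", power_draw_w=42.5)
print(asdict(record)["location"])  # → Lab-3
```

A record of this shape could be serialized over the power-line/Ethernet transmission network and stored by the remote management platform.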
4 Key Technologies

4.1 Research on a Digital Model of Instrument and Equipment Management Based on Internet of Things Technology
Each department has more and more instruments and equipment, and the equipment is becoming increasingly precise, comprehensive, and complex. The asset information, function information, use records, and other stored information are also growing rapidly. To realize full life-cycle management of instruments and equipment, it is necessary, on the one hand, to analyze the links and key processes of instrument and equipment management in combination with Internet of Things technology, and on the other hand, to further optimize the storage information model. Based on Internet of Things technology, the system supports the high-level requirements of equipment positioning, utilization status, and real-time sharing, complements the shortcomings of the traditional equipment data model, analyzes and models the functions of the equipment in text, and establishes different
combination modes of testing theme and equipment, so as to recommend the best combination of testing and idle equipment for the best effect. On this basis, the digital model of instrument management based on Internet of Things technology is established.

4.2 Research on Regional Management of an Indoor Positioning System for Instruments and Equipment
Due to the separation of the production management system and the instrument equipment, administrators cannot accurately and promptly determine the specific position and usage of a lent instrument while it is in use; moreover, some passive cables and accessories often cannot be found. At present, the administrator can only check passive cables and accessories during routine laboratory inspections, and borrowing and management personnel can only search for them laboratory by laboratory. The efficiency is low, and instruments often cannot be found, which greatly affects instrument and meter management. We can use Bluetooth technology, virtual device technology, and a maximum likelihood algorithm to obtain device location information more accurately and reduce the location drift caused by multipath, path loss, the propagation model, and other factors. Combined with multi-signal particle filtering, differential denoising, and other algorithms, the location fluctuation caused by environmental interference is effectively suppressed.

4.3 Research on the Management Mode of Instrument Resource Sharing
Realize the establishment of the laboratory personnel information database and control of the authority of relevant personnel, enhancing the safety guarantee of the laboratory; enable laboratory personnel to conveniently and quickly query information about the laboratory and instruments; establish the large-scale instrument and equipment information database to realize the sharing of instrument resources; realize the networking and informatization of instrument reservation, so that only users who have reserved successfully can enter the laboratory to use the instrument, building the "iron bastions" of the laboratory; realize the connection between instrument reservation and funds, prompting each user to cherish instrument time, improving instrument use efficiency, and promoting fairness of reservation; and realize real-time recording and query of access control and instrument use, with statistical analysis of instrument use according to conditions such as personnel, time, and instrument.

4.4 Research on the Application of an Intelligent Instrument and Equipment Management System
Based on the research on key process analysis and the indoor positioning system for instrument and equipment management, the data transmitted from the instrument and equipment indoor system module to the server are effectively stored and analyzed, real-time response services for the real-time collection by the instrument and equipment sensor modules are realized, and the application related to instrument and equipment
management is further constructed. In the aspects of inventory listing, utilization-status analysis, general positioning, and exploration and inquiry of instruments and equipment, a visualization system is constructed, and the research results are transformed into actual management and application.
5 Conclusion

Through the implementation of the system, the management of scientific research instruments and equipment is systematized, and the overall situation of the devices is clear at a glance, which is also an important link in information construction and transformation. The utilization rate of equipment is improved, real-time perception of the state of instrument use is realized, and the asset structure is optimized. The cost of equipment management is reduced: combined with the Internet of Things, cloud computing, and other advanced technologies, and as the main auxiliary tool of instrument management, this platform can greatly save human and financial resources. The utilization rate of funds is optimized: by sorting out and optimizing the instrument and equipment management work, resources can be used more efficiently; at the same time, it provides an important basis for future procurement decision-making and reduces the rate of repeated purchases.
References
1. Li, C., Xu, H.: Design and implementation of instrument management system. Technical Supervision in Water Resources 23(1), 15–18 (2015)
2. Qu, D., Liao, Y., Zhang, H.: Application of the Internet of Things in universities instruments management. Res. Explor. Lab. 35, 302 (2016)
The Remote Automatic Value Transfer System of Intelligent Electrical Energy Meter Verification Device

Lifang Zhang(&), Jiang Zhang, Luwei Bai, Zhao Jin, and Qi Zhang

Inner Mongolia Electric Power Science and Research Institute, Hohhot, China
[email protected], [email protected], [email protected], [email protected], [email protected]
Abstract. Because the verification device of the intelligent electrical energy meter requires regular on-site verification under national verification regulations, we propose a remote automatic value transfer technology for the verification device of the intelligent electrical energy meter, to establish and improve the intelligent value transfer traceability system and data analysis means, reduce labor costs, and effectively improve the working efficiency and the management level of the verification device.

Keywords: Intelligent energy meter · Verification device · Remote value transfer
1 Introduction
The intelligent electricity meter verification device is responsible, within the power company system, for verification on customer application, acceptance, full performance tests, sporadic verification, and verification of some gateway energy meters. According to the requirements of the national legal metrological verification institution and the national metrological verification regulation JJG597-2005, "Verification Equipment for AC Electrical Energy Meters", these devices must undergo compulsory verification by a superior verification institution at least once every two years. The main problems are as follows:
1. Because each metrological verification institution purchases new standard devices and schedules verification according to its own development needs, the superior verification institution must invest a great deal of time and manpower every year to visit each laboratory and verify its electrical energy meter verification devices, with poor efficiency and economy.
2. Verification of standard devices still relies on traditional manual on-site methods; inconsistencies among verification scheme, verification operation, and verification report make it difficult to effectively guarantee the quality of verification work and the safety of personnel and equipment [1, 2].
3. The verification data does not form a clear and reliable retrieval and preservation system, which is incompatible with the increased management requirements placed on legally authorized verification institutions by the government regulatory department.
4. The verification historical data is "isolated
© The Editor(s) (if applicable) and The Author(s), under exclusive license to Springer Nature Singapore Pte Ltd. 2021 M. Atiquzzaman et al. (Eds.): BDCPS 2020, AISC 1303, pp. 1691–1697, 2021. https://doi.org/10.1007/978-981-33-4572-0_249
and scattered", forming "data islands"; it is difficult for technical personnel to mine and apply the standard device error data and the various electrical measurement parameters, resulting in a waste of data resources and a failure to effectively monitor the health status of the standard devices. In this paper, a remote automatic value transfer technology for intelligent electrical energy meter verification devices is put forward to establish and improve an intelligent value transfer traceability system and data analysis means, to ensure effective control of the verification quality and equipment health of the standard devices within the power company system, and to ensure the normal operation of the measurement system and the accurate, reliable transmission of measurement values.
2 The Principle of Remote Automatic Value Transfer System of Intelligent Electrical Energy Meter Verification Device
Fig. 1. Schematic diagram of remote automatic value transfer system of intelligent electrical energy meter verification device
As shown in Fig. 1, the anti-vibration three-phase standard energy meter, serving as the standard device in the verification process, is packed in the GPS positioning flow intelligent meter box and transported to the working place of the inspected
energy meter verification device in the form of logistics. The meter box is opened with a dynamic password once an authorized place is confirmed, and is automatically locked at every unauthorized place. The multifunctional standard device controller is installed on site, between the tested electrical energy meter verification device and its control host. With built-in communication protocols, it can handshake with the energy meter verification device of each laboratory for communication and control, offering takeover, assistance, and silence modes to facilitate management and operation by the superior verification institution. The remote control and cloud operation platform establishes a real-time connection with the standard device and the tested electric energy meter verification device through the power company's intranet or public 3G/4G and future 5G wireless networks to realize remote operation and verification. All data are uploaded to the cloud. The local end can access real-time and historical data through the encrypted interface at any time, and can download, view, or edit them according to its scope of authority. The system frees people from on-site work to the greatest extent: operators only need to complete their professional technical operations, while all other transmission and analysis work is completed automatically, which greatly improves work efficiency.
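The paper does not spell out the comparison step of the value transfer itself. A minimal sketch of how a remotely collected reading of the tested device might be checked against the anti-vibration standard meter could look like the following; all names and the simplified accuracy-class rule are illustrative assumptions, not the system's actual implementation.

```python
# Hypothetical sketch: compare the energy registered by the tested
# verification device against the standard meter reading, taking the
# standard as the true value, then check the simplified class limit.

def relative_error_percent(tested_energy_kwh: float, standard_energy_kwh: float) -> float:
    """Relative error of the tested device, in percent."""
    if standard_energy_kwh == 0:
        raise ValueError("standard reading must be non-zero")
    return (tested_energy_kwh - standard_energy_kwh) / standard_energy_kwh * 100.0

def within_class(error_percent: float, accuracy_class: float) -> bool:
    """A device of accuracy class c may deviate by at most +/- c percent
    at the measured load point (simplified rule for illustration)."""
    return abs(error_percent) <= accuracy_class

err = relative_error_percent(100.02, 100.00)   # tested vs. standard, kWh
print(round(err, 3), within_class(err, 0.05))  # 0.02 True
```

In the real system this check would run on the platform side against data streamed through the multifunctional standard device controller; the error limits would come from the applicable regulation rather than a single scalar.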
3 Composition of Remote Automatic Value Transfer System of Intelligent Electrical Energy Meter Verification Device
3.1 Remote Control and Cloud Operation Platform
Based on the company's existing intranet and 3G/4G mobile communication network technology, a remote control and cloud operation platform is set up. According to the requirements of JJG597, database systems, sharing systems, and remote control software are developed. The remote control and cloud operation platform is deployed at the superior verification institution. It must adapt to the communication protocols of different controlled equipment, so the equipment control interface of the software system must be reproducible, open, and loosely coupled. The platform is mainly composed of the following four parts.
3.1.1 Cloud Database System
The data center of the system automatically stores the verification processes and results prepared according to JJG597 in the cloud database.
3.1.2 Visual Assistant System
It establishes a connection with the positioning modules of the GPS positioning flow intelligent meter box and the anti-vibration three-phase standard energy meter, displays the position of the standard in real time, marks the authorized locations, and generates the dynamic unpacking password.
3.1.3 Data Analysis System
It can obtain all current and historical test data of the tested watt-hour meter verification devices from the cloud database system and perform data comparison and analysis. On completion, it can generate a verification certificate with one click.
3.1.4 Verification Software System
It connects the standard device and the multifunctional standard device controller remotely, calls the verification process in the cloud database system to carry out the verification work, and transfers the returned verification result data to the cloud database system.
3.2 Multifunctional Standard Device Controller
The controller of the multifunctional standard device is the key equipment of the remote automatic value transfer system. Its main functions are as follows: a) Carry out integrated control and hardware verification for the verification devices of single- and three-phase electrical energy meters. A control program is developed for the communication protocol of each type of verification device; based on basic network and serial port communication hardware, platform control instructions are forwarded to operate the single- and three-phase energy meter verification devices provided by various manufacturers. b) Ensure that the system can monitor the verification process and collect the verification data and fault information. The controller can be set into three different modes according to requirements: a) Takeover mode: this mode is used when the staff of the superior verification institution conducts verification. The staff sends instructions to the multifunctional standard device controller through the remote control and cloud operation platform. The controller completely takes over the tested device and the anti-vibration three-phase standard electrical energy meter (the standard device) and controls the tested device's output according to the preset process. The error data of the tested device obtained by the standard device are also sent to the remote control and cloud operation platform in real time through the controller. The staff can obtain all the data in real time through a local computer, just as if they were at the inspection site. All data are also recorded on the remote control and cloud operation platform in real time, so that the original data record and the verification certificate can be generated directly from the template after verification. (In this mode, the control host of the tested device is completely in a viewing state and cannot perform any operation.) b) Assistance mode: when verification personnel carrying out daily verification of electrical energy meters cannot judge an abnormal state and need guidance and troubleshooting from the professionals of the superior verification institution, this mode is used (the anti-vibration three-phase standard electrical energy meter is not needed in this mode). The host
configured for the verification device has full control authority, and the professionals of the superior verification institution can obtain viewing rights through the remote control and cloud operation platform and take over partially when necessary, which is convenient for debugging and problem solving. c) Silent mode: the controller automatically runs the corresponding program, periodically records and uploads the status, parameters, and daily verification data of the verification device to the remote control and cloud operation platform, and judges whether the current device status is normal according to the set threshold conditions. If it is abnormal, the controller pushes the corresponding information to the professionals of the superior verification institution, who decide whether to intervene. In this mode, the host configured for the device has full control authority.
3.3 GPS Positioning Flow Intelligent Meter Box
The anti-vibration three-phase standard electrical energy meter and its transport box are equipped with GPS positioning modules. The meter box is powered by a special hidden battery pack and sends out a GPS positioning signal throughout the whole flow process; the standard meter automatically sends out GPS positioning signals after power-on. The visual assistant system in the remote control and cloud operation platform calibrates the authorized place at each workplace, receives the GPS positioning signal, and generates a dynamic unlocking password. The box can be opened with the dynamic password at an authorized place and is automatically locked at unauthorized places. Forced opening of the box makes the system automatically send out warning information, which improves the safety of the equipment and the control of the whole verification process.
3.4 Anti-vibration Three-Phase Standard Energy Meter
As the standard device of the whole verification process, to ensure universal applicability, its anti-shock and anti-vibration level is improved to the maximum extent while its 0.01-level accuracy is guaranteed, so that it remains safe and reliable in the harsh environment of existing logistics transportation. During data transmission, it is connected with the multifunctional standard device controller and controlled by the remote operator. It differs from a conventional standard energy meter in its high accuracy, high reliability, high stability, high vibration resistance, and high intelligence.
3.5 On-Site Intelligent Connection Module
At the site, one end of the intelligent connection module is made into a corresponding integrated composite plug matching the wiring panel of the anti-vibration three-phase standard energy meter, and the other end is made into a standardized terminal block matching the type and specification of the energy meter, which can be inserted directly into the meter frame of the energy meter verification device
according to the form of the inspected energy meter. The terminals on both sides can therefore be connected quickly and accurately, eliminating the risk of wrong connections, ensuring the safety of equipment and personnel to the maximum extent, improving working efficiency, and further promoting the creation of an applicable standard for an overall plug-in interface for measurement and test.
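The dynamic-password workflow of Sect. 3.3 can be sketched as follows. The HMAC-based, time-windowed password derivation, the 100 m geofence radius, the 10-minute validity window, and all function names are illustrative assumptions for this sketch, not the system's actual protocol.

```python
import hashlib
import hmac
import math
import struct
import time

def haversine_m(lat1, lon1, lat2, lon2):
    """Great-circle distance between two GPS fixes, in metres."""
    r = 6371000.0
    p1, p2 = math.radians(lat1), math.radians(lat2)
    dp, dl = math.radians(lat2 - lat1), math.radians(lon2 - lon1)
    a = math.sin(dp / 2) ** 2 + math.cos(p1) * math.cos(p2) * math.sin(dl / 2) ** 2
    return 2 * r * math.asin(math.sqrt(a))

def dynamic_password(secret: bytes, t: float, step: int = 600) -> str:
    """6-digit password valid for one `step`-second window (TOTP-style)."""
    counter = struct.pack(">Q", int(t // step))
    digest = hmac.new(secret, counter, hashlib.sha1).digest()
    offset = digest[-1] & 0x0F                     # RFC 4226-style truncation
    code = struct.unpack(">I", digest[offset:offset + 4])[0] & 0x7FFFFFFF
    return f"{code % 1_000_000:06d}"

def issue_password(secret, fix, authorized, radius_m=100.0, now=None):
    """Return a password only when `fix` lies inside an authorized place."""
    now = time.time() if now is None else now
    if any(haversine_m(*fix, *p) <= radius_m for p in authorized):
        return dynamic_password(secret, now)
    return None  # box stays locked at unauthorized places
```

Under these assumptions, requesting a password with the laboratory's own coordinates yields a short-lived code, while any fix outside the geofence returns nothing and the box remains locked; a forced opening would then be the only (alarmed) way in.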
4 The Significance of Remote Automatic Value Transfer System of Intelligent Electrical Energy Meter Verification Device
The remote automatic verification technology establishes and improves the intelligent value traceability system and data analysis means to ensure effective control of the verification quality and equipment health of in-service verification devices, and to ensure the normal operation of the measurement system and the accurate, reliable transmission of values. It can directly reduce the labor cost of verification, reduce the work intensity of verification staff, improve work efficiency, and effectively improve the technical management level of intelligent energy meter verification devices [2]. a) Integrated control. The control and management of verification devices no longer need to be adjusted to each manufacturer's equipment; instructions are simply sent to the multifunctional standard device controller, which docks with the different verification devices, and this is conducive to formulating a unified device control protocol. b) Consistent management. After-sales management of verification devices relies on the multifunctional standard device controller, which can be hardware-independent, real-time, and online. The hardware of different manufacturers and all test data can easily be brought into the whole working system and stored in the cloud server in a unified format, which facilitates statistical management and provides a basis for later development and application. c) One-network expansion. Function expansion is convenient: all pressure testing platforms, terminal testing platforms, and DC platforms can be brought into the management system to form a unified, centrally managed network. d) One-stop monitoring.
Real-time monitoring of the working state of the devices, together with the statistics, query, and analysis of device error data, realizes centralized management and one-stop monitoring of the standard devices and forms closed-loop management of the metrological standard devices.
5 Conclusion
The establishment of a standardized, scientific, and unified management mode for intelligent electrical energy meter verification devices can guarantee the quality of verification work to the greatest extent and eliminate measurement errors caused by abnormalities of the verification device. The development and application of an on-site
intelligent connection module can eliminate potential personnel and equipment safety hazards caused by on-site operation errors, and the traceability of historical data and operation records provides a basis for reducing metrological disputes. The whole system provides technical support for further winning the support and trust of the government metrological management department and of power customers.
References
1. Lai, L., Tan, R.: Analysis of the current situation and prospect of the verification of electric energy meters. Ind. Meas. S1, 35–36 (2012)
2. He, H.: The research and application of the field automatic verification system of the electric energy meter verification device. Sci. Wealth 26, 72 (2017)
Application of Weighted Fuzzy Clustering Algorithm in Urban Economics Development Xi Wang Qianhai Institute for Innovative Research, Shenzhen University, Shenzhen 518000, China [email protected]
Abstract. Comprehensive economic strength reflects the economic development level of a city. In this paper, a weighted fuzzy clustering algorithm is used to evaluate the comprehensive economic strength of cities, and the calculation of index weights is improved to avoid the problem that different attributes contribute equally to the classification. Combining the weights with the fuzzy clustering algorithm, the scheme is applied to evaluate the comprehensive economic strength of each city in a province. The results show that the weighted fuzzy clustering algorithm is effective for evaluating the comprehensive economic strength of a city.
Keywords: Weight · Fuzzy cluster analysis · Urban economic development
1 Introduction
The comprehensive economic strength of a city refers to all the strength and potential of the city, as well as its economic status and influence at home and abroad. Evaluating a city's comprehensive economic strength helps the government formulate reasonable economic development policies [1]. It is therefore very important to build a scientific evaluation model and accurately evaluate the comprehensive economic strength of a city [2]. Clustering analysis is an important tool of data mining, widely used in pattern recognition, image processing, logistics distribution, and financial competitiveness evaluation. In traditional clustering analysis, each data point is assigned crisply to a certain class: individuals of the same class are similar, while individuals of different classes differ. In real life, however, the boundary between many things is not clear, and membership is not an either-or relationship. After Professor L. A. Zadeh put forward the concept of a fuzzy set in 1965, the method of fuzzy cluster analysis came into being and was applied in various fields. Scholars have studied the evaluation of urban comprehensive economic strength and carried out empirical analyses: Wang Jiangang and Yu Yingchuan applied principal component analysis to analyze the comprehensive economic strength of cities; Mei Yan and others applied TOPSIS to evaluate the comprehensive economic strength of 13 cities in Jiangsu Province; Li Feng Gao constructed an evaluation index system of urban comprehensive economic strength, using factor analysis and cluster analysis to evaluate and classify the
© The Editor(s) (if applicable) and The Author(s), under exclusive license to Springer Nature Singapore Pte Ltd. 2021 M. Atiquzzaman et al. (Eds.): BDCPS 2020, AISC 1303, pp. 1698–1702, 2021. https://doi.org/10.1007/978-981-33-4572-0_250
comprehensive economic strength of major cities in Hubei Province. A city's economic competitiveness is affected by various factors, and these factors are fuzzy and uncertain, so the city's comprehensive strength is also fuzzy. In this paper, the fuzzy clustering method is used to classify and analyze the comprehensive economic strength of each city in a province.
2 Weighted Fuzzy Clustering Algorithm
2.1 Determining Index Weight Based on Grey Relation Theory
To make the evaluation results accurately reflect the real situation and fully reflect the role of each factor in the evaluation system, this paper uses a combined subjective and objective method to determine the weights. The method borrows the idea of the grey close correlation degree to avoid the problem that index weights are easily affected by the value of the resolution coefficient; the calculation is simple and the weight values reflect the actual situation well. There are six steps to determining the weight values. (1) Construct a multi-objective decision matrix. There are m decision schemes and n decision attributes. The expert's evaluation value $x_{i,j}$ of the j-th attribute of the i-th objective forms the initial judgment matrix of the attributes, $V = (x_{i,j})_{m \times n}$. (2) Normalize the decision matrix to eliminate differences in dimension and scale among the attribute indexes. To avoid a complex grey correlation algorithm, this paper adopts the range method to remove dimensions. The judgment matrix after normalization is $V' = (x'_{i,j})_{m \times n}$. The standardized transformations of benefit indexes and cost indexes are:

\[
x'_{i,j} =
\begin{cases}
\dfrac{x_{i,j} - \min_i x_{i,j}}{\max_i x_{i,j} - \min_i x_{i,j}}, & \text{benefit index} \\
\dfrac{\max_i x_{i,j} - x_{i,j}}{\max_i x_{i,j} - \min_i x_{i,j}}, & \text{cost index}
\end{cases}
\qquad (1)
\]
(3) Determine the reference sequence. The index value vector corresponding to the factor that has the greatest impact on the evaluation scheme is selected as the "public" reference weight vector, forming the reference data column $X_0$; the vectors of the other indexes are recorded as $X_i$, where

\[
X_0 = (x'_0(1), x'_0(2), \ldots, x'_0(m))^T, \quad X_i = (x'_i(1), x'_i(2), \ldots, x'_i(m))^T, \quad i = 1, 2, \ldots, n
\]
(4) Find the distance between each index sequence and the reference data sequence:

\[
d_{0i} = \sqrt{\sum_{k=1}^{m} \left( x'_0(k) - x'_i(k) \right)^2}
\qquad (2)
\]
(5) Determine the weight of each indicator:

\[
\omega_i = \frac{1}{1 + d_{0i}}, \quad i = 1, 2, \ldots, n
\qquad (3)
\]
(6) Normalize the index weights:

\[
\omega_i^{*} = \frac{\omega_i}{\sum_{i=1}^{n} \omega_i}
\qquad (4)
\]
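The six-step weighting procedure above can be sketched in plain Python as follows. The function name and the choice of the all-ones ideal column as the reference sequence are assumptions for illustration, since the paper leaves the reference factor unspecified.

```python
import math

# Minimal sketch of the grey-relation weighting of Sect. 2.1.
# `matrix` holds expert scores x[i][j] (m schemes x n attributes);
# `is_benefit` flags benefit vs. cost attributes.

def grey_relation_weights(matrix, is_benefit):
    m, n = len(matrix), len(matrix[0])
    # (2) range-normalize each attribute column, formula (1)
    norm = [[0.0] * n for _ in range(m)]
    for j in range(n):
        col = [matrix[i][j] for i in range(m)]
        lo, hi = min(col), max(col)
        span = (hi - lo) or 1.0
        for i in range(m):
            norm[i][j] = (col[i] - lo) / span if is_benefit[j] else (hi - col[i]) / span
    # (3)-(4) distance of each attribute column to the reference column, formula (2);
    # the all-ones ideal is an assumed reference sequence
    ref = [1.0] * m
    d = [math.sqrt(sum((ref[i] - norm[i][j]) ** 2 for i in range(m))) for j in range(n)]
    # (5) raw weights, formula (3), and (6) normalization, formula (4)
    w = [1.0 / (1.0 + dj) for dj in d]
    s = sum(w)
    return [wj / s for wj in w]

weights = grey_relation_weights(
    [[3, 80], [5, 60], [4, 70]],        # 3 schemes x 2 attributes
    is_benefit=[True, False])           # x2 is a cost attribute
print([round(w, 3) for w in weights])   # -> [0.5, 0.5]
```

The two columns happen to normalize identically in this toy example, so they receive equal weight; in general, columns farther from the reference sequence receive smaller weights.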
2.2 Establish Weighted Fuzzy Clustering Model
Fuzzy clustering analysis is a method of quantitative classification of things from the perspective of fuzzy sets. It establishes a fuzzy decision matrix according to the attributes of the research objects and then determines the clustering relationship between things according to the degree of membership. In ordinary fuzzy clustering, the contribution of different attributes to the classification is, by default, the same, which often leads to inaccurate clustering results. To overcome this disadvantage, this paper uses a weighted fuzzy clustering method; the specific process is as follows. (1) Calculate the weighted decision matrix $\tilde{Z}$. (2) Establish the fuzzy similarity matrix. Following the traditional clustering method, similarity coefficients are determined; the matrix composed of the similarity coefficients is called the fuzzy similarity coefficient matrix $(r_{i,j})_{m \times m}$. There are many methods to calculate the similarity coefficient, such as the index similarity coefficient method, the angle cosine method, the quantity product method, and the maximum-minimum value method. For convenience of calculation, this paper uses the maximum-minimum value method:

\[
r_{i,j} = \frac{\sum_{k=1}^{n} \min\{\tilde{f}_{ik}, \tilde{f}_{jk}\}}{\sum_{k=1}^{n} \max\{\tilde{f}_{ik}, \tilde{f}_{jk}\}}, \quad i, j = 1, 2, \ldots, m
\qquad (5)
\]
(3) Establish the fuzzy equivalent matrix. The fuzzy similarity matrix obtained from formula (5) satisfies only reflexivity and symmetry, not transitivity, so it is not necessarily a fuzzy equivalence relation; a fuzzy equivalence relation must therefore be constructed from the fuzzy similarity matrix. The best algorithm for finding the transitive closure of the fuzzy similarity matrix is the square method. (4) Dynamic fuzzy clustering. Based on the fuzzy equivalent matrix, different values of λ ∈ [0, 1] are taken in turn; calculating the cut matrix for each λ yields different classification relations, giving the dynamic clustering results.
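Steps (2) to (4) above can be sketched compactly as follows. The function names are illustrative, and the input rows stand in for the weighted, dimensionless index vectors that feed formula (5).

```python
# Sketch of Sect. 2.2: build the max-min similarity matrix of formula (5),
# close it transitively by the square method (repeated max-min composition
# until stable), then read off classes at a given lambda cut.

def similarity(rows):
    m = len(rows)
    return [[sum(min(a, b) for a, b in zip(rows[i], rows[j])) /
             sum(max(a, b) for a, b in zip(rows[i], rows[j]))
             for j in range(m)] for i in range(m)]

def maxmin_square(r):
    m = len(r)
    return [[max(min(r[i][k], r[k][j]) for k in range(m)) for j in range(m)]
            for i in range(m)]

def transitive_closure(r):
    while True:
        r2 = maxmin_square(r)
        if r2 == r:           # fixpoint reached: r is now transitive
            return r
        r = r2

def lambda_cut(t, lam):
    m, seen, classes = len(t), set(), []
    for i in range(m):
        if i not in seen:
            cls = [j for j in range(m) if t[i][j] >= lam]
            seen.update(cls)
            classes.append(cls)
    return classes

rows = [[0.9, 0.8], [0.85, 0.82], [0.1, 0.2]]  # weighted, dimensionless scores
t = transitive_closure(similarity(rows))
print(lambda_cut(t, 0.8))  # -> [[0, 1], [2]]
```

Sweeping λ from low to high reproduces the dynamic clustering idea of step (4): at small λ almost everything merges into one class, and as λ rises the classes split until each object stands alone.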
3 Example Analysis
The evaluation of urban comprehensive economic strength is a hot topic in the economic field; many scholars have studied it and produced valuable research results. Based on previous research results and the connotation of a city's comprehensive economic strength, this paper selects eight indexes and constructs an evaluation index system of urban comprehensive economic strength, following the principles of scientificity, rationality, simplicity, and operability: GDP (x1), the proportion of the tertiary industry in GDP (x2), per capita GDP (x3), GDP growth rate (x4), industrial growth rate (x5), total retail sales of social consumer goods (x6), fixed asset investment (x7), and total imports and exports (x8). Taking 18 major cities of a province as samples, according to the province's 2018 statistical yearbook and the statistical yearbooks of its regions, the data of each city after dimensionless processing and the index weights are obtained (Table 1).

Table 1. Dynamic clustering results
λ      Dynamic clustering results
0.54   {4}{9}{18}
0.57   {1}{4}{9}{18}
0.61   {1}{4}{6}{9}{18}
0.66   {1}{4}{6}{9}{16}{18}
0.68   {2, 3, 13, 15, 16, 17}, {5, 7, 10}, {11, 12}
0.75   {2, 15, 16, 17}, {5, 10}
0.81   {2, 15, 17}
0.85   {15, 17}
In terms of strategic elements, the central region has relatively strong comprehensive economic strength, rich resources, and a superior location. According to the dynamic clustering results, however, most cities of the province fall into one group, which shows that the comprehensive economic levels of these areas differ considerably, while A and B are always in one group, which shows that the comprehensive economic strength of the two places differs little. Relying on convenient
traffic conditions, City C has developed rapidly in recent years, becoming the core city of the Central Plains and driving the development of surrounding cities. The cities in the north of the province are better off than those in the south. A is adjacent to B; both are underdeveloped cities with relatively backward industries and a weak economic foundation, so their comprehensive economic strength is close. In the clustering results, D, E, and F are also clustered together, which accords with the actual situation: these three cities are the economic backbone of the province. D and E are close to provincial capital C, with convenient transportation; F, located in the north of Henan Province with convenient transportation, is an important industrial production base of the province. The comprehensive economic strength of the three cities is relatively close.
4 Conclusion
In this paper, a weighted fuzzy clustering algorithm is presented, and a method of calculating the weights based on grey theory is given, which avoids the problem that index weights are easily affected by the value of the resolution coefficient. The weighted fuzzy clustering algorithm is applied to evaluate the comprehensive economic strength of the cities of a province, providing a reference for the relevant government departments in formulating development strategies.
References
1. Lou, Y., Cheng, M., Gaijin, W., et al.: Image fuzzy clustering analysis based on FCM and genetic algorithm. Comput. Eng. Appl. 46(35), 173–176 (2010)
2. Gao, X., Gu, S., Bai, C., et al.: Customer clustering algorithm considering the structure of logistics distribution network and the constraints of distribution volume. Syst. Eng. Theory Practice 32(1), 173–181 (2012)
The Basic Education of Tourism Specialty Based on Computer Network ShuJing Xu Hulunbeier University, Hailar 021018, China [email protected]
Abstract. With the continuous development of the tourism industry, tourism education has received more and more attention. Most courses in tourism majors are practical; to improve the teaching level, modern educational technology must be fully used in the teaching process to improve students' practical ability and comprehensive quality. This paper analyzes and discusses strategies for applying modern educational technology in tourism education, aiming to improve the level of tourism education in colleges and universities.
Keywords: Tourism education · Modern education technology · Teaching strategy
1 Introduction
Since the beginning of the last century, computer technology has been widely used in people's lives. With its continuous extension, many new technologies have emerged, such as multimedia technology and new media technology, and the application of computer technology in various fields has become more and more extensive, greatly changing people's lives. Modern educational technology is an educational technology developed on the basis of computer technology that integrates elements such as pictures, videos, and text; it is the concrete application of various modern technologies in the field of education. Its role is very obvious: the most intuitive advantage is that it can give people a very good audio-visual experience. With the continuous popularization of modern educational technology, the innovation and reform of traditional education has also begun to pay attention to this field. In the era of information technology, we should strengthen the application of these new technologies and new media to make the teaching process more convenient, interesting, and vivid, expand the traditional teaching mode, improve students' learning of theoretical knowledge, strengthen practical education, and improve the professional level of tourism professionals [1].
© The Editor(s) (if applicable) and The Author(s), under exclusive license to Springer Nature Singapore Pte Ltd. 2021 M. Atiquzzaman et al. (Eds.): BDCPS 2020, AISC 1303, pp. 1703–1708, 2021. https://doi.org/10.1007/978-981-33-4572-0_251
2 Overview of Modern Educational Technology
Modern educational technology refers to a teaching method that uses modern educational theory and modern scientific and technological achievements to design teaching resources, organize the teaching process, and optimize the teaching effect. It includes computer technology, digital audio-visual technology, network technology, remote communication technology, artificial intelligence technology, and so on; through the improvement of the traditional teaching mode, modern education and quality education are realized. Modern educational technology integrates teaching resources such as voice, text, pictures, and video, which makes the sources of knowledge and information richer, the capacity of teaching resources larger [2, 3], and the teaching process more vivid, improving students' enthusiasm and interest in learning. Its application in teaching mainly involves four parts. First, the basic equipment of the classroom, including the infrastructure used in teaching, such as a digital projector, computer, and sound amplification facilities. Second, the basic equipment of the lecture room, including the host and related software, camera, and control software. Third, the basic equipment of the central control room, including the network multimedia classroom, control software and hardware, and network teaching resources. Fourth, digital networks, which mainly disseminate teaching resources, connect with other communication networks, facilitate the timely download of network learning resources during teaching, and enrich the teaching materials.
3 Current Teaching Situation of Tourism
The tourism major is an important major for training tourism management and service talents. In recent years, it has taken a higher position in college education, and more and more students are majoring in tourism. Strengthening practical education is a basic goal of tourism education: according to the talent needs of the tourism industry, specific teaching objectives and programs should be designed, guided by professional needs, so as to constantly improve the teaching level. The current situation of tourism teaching is not optimistic; some problems remain, as shown in Fig. 1. For example, the teaching concept is outdated, new technologies are insufficiently applied, and the teaching materials of basic tourism courses are relatively monotonous, causing many students to lose interest in the basic courses of the tourism major. Practical education is also insufficient, although it is an effective way to improve teaching efficiency; in the basic curriculum of the tourism major, practical education should be emphasized and modern educational technologies applied, so as to continuously improve students' enthusiasm for learning. At present, in the tourism education of many colleges and universities, new technologies are applied insufficiently, teachers' comprehensive ability needs to be improved, and the
The Basic Education of Tourism Specialty Based on Computer Network
students’ cognition of tourism is biased, which ultimately prevents students’ learning level from improving [4].
Fig. 1. Current problems in Tourism Teaching
4 The Application of Modern Educational Technology in College Tourism Teaching
(1) Innovate the teaching concept. In tourism education in colleges and universities, outdated teaching concepts are an important factor affecting teaching quality. To improve the teaching quality of tourism in colleges and universities and strengthen the integration of modern educational technology and tourism education, we must create a strong atmosphere that promotes the application of modern educational technology. For example, schools can publicize the advantages of multimedia teaching through information platforms, and the teaching and research group should regularly organize teaching staff to discuss problems related to multimedia teaching, so that teachers strengthen their understanding of multimedia teaching and practice it in the teaching process. (2) Strengthen the application of modern educational technology and innovate the teaching mode. (3) Use modern educational technology to activate the classroom atmosphere. Modern educational technology can materialize complex and abstract knowledge, thus enriching students' thinking and cognition and stimulating their desire for learning. In the teaching process, we should strengthen the application of multimedia resources and, with the help of multimedia images, voice, video, and other functions, display the subject knowledge and make
S. Xu
use of the combination of modern educational technology and teaching content, so that students can better understand the knowledge and conduct an in-depth analysis of the various knowledge points. (1) Use multimedia courseware for teaching. In applying modern educational technology, first of all we need to innovate the teaching resources. The application of multimedia technology has made teaching material resources more and more abundant. At present, the most common teaching materials are multimedia courseware, which include pictures, text, music, video, and so on. Compared with traditional text resources, multimedia courseware can activate the classroom atmosphere. For example, in tourism professional education, data such as the development trend and current situation of the tourism industry can be displayed in the form of charts, so that students gain a more intuitive and clear understanding of the development trend of China's tourism industry in recent years and change their understanding of the traditional tourism industry. (2) Use teaching situations to strengthen experiential teaching. In the modern educational concept, strengthening practical education is the development direction of education and teaching. Setting up teaching scenarios can help students deepen their understanding of knowledge, and teaching scenarios bring students a better experience. Teachers should create multimedia teaching scenarios according to different teaching links and students' psychological characteristics, to stimulate their interest in learning. For example, students can be taught by playing short films and then guided to discuss and evaluate them.
In the discussion, students can combine the actual cases in the video to analyze specific problems. They can also draw inferences from one instance to solve problems in the tourism industry. For example, in the tour guide service course, to improve students' service ability and let them learn the skills of tour guide explanation, we can create a realistic atmosphere in the multimedia classroom through modern educational technology: using the audio-visual projection function, create a realistic scenic spot scene, and then let some students act as tour guides and others as tourists to simulate tourism scenes. Through on-site explanation, students can find and solve their own problems in tourism service, laying a solid foundation for future work. (3) Strengthen network teaching. In modern education, the application of modern educational technology is also reflected in the change of teaching forms. Traditional teaching is generally completed in the classroom, which is limited in time and space. In modern education, traditional courses can be moved onto the network, giving students more space to study freely. Students can learn relevant courses on the network according to their own needs, complete the class hours, and obtain more information from the network, which is helpful for their development and learning. The network education mode is a way of disseminating knowledge. Through the
network, students can be provided with high-quality personalized courses and learning services. Schools should design network courses in combination with the teaching objectives of the tourism major, so that students can obtain more interesting and useful knowledge online, expanding the traditional education model. (4) Enrich teaching resources. In education and teaching, the enrichment and expansion of teaching materials is the key to improving quality. At present, the requirements of tourism education in colleges and universities are higher and higher; the key is to cultivate more management and practical talents and contribute to the construction of the tourism economy. However, the theory-oriented teaching mode is still adopted in tourism teaching in colleges and universities, which directly affects students' enthusiasm for learning, leads to students' aversion in the learning process, and hinders the reform of the basic courses of the tourism major. With the continuous application of modern educational technology, we should enrich and improve teaching resources and share more interesting learning materials with students through the network, so that students have more opportunities for independent learning. For example, teachers can design electronic courseware based on the course content, edit after-class exercises into electronic documents, and distribute them to students, so that students can learn according to their actual situation. In the construction of teaching materials, we should also use the network to expand students' vision, so that students strengthen their understanding of tourism; teachers can also draw on the experience of foreign tourism professional education to expand the teaching materials of the tourism major.
5 Conclusion
To sum up, with the continuous application of computer and network technology, people's ways of working, living, and learning are constantly changing, and the rapid development of computer information technology has gradually penetrated the field of education, constantly updating educational concepts and modes. Against this background, it is necessary to change the cognition of college teachers and students about modern educational technology, change the traditional teaching concept of the tourism major, constantly practice the concept of quality education in the teaching process, improve the teaching mode, and combine modern educational technology with various media platforms to make classroom education more vivid and interesting. At the same time, we should strengthen the application of computer and Internet technology to create more network courses for students, so that students can understand the development trend of the modern tourism industry, and enrich students' learning materials through computer information technology, so as to realize network education and modern education.
References
1. Ling, D.: On the application of modern information technology in tourism teaching. Science and Technology Information (27) (2012)
2. Wu, X.: Research on experimental tourism teaching mode based on modern education technology. Scientific Chinese (02) (2016)
3. Heng, Z.: Application of multimedia in tourism teaching. Liaoning Normal University (2013)
4. Lihong, K.: Application of multimedia teaching in tourism teaching. Modern Education Science: Middle School Teachers (09) (2012)
The Evaluation of Tourism Resources Informatization Development Level Based on BP Neural Network
Ming Xiang(&)
Chengdu Polytechnic, Chengdu 610041, China
[email protected]
Abstract. The evaluation of the development level of tourism resources informatization is a comprehensive evaluation and test of the performance of an MIS (management information system). A good evaluation can promote the realization of the system design objectives and improve investment efficiency. In this paper, an evaluation method using a BP neural network is proposed, and AHP and fuzzy evaluation methods are used to evaluate the training samples of the neural network. This method makes full use of fuzzy mathematics theory and the neural network method and provides a feasible way to evaluate a management information system. Keywords: Tourism resources informatization · Development evaluation · BP neural network
1 Introduction
Tourism is an information-intensive industry [1]; in this sense, informatization is the core of tourism, and the collection, collation, processing, and transmission of information are a top priority for tourism enterprises. Relevant research shows that strengthening tourism informatization is of great significance to the healthy and rapid development of China's tourism industry. Hainan is building an international tourism island, but the level of informatization of tourism resources in Hainan Province is low and the information infrastructure is poor. The phenomena of information islands and information separation of tourism resources are serious, and the main managers of tourism enterprises have an insufficient understanding of tourism resources informatization. In the current big data environment, how to collect and manage tourism resource information more efficiently and accurately is a big problem for the relevant government departments and tourism enterprises. Based on an investigation of Hainan tourism resources informatization, this paper analyzes its shortcomings, puts forward corresponding improvement measures and suggestions, and strives to provide guidance and help for the construction of tourism resources informatization.
© The Editor(s) (if applicable) and The Author(s), under exclusive license to Springer Nature Singapore Pte Ltd. 2021 M. Atiquzzaman et al. (Eds.): BDCPS 2020, AISC 1303, pp. 1709–1713, 2021. https://doi.org/10.1007/978-981-33-4572-0_252
2 Tourism and Information Technology in Hainan Province
With the development of the social economy, the tourism industry has entered the era of popularization; consumers are constantly looking for personalized consumption platforms, and tourism has become a part of people's daily life. In recent years, Hainan has become a resort for tourists thanks to its excellent geographical advantages and ecological environment, and the tourism products of Hainan Province are developing towards innovation and refinement. However, tourists' demands on tourism resources, such as changing tourist routes and obtaining personalized tourist attractions, cannot yet be properly met. With the development of modern information technology, the informatization of tourism resources has become an important measure to solve this problem. In 2014, the report "Hainan tourism enterprises strengthen the marketing of network new media" showed that awareness of informatization construction varies across the regions of Hainan Province. The survey results show that, due to the historical layout of the tourism industry chain in the province, there are inevitable differences in the starting time of tourism between regions, resulting in gaps between regions in both the development stage of the tourism industry and the level of tourism informatization; awareness of informatization also differs between regions. However, the report also shows that Hainan Province has a relatively high overall awareness of tourism informatization, nearly 80%, and the dependence coefficient of tourism informatization in all regions of the province is more than 70%.
This degree of cognition has laid a solid foundation for the construction of tourism informatization in the whole province. However, because the tourism resources information published by the Hainan Tourism Bureau, libraries, tourism industry research institutions, and enterprise websites is incomplete, the phenomenon of information islands exists. This reflects that Hainan's information infrastructure needs further improvement, and the awareness of tourism enterprise managers, especially leaders at all levels, of tourism resource informatization also needs to be strengthened; this lack of awareness has biased the direction and strength of the development of tourism resources informatization. At the same time, in this era of information explosion, how to effectively and accurately collect and manage tourism resource information, as the core of tourism information construction, has become a central concern of the relevant departments and tourism enterprises [2].
3 Establishment of Evaluation Index System
To evaluate an MIS, it is necessary to establish an evaluation index system, that is, to determine from which aspects to evaluate the advantages and disadvantages of the system. To establish the evaluation indexes of an MIS, we should consider technology application, customer demand, economic benefit, and environment. After extensive investigation and research, and analysis of other scholars' research results on MIS evaluation index systems, the author puts forward the
evaluation of the MIS system from four aspects: system technical level, system performance, external influence, and management evaluation.
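As one hedged illustration of how an MIS could be scored on these four aspects before the BP network is introduced, the companion AHP/fuzzy step described later might look like the following sketch; all weights and membership degrees here are hypothetical, not values from the paper:

```python
import numpy as np

# Hypothetical AHP weights for the four upper-level indexes:
# system technical level, system performance, external influence,
# management evaluation
W = np.array([0.35, 0.30, 0.20, 0.15])

# Hypothetical membership matrix R: each row gives one index's membership
# degrees in the evaluation grades (high, medium, low)
R = np.array([
    [0.6, 0.3, 0.1],
    [0.5, 0.4, 0.1],
    [0.3, 0.5, 0.2],
    [0.4, 0.4, 0.2],
])

# Weighted-average fuzzy operator: B = W . R; the grade with the largest
# membership becomes the sample's label
B = W @ R
grade = ["high", "medium", "low"][int(np.argmax(B))]
```

Labels produced this way can then serve as the expected outputs of the neural network's training samples.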
4 Design of the BP Network Method for MIS Evaluation
After constructing the evaluation index system of the MIS, it is necessary to select an appropriate evaluation method. Existing evaluation methods, such as AHP and the fuzzy comprehensive evaluation method, each have advantages and disadvantages. AHP is strong in handling multi-level index systems. The fuzzy comprehensive evaluation method fully considers the fuzziness of many evaluation factors in MIS evaluation, whose index attributes are difficult to quantify and therefore need to be fuzzified. The BP network evaluation method introduced in this paper also draws on the ideas of these methods for determining index weights, measuring index attributes, and initially evaluating the sample data. What matters in this paper, however, is to use the nonlinear function approximation capability of the BP neural network to unify the evaluation factors and evaluation results of MIS evaluation; the determination of index weights and the quantification of index attributes are not introduced in detail [3].
4.1 Brief Introduction of the BP Network
Neural network technology is a technology that has risen in recent years. It is composed of many parallel computing units with simple functions, similar to the units of a biological neural system. Although the structure of a single neuron is extremely simple and its function limited, a network system composed of a large number of neurons can achieve very rich behavior. The emergence of the BP network is attributed to the BP algorithm, the most famous training method for multilayer neural networks:
1) To ensure that the network will not saturate or behave abnormally, the initial weights are usually set to small random numbers.
2) Select appropriate training samples, input the sample data into the network, and calculate the output value of the network.
3) Calculate the deviation between the output value and the expected value of the sample; then, from the output layer back to the input layer, adjust the weights in the direction that reduces the deviation.
4) Train on each group of data in the training sample set until the whole training deviation is acceptable.
The trained neural network can accurately represent the relationship between input and output. When a group of inputs is known, the neural network can be used to calculate its output value.
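The four training steps above can be sketched as follows; this is a minimal single-hidden-layer illustration, and the XOR toy data, layer sizes, and learning rate are assumptions for demonstration, not values from the paper:

```python
import numpy as np

rng = np.random.default_rng(0)

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def train_bp(X, Y, hidden=3, lr=0.5, epochs=5000):
    n_in, n_out = X.shape[1], Y.shape[1]
    # Step 1: small random initial weights so the network does not saturate
    W1 = rng.normal(0.0, 0.1, (n_in, hidden))
    W2 = rng.normal(0.0, 0.1, (hidden, n_out))
    for _ in range(epochs):
        # Step 2: feed the samples forward and compute the network output
        H = sigmoid(X @ W1)
        O = sigmoid(H @ W2)
        # Step 3: output deviation, propagated back from output to input layer
        dO = (O - Y) * O * (1 - O)
        dH = (dO @ W2.T) * H * (1 - H)
        W2 -= lr * H.T @ dO
        W1 -= lr * X.T @ dH
        # Step 4 corresponds to looping until the total deviation is acceptable
    return W1, W2

# Toy usage: a 2-input, 1-output mapping (XOR)
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
Y = np.array([[0], [1], [1], [0]], dtype=float)
W1, W2 = train_bp(X, Y)
pred = sigmoid(sigmoid(X @ W1) @ W2)
```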
4.2 BP Network Structure Design
In this paper, the input factors of the BP neural network used for MIS evaluation are the four upper-level indexes of the evaluation index system: system technical level, system performance, external influence, and management evaluation. The output factors are the results of the system evaluation. The evaluation results are divided into three levels, high, medium, and low, represented by the vectors (1,0,0), (0,1,0), and (0,0,1). Therefore, the number of input nodes is n = 4 and the number of output nodes is m = 3; the number of hidden nodes can be taken as l = 3 according to experience. The network structure is shown in Fig. 1.
Fig. 1. BP network structure diagram
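A minimal sketch of this 4-3-3 structure in code follows; the weights here are random placeholders, whereas in the paper they would come from training on the AHP/fuzzy-labeled samples, and the input scores are hypothetical:

```python
import numpy as np

rng = np.random.default_rng(1)
W1 = rng.normal(0.0, 0.1, (4, 3))  # input layer (n = 4) -> hidden layer (l = 3)
W2 = rng.normal(0.0, 0.1, (3, 3))  # hidden layer -> output layer (m = 3)

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def evaluate(indexes):
    """indexes: four scores (system technical level, system performance,
    external influence, management evaluation), each scaled to [0, 1]."""
    out = sigmoid(sigmoid(np.asarray(indexes, dtype=float) @ W1) @ W2)
    # The targets (1,0,0)/(0,1,0)/(0,0,1) mean the largest output component
    # selects the level.
    return ["high", "medium", "low"][int(np.argmax(out))]

level = evaluate([0.9, 0.8, 0.7, 0.85])
```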
4.3 BP Neural Network Rules
The core of the BP network is its learning rule, namely the BP algorithm. The BP algorithm is a kind of supervised learning. Its core idea is to propagate the output error layer by layer back from the output layer, through the hidden layer, to the input layer. There are two ways to train the network with the BP algorithm: one is to modify the weights each time a sample is input; the other is batch processing, that is, to input all the samples of a training cycle in turn and compute the total average error before updating. As shown in Fig. 2.
Fig. 2. BP algorithm rule diagram
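The two update schemes can be contrasted on a toy linear model; all data, the true weights, and the learning rate here are illustrative assumptions, and a real BP network would update its weight matrices in the same two ways:

```python
import numpy as np

rng = np.random.default_rng(2)
X = rng.normal(size=(8, 3))
y = X @ np.array([1.0, -2.0, 0.5])  # hypothetical true weights

def online_epoch(w, lr=0.05):
    # Scheme 1: modify the weights after every single sample
    for xi, yi in zip(X, y):
        w = w - lr * (xi @ w - yi) * xi
    return w

def batch_epoch(w, lr=0.05):
    # Scheme 2: average the error over the whole training cycle,
    # then update once
    grad = ((X @ w - y) @ X) / len(X)
    return w - lr * grad

w_on = np.zeros(3)
w_ba = np.zeros(3)
for _ in range(200):
    w_on = online_epoch(w_on)
    w_ba = batch_epoch(w_ba)
```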
5 Conclusions
According to the survey data, 100% of the city and county governments in Hainan Province have established official tourism websites. However, there is no special system platform for storing and managing tourism resources information in the cities and counties; all of it is saved in the form of document storage, and some is stored in the databases of the official tourism websites. To understand the development level of tourism informatization in Hainan Province, the Hainan Provincial Tourism Development Committee invested substantial manpower and material resources in 2014 and, with the cooperation of the cities and counties, carried out a province-wide survey of tourism informatization and set up a tourism resource collection platform on the Sunshine Hainan network to manage tourism resource information. On the website, tourists can search Hainan's tourism resources by region and also by type of tourism resource. Acknowledgments. A Study on the Development Model of Ecotourism in the Southeast Margin of the Qinghai-Tibet Plateau, "Research Center for Economic, Social and Cultural Development of Qinghai-Tibet Plateau" (QZZ1905).
References
1. Bao, J., Chu, Y., Peng, H.: Tourism Geography. Higher Education Press, Beijing (1993)
2. Yang, Z.: Development of Tourism Resources. Sichuan People's Publishing House, Chengdu (1996)
3. Yan, H., Wang, Y.: Research on economic benefit evaluation of enterprise informatization based on BP neural network. Sci. Technol. Entrepreneurship Monthly 18(6), 87–88 (2005)
The Application of Decision Tree Algorithm in Data Resources of Environmental Art Design Specialty in Colleges and Universities
Jing Chen(&)
College of Art and Design, Shang Qiu Normal University, Shangqiu 476000, China
[email protected]
Abstract. With the rapid development of society and of high technology, the modern pace drives people to follow the times. As a new discipline integrating art, landscape, design, psychology, architecture, ergonomics, and other disciplines, the environmental design major has to improve its teaching level and formulate a teaching mode and educational objectives that meet social needs. This paper starts with the cognition of the environmental art design specialty under the project teaching method. Data mining provides a new, intelligent way for people to understand data. Based on a brief introduction to the concept, mining process, and common methods of data mining technology, this paper discusses the application of the decision tree algorithm to the data resources of the environmental art design specialty in colleges and universities, and its positive significance in the practice teaching of the environmental design specialty. Furthermore, the specific practical steps for applying the project teaching method in the environmental art design specialty are proposed. Keywords: Decision tree algorithm · Practice teaching · Application research
1 Introduction
Since the 1950s, with the great development and prosperity of society, talents of various specialties have played an increasingly important role in the development of the social economy, and colleges and universities have become the places that cultivate talents for society. To meet society's needs, colleges and universities all over the world have actively adjusted their teaching objectives and teaching strategies; the project teaching method has been approved and applied by the majority of schools. The decision tree algorithm changes the traditional teaching model in which teachers transfer existing knowledge and skills to students. Instead, the learning process is decomposed into detailed projects. Under the guidance of teachers, all students, or groups of them, complete projects independently: students have to collect and process information by themselves, design the project plan, and implement it; the teacher then guides the students to complete the project, and finally evaluates the results of the project and prepares for the next one. The decision tree algorithm changes the integrity of the traditional teaching system. Students complete the project independently under the
© The Editor(s) (if applicable) and The Author(s), under exclusive license to Springer Nature Singapore Pte Ltd. 2021
M. Atiquzzaman et al. (Eds.): BDCPS 2020, AISC 1303, pp. 1714–1718, 2021. https://doi.org/10.1007/978-981-33-4572-0_253
guidance of teachers. It focuses on training students' hands-on ability: it takes students as the main body, focuses on improving their practical and innovation abilities, and finally realizes the combination of learning results and practice, improving students' social experience and diversifying learning achievements. Cognition of the environmental art design major: the environmental art design major is a new type of major in China. It is a comprehensive major combining art, landscape, design, psychology, architecture, and ergonomics. Because of its short development time and wide range of fields, its teaching objectives and teaching mode are still being explored. As a comprehensive discipline, environmental art and design has its own characteristics in addition to those of its constituent disciplines. Foresight: according to the characteristics of the environment to be planned and designed, appropriate materials are selected, relevant schemes are designed, and the structure of the design scheme is predicted. Systematization: it is a comprehensive interdisciplinary subject in which various disciplines are integrated and infiltrate each other, which requires designers to know the design field. Creativity: as an art and design discipline, innovation is the fundamental driving force for its existence and development [1].
2 The Positive Significance of the Decision Tree Algorithm in Practice Teaching of the Environmental Art Design Specialty
The decision tree algorithm is conducive to improving teaching effect and efficiency. In the traditional teaching model, teachers deliver their knowledge to students through class teaching. This model is suitable for cultural and theoretical knowledge. However, environmental art design is a highly practical discipline that requires the cultivation of talents with practical ability, so traditional teaching cannot meet its requirements. The decision tree algorithm starts from practice: students independently complete the project design and implementation, and the teacher is only responsible for guidance, which gives students better professional practice ability, improves the teaching effect and efficiency, and also gives teachers more time for theoretical research. The decision tree algorithm is also conducive to training comprehensive talents suited to market demand. The teaching goal of the environmental art and design specialty is to cultivate such talents, and from this perspective the project teaching method has a positive effect on realizing that goal. In the project teaching method, students are required to select projects, collect information, design plans independently, implement the plans personally, and test their results. The biggest role of this teaching method is to improve students' practical ability, which is the market's greatest requirement for environmental art design talents.
3 Data Mining
Data mining is the process of extracting hidden, previously unknown, and potentially useful information and knowledge from a large amount of incomplete, noisy, fuzzy, and random practical application data. Near-synonyms of data mining include data fusion, data analysis, and decision support. This definition carries several meanings: the data source must be real, large, and noisy; what is discovered is knowledge that users are interested in; the discovered knowledge must be acceptable, understandable, and applicable; and the discovery is not required to be universally applicable, only to support a specific application.
4 Algorithm Application
4.1 Construction of the Decision Tree
The so-called "decision tree", as the name implies, has a tree structure. According to their level, nodes are divided into three types: root node, internal nodes, and leaf nodes. Each node corresponds to a sample set: the root node corresponds to the whole sample set, an internal node corresponds to a sample subset, and a leaf node corresponds to a class label. Both the root node and the internal nodes contain a test of a sample attribute. According to the test results, the sample set is divided into two or more subsets, each of which generates a branch identified by the attribute value of the test. A leaf node contains a class label that represents the class of the corresponding sample set. From the perspective of the leaf nodes, the decision tree divides the whole data space into several subspaces, and all samples belonging to one subspace are assigned the corresponding leaf node's category [2].
4.2 Decision Tree Analysis of Examination Results of the Environmental Art Design Major
Definition: suppose the training set $T$ contains $n$ samples belonging to $m$ classes, and the proportion of class $i$ in $T$ is $p_i$. Then the Gini index of $T$ is defined as:

$$\mathrm{Gini}(T) = 1 - \sum_{i=1}^{m} p_i^2$$

Suppose attribute $A$ divides set $T$ into $v$ subsets $\{T_1, T_2, \ldots, T_v\}$ and the number of samples in $T_i$ is $n_i$. Then the Gini index of this partition is:

$$\mathrm{Gini}(A) = \sum_{i=1}^{v} \frac{n_i}{n}\,\mathrm{Gini}(T_i)$$
The feature selection strategy of the Gini index is to select the attribute with the smallest split Gini index; it is suitable for training sets with fewer categories and tends to generate subsets of similar size [3, 4]. To establish a decision tree, we can consider analyzing the influence of attributes such as course type, whether the course is retaken, whether the exam is open-book, and the difficulty of the paper on the results. The 2008 course results of XX College are taken as the test data. Part of the structure of the extracted examination table is shown in Table 1.

Table 1. Examination results of students in some courses

Course code | Retake or not | Test paper difficulty | Compulsory or not | Examination result
04014000    | Yes           | High                  | No                | 90
05023001    | No            | Medium                | Yes               | 86
04024001    | Yes           | Low                   | No                | 76
02011009    | No            | High                  | No                | 86
01021006    | Yes           | Medium                | Yes               | 95
07032004    | Yes           | Low                   | Yes               | 64
Examination of the research database found that the data in Table 1 is too detailed to be classified directly, so it should be cleaned first. The optimal split can be selected according to the distribution of data categories at each node, that is, the decision tree can be constructed by the Gini index:

$$\mathrm{Gini}(T_1) = 1 - \left(\frac{210}{1740}\right)^2 - \left(\frac{950}{1740}\right)^2 - \left(\frac{580}{1740}\right)^2 = 0.576$$

$$\mathrm{Gini}(T_2) = 1 - \left(\frac{90}{1390}\right)^2 - \left(\frac{1000}{1390}\right)^2 - \left(\frac{300}{1390}\right)^2 = 0.431$$
In this way, the Gini index of the "course type" attribute is the smallest, indicating that this attribute plays the largest role in splitting the data into subclasses. Therefore, a "course type" node is established, and the sample set is divided into four parts.
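The Gini calculations above can be reproduced with a short sketch; the class counts are the ones used in the worked example, while the helper function names are our own:

```python
def gini(counts):
    """Gini(T) = 1 - sum(p_i^2) over the class counts of a sample set."""
    n = sum(counts)
    return 1.0 - sum((c / n) ** 2 for c in counts)

def gini_split(subsets):
    """Gini(A) = sum(n_i / n * Gini(T_i)) over the subsets a split produces."""
    n = sum(sum(s) for s in subsets)
    return sum(sum(s) / n * gini(s) for s in subsets)

# The two subsets from the worked example above
g1 = gini([210, 950, 580])   # ~0.576, matching Gini(T1) in the text
g2 = gini([90, 1000, 300])   # ~0.432, matching Gini(T2) in the text
g_split = gini_split([[210, 950, 580], [90, 1000, 300]])
```

Computing `gini_split` for every candidate attribute and keeping the smallest value implements the feature selection strategy described above.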
5 Environmental Art Design Specialty
(1) Design the project module. The first stage of the decision tree algorithm approach is to design the project module. First, the students are divided into groups, and corresponding project modules are designed and arranged according to the students' mastery of professional knowledge. A general project module should include the application of knowledge such as geography, climate, architectural structure, human history, and spatial scale and proportion. (2) Design the teacher guidance module. The second stage is to design the professional skills module. Although the decision tree algorithm is
student-centered, to help students better complete the project the teacher should provide guidance before it starts, review the students' design scheme to confirm its feasibility, and explain the relevant industry laws and regulations to help students revise inappropriate design details. (3) Design the comprehensive skills module. It is therefore most important for teachers to teach comprehensive knowledge and practice in the field of art design. (4) Design the teaching method of decision-tree-based project teaching. 1. Practice inspection with students as the main body, using an experimental teaching method. Under the guidance of the teacher, students choose an appropriate project. According to the given project, the planning group makes a reasonable design scheme and implements it independently. Finally, the teacher accepts the project results, gives a corresponding evaluation, corrects inappropriate behaviors and methods adopted by students during implementation, and points out the problems needing attention in the implementation plan, preparing for the next practice.
6 Epilogue The decision tree algorithm applied to the environmental art design major in colleges and universities aims to cultivate "employment-oriented talents" that suit market demand. The introduction of this teaching method conforms to the law of talent cultivation of the major and truly achieves the teaching mode of "integration of teaching, learning, and doing". It not only improves students' understanding of the industry but also improves their enthusiasm and practical ability.
References 1. Dong, C., et al.: Data mining and its application in university teaching system. J. Jinan Univ. (Nat. Sci. Ed.) 18(1), 65–68 (2004) 2. Dong, H.: Application of data mining in credit system teaching management. Educ. Informatization 4, 69–70 (2006) 3. Li, W.: Construction of multi-dimensional practical teaching system for art design major. Decoration 12 (2006) 4. Zhou, L.: Research on the "work-study combination" education mode of art major in higher vocational colleges. Educ. Occup. 12 (2010)
The Construction of an Intelligent Learning System Under the Background of Blockchain Technology – Taking “Data Structure” as an Example LinLin Gong(&) Computer Science Research Labs of Shandong Xiehe University, Jinan 250107, Shandong, China [email protected]
Abstract. With the development of Internet technology and information technology, digital intelligent learning systems have brought much convenience to information storage, transmission, and authentication. However, the existing intelligent learning information systems store data in a centralized database, which has weak resistance to network attacks; internal personnel of the system can also tamper with the data and erase the traces of the operation. Given these problems, this paper applies blockchain technology to the construction of an intelligent learning system and studies such construction under the background of blockchain technology. The paper compares and analyzes the concepts of centralized and decentralized management, points out the defects of the centralized information storage scheme of intelligent learning systems, and analyzes the shortcomings of existing blockchain intelligent learning information systems. Using appropriate blockchain and web technologies, a four-layer blockchain application architecture is designed, composed of a blockchain layer, contract layer, data interaction layer, and application layer. We designed and developed the blockchain credit record module and the certificate storage and forensics module, realized the input and query of relevant information on the blockchain, and ensured the security and reliable traceability of information. The system, built on the application of blockchain technology in intelligent learning, exploits the decentralization of blockchain technology and the permanent preservation of data traces to record teaching information and the relevant operation records; it is secure, reliable, and reliably traceable, and it explores the application of blockchain technology in the field of learning.
Keywords: Intelligent learning · Blockchain · Decentralization · Blockchain + learning · Smart contract
1 Introduction In May 2018, General Secretary Xi Jinping pointed out at the 19th Academician General Assembly of the Chinese Academy of Sciences and the 14th Academician General Assembly of the Chinese Academy of Engineering: "the new generation of information © The Editor(s) (if applicable) and The Author(s), under exclusive license to Springer Nature Singapore Pte Ltd. 2021 M. Atiquzzaman et al. (Eds.): BDCPS 2020, AISC 1303, pp. 1719–1724, 2021. https://doi.org/10.1007/978-981-33-4572-0_254
technology represented by artificial intelligence, quantum information, mobile communication, the Internet of Things, and blockchain has accelerated its breakthrough application [1]. The world is entering a period of economic development dominated by the information industry." With the popularity of Bitcoin, the blockchain technology behind it has gradually attracted people's attention. The 2015 report of the World Economic Forum predicts that by 2027, 10% of global GDP will be stored on blockchain or blockchain-related technologies, and that government agencies will generally apply blockchain technology by 2023. McKinsey's research report points out that blockchain technology is the core technology with the most potential to trigger the fifth wave of disruptive revolution, after the steam engine, electric power, information, and Internet technologies. Blockchain technology has also attracted the attention of governments. To promote the combination of blockchain technology and industry, national leaders and many government agencies have made public speeches or issued documents [2].
2 Blockchain Technology The data in a blockchain is stored in a data structure similar to a singly linked list, and each block is composed of a block header and a block body. Taking Bitcoin as an example, the size of the block header is fixed at 80 bytes, including the version number of the current block, the hash of the previous block, the Merkle root, a timestamp, the difficulty coefficient, and a random number (nonce). The block body records the bytes occupied by transactions, the transaction count, and the transaction data stored in the block. The total size of each block is limited to 1 MB. Each block retains the hash of the previous block; by referring to it, a chain relationship is formed between blocks, that is, the blockchain, as shown in Fig. 1.
Fig. 1. The data structure of blockchain
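The chained structure in Fig. 1 can be sketched as follows; this is a minimal illustration, not the real Bitcoin format (the Merkle root here is a stand-in hash over the transaction list, and the fields do not follow the fixed 80-byte header layout).

```python
import hashlib, json, time

def make_block(prev_hash, transactions, nonce=0):
    """A simplified block: a header that links to the previous block's hash,
    and a body holding the transaction records."""
    header = {
        "version": 1,
        "prev_hash": prev_hash,  # the chain link to the previous block
        "merkle_root": hashlib.sha256(
            json.dumps(transactions).encode()).hexdigest(),  # stand-in for a real Merkle tree
        "timestamp": int(time.time()),
        "nonce": nonce,
    }
    # The block's own hash is computed over its header fields
    header["hash"] = hashlib.sha256(
        json.dumps(header, sort_keys=True).encode()).hexdigest()
    return {"header": header, "body": transactions}

genesis = make_block("0" * 64, ["coinbase tx"])
block1 = make_block(genesis["header"]["hash"], ["tx1", "tx2"])
print(block1["header"]["prev_hash"] == genesis["header"]["hash"])  # True: blocks are chained
```

Because each header embeds the previous block's hash, tampering with any earlier block changes its hash and breaks every link after it, which is the tamper-evidence property the text relies on.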
The basic concepts of blockchain are as follows: (1) Block: when a transaction occurs on the blockchain, the system will try to package the transaction data and put it into the currently generated block. The
block contains two parts: the block header and the block body. The block header links to previous blocks and provides verification for the data integrity of the current block. The block body contains the records of transactions made while the current block was being generated. (2) Mining: blockchain is a decentralized distributed ledger technology, which inevitably involves the ownership of accounting rights. The Bitcoin system adopts the proof-of-work (PoW) mechanism: each node consumes computing power trying to find a qualified random number (nonce). This nonce must make the SHA-256 value of the block header have n leading zeros, where n depends on the difficulty coefficient. If a node successfully finds a result that meets the condition, it obtains the accounting right for the current block. This competition for the accounting right is called "mining" [3].
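The mining process described in (2) can be sketched as a brute-force nonce search. For simplicity this checks leading zero hex digits of a single SHA-256 over a made-up header, whereas Bitcoin uses double SHA-256 compared against a numeric target.

```python
import hashlib

def mine(header: bytes, difficulty: int) -> int:
    """Try successive nonces until the SHA-256 digest of header+nonce starts
    with `difficulty` zero hex digits (a simplified proof-of-work)."""
    nonce, target = 0, "0" * difficulty
    while True:
        digest = hashlib.sha256(header + str(nonce).encode()).hexdigest()
        if digest.startswith(target):
            return nonce  # this node wins the accounting right for the block
        nonce += 1

nonce = mine(b"example-header", 4)  # ~16**4 = 65536 attempts on average
print(hashlib.sha256(b"example-header" + str(nonce).encode()).hexdigest()[:8])
```

Raising the difficulty by one hex digit multiplies the expected work by 16, which is how the difficulty coefficient throttles how fast blocks can be found.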
3 Construction of an Intelligent Learning System for Blockchain The whole system is divided into the application layer, data interaction layer, smart contract layer, and blockchain layer. The blockchain layer replaces the traditional centralized database in this system: important information, such as text knowledge and the key points a teacher stresses in class, is recorded in the blockchain, where it cannot be tampered with and every write leaves a traceable record, avoiding the data security and trust problems caused by hackers' intrusion or by managers abusing their power for personal gain. The smart contract layer provides a completely transparent and trusted transaction channel without a third party. It defines the relevant data structures and algorithm rules, communicates with the blockchain layer through RPC, writes data into the blockchain via transactions, and implements appending and querying of blockchain-layer data. The smart contract layer uses Solidity as the development language and the MetaMask wallet tool to pay the gas cost when deploying and invoking contracts on the blockchain. The data interaction layer calls the API of the smart contract layer through web3.js to control data writing and reading, and provides function interfaces for the application layer. It is developed in Java and Node.js, using the Spring framework and Hibernate to quickly build the software skeleton; it communicates indirectly with the blockchain layer by calling the contract layer through web3.js, and exposes a REST API for upper-layer calls.
The application layer interacts with actual users through a friendly interface and provides specific services, such as registration and login, student registration records, certificate storage and evidence collection, and community autonomy modules; it communicates with the REST API of the data interaction layer over HTTP [4]. There is a mapping relationship between the fields of the database table structure and the attributes of the entity object model. When the application layer performs CRUD operations on the database, the object entity model establishes a mapping
relationship with the database table structure, mapping the attributes that need to be added, deleted, modified, and queried to the fields. According to the above object entity model, the database table structure can be constructed, including the user table, school table, specialty table, certificate table, administrator table, and discussion table. The user table mainly stores the basic information of system users, including number, creation time, modification time, ID card number, password, name, nationality, birthday, gender, and other information. The detailed design of the user table structure is shown in Table 1.

Table 1. User table

Field name     | Field description | Field type  | Constraint
id             | User number       | int         | Not Null, Primary key
id_card_number | ID card no.       | varchar(32) | Not Null
password       | Password          | varchar(16) | Not Null
name           | Full name         | varchar(8)  | Not Null
nation         | Nation            | varchar(8)  | Not Null
birth          | Birthday          | date        | Not Null
gender         | Gender            | tinyint     | Not Null
gmt_created    | Creation time     | DateTime    | Not Null
gmt_modified   | Modification time | DateTime    | Not Null
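The user table could be created along these lines; SQLite is used here only for a self-contained illustration (the paper's test environment uses MySQL 5.6), and the inserted row is made-up sample data.

```python
import sqlite3

conn = sqlite3.connect(":memory:")  # stand-in for the MySQL instance used in the paper
conn.execute("""
CREATE TABLE user (
    id             INTEGER PRIMARY KEY,   -- user number
    id_card_number VARCHAR(32) NOT NULL,  -- ID card no.
    password       VARCHAR(16) NOT NULL,
    name           VARCHAR(8)  NOT NULL,  -- full name
    nation         VARCHAR(8)  NOT NULL,
    birth          DATE        NOT NULL,  -- birthday
    gender         TINYINT     NOT NULL,
    gmt_created    DATETIME    NOT NULL,  -- creation time
    gmt_modified   DATETIME    NOT NULL   -- modification time
)""")
# A made-up sample row (the ID number and name are fabricated for illustration)
conn.execute(
    "INSERT INTO user VALUES (1, '370100200101010012', 'secret', 'Zhang San', "
    "'Han', '2001-09-01', 1, '2020-12-28 10:00:00', '2020-12-28 10:00:00')")
print(conn.execute("SELECT name, nation FROM user").fetchone())  # ('Zhang San', 'Han')
```

The ORM mapping described in the text would map each entity attribute onto one of these columns, so CRUD calls on the object translate into SQL over this schema.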
4 Intelligent Learning System Test of Blockchain Regarding the test environment for the intelligent learning system, that is, the software and hardware configuration on the server side, the details are shown in Table 2.

Table 2. Details of test environment

Hardware configuration:
- Processor: 2.3 GHz Intel Core i5
- Memory: 16 GB 2133 MHz LPDDR3
- Hard disk: Macintosh HD 512 GB
- Network: 100/1000 M adaptive wireless Ethernet card
- Graphics card: Intel Iris Plus Graphics 640, 1536 MB

Software configuration:
- Operating system: macOS Mojave 10.14.5
- Truffle: v3.1.1
- Node: v6.9.2
- JDK: v1.8.0_201
- MySQL: v5.6
Attention should be paid to the degree of students' intelligent learning. While constructing the environment of the intelligent learning system, the learning community pays special attention to the subject status and role of the students who are the objects of intelligent learning,
especially the learners' self-cognition and reflection. Through systematic evaluation and diagnosis of students' learning situations, we can improve students' behavior or direction, combine self-assessment organically with evaluation by others, and make the system part of the process in which students learn to practice and reflect, discover themselves, and appreciate others. Only when students fully realize the rationality of other people's opinions and accept and internalize those evaluations can the blockchain learning system play its role in promoting their growth. The combination of quantitative and qualitative evaluation models: after the introduction of Tyler's goal-oriented traditional learning mode and the standardized tests of the 1980s, the construction of the intelligent learning system replaced the traditional learning model, analyzing and explaining all the phenomena in students' learning with data, and the empirical method came to dominate the whole field of learning systems. As long as the intelligent learning system has mastered the data and statistical analysis techniques, it can understand everything about learning.
5 Conclusions This paper mainly designs and studies the construction of an intelligent learning system based on blockchain. Firstly, the research background and current situation of blockchain and intelligent learning information systems at home and abroad are described, and the main research contents are proposed. The paper introduces the related technologies and basic principles of a blockchain-based intelligent learning information system and expounds the functions of these technologies in the system. It analyzes the research status, finds the theoretical basis, studies and summarizes the theory of centralized and decentralized management, and compares the technical and application characteristics of the traditional intelligent learning information system with those of the blockchain-based one. In the system design and analysis, blockchain, smart contracts, Java, and Node.js are used together; the teaching information management system is composed of four layers: the blockchain layer, smart contract layer, data interaction layer, and application layer. The specific functional modules are elaborated in detail, such as credit records, certificate storage and evidence collection, and community communication. Acknowledgments. (1) Shandong Province Education Science "Thirteenth Five-Year" Planning 2019 generally self-funded project: "Research on the Construction of intelligent Learning System in Universities under the background of Blockchain technology", Project No.: YC2019074. (2) The first batch of industry-university cooperative education projects in 2019: The teaching content and curriculum system reform of "Data Structure" in the Information 2.0 era, Project No.: 201901140023.
References 1. Xiong, W.: Research on credit authentication system based on blockchain technology. Beijing University of Posts and Telecommunications (2018) 2. Yefei, Z.: The evolution path of blockchain core technology – the evolution of consensus mechanism (1). Comput. Educ. 04, 155–158 (2017) 3. Shenzhen Municipal People's Government: Some measures for Shenzhen to support the development of the financial industry (SFG [2017] No. 2) 4. Xianmin, Y., Li Xin, W., Huanqing, Z.K.: Application mode and practical challenges of blockchain technology in the education field. Mod. Dist. Educ. Res. 02, 34–45 (2017)
Sports Detection System Based on Cloud Computing Sheng-li Jiao(&) Wanjiang University of Technology, Ma’anshan 243031, China [email protected]
Abstract. In view of fainting incidents among modern Chinese students during sports and physical fitness tests, a wearable detection method is realized: students' physical condition is monitored in real time through cloud computing, and a student physical fitness monitoring system is researched and designed. Sensors, the Android system, and low-power Bluetooth technology are used to realize a lightweight, portable exercise load detection system. The data is transmitted to the cloud, where sensor data processing enables step counting, and exercise energy consumption is measured precisely through the linear relationship between the integral of human body acceleration and energy expenditure. Combining exercise energy consumption with step rate can effectively meet the needs of detecting students' sports load, provide students with scientific and effective exercise standards so that they obtain the optimal exercise load, and help teachers set reasonable plans for students, effectively improving the scientific character of sports. Finally, the system is tested, which shows that the design in this paper has high detection accuracy for sports load, can fully reflect the students' physical state, and effectively improves the efficiency of students' sports training.

Keywords: Wearable · Cloud computing · Motion detection · Low-power Bluetooth
1 Introduction The continuous progress of modern science and technology promotes the continuous progress of society, and cloud computing technology is widely used in various industries [1]. Network technology promotes modern industry services and application innovation, including in sports. Wearable physical monitoring can realize the ideal state of a cloud-based student physique monitoring system. Wearable devices collect human physiological parameters, which network technology automatically transmits to the cloud for analysis and processing; the collected results are sent to teachers and an expert team for analysis. Finally, targeted diagnosis and scientific training guidance are given, which can effectively improve the effect of students' physical exercise, avoid the occurrence of sports syncope, and realize early warning © The Editor(s) (if applicable) and The Author(s), under exclusive license to Springer Nature Singapore Pte Ltd. 2021 M. Atiquzzaman et al. (Eds.): BDCPS 2020, AISC 1303, pp. 1725–1729, 2021. https://doi.org/10.1007/978-981-33-4572-0_255
for sub-healthy groups, and comprehensively monitor exercise load and physical recovery throughout the whole process of sports. Thus, a sports detection system based on cloud computing can be realized [2].
2 Cloud Computing Sports Detection Data Database Massive sports detection data form a set of nonlinear time series, which is analyzed here with nonlinear time-series analysis methods in the cloud computing setting. Assume that the storage structure model of sports detection data in the cloud computing environment is:

\(G^{(0)} = (V, E, L, L_E, \ell, g)\)   (1)

\(g : E \to L_E\)   (2)

The concept nodes of the two distributed cloud computing feature mappings and sports data management are:

\(G_1 = (M_{a_1}, M_{b_1}, Y_1), \quad G_2 = (M_{a_2}, M_{b_2}, Y_2)\)   (3)
Let \(A = \{a_1, a_2, \ldots, a_n\}\) be the set of fuzzy clustering centers of the feature vectors of massive university sports data, and construct the database structure model under the given cloud computing characteristic distribution structure. This paper assumes that the cloud computing storage database is classifiable, introduces a physical data layer management factor \(\beta \in (0, 5)\), partitions the concept lattice of the database, accesses and schedules the university sports data through the grid access mode, improves the data processing ability, and analyzes the storage structure of the massive university sports data under limited initial characteristic information.
Fig. 1. Path access graph of sports data stream mining
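The clustering-center partition described above can be sketched as a nearest-center assignment. The centers and the two-dimensional sports features (roughly, heart rate and step rate) below are hypothetical illustrations, not values from the paper.

```python
def nearest_center(x, centers):
    """Assign feature vector x to the index of the closest clustering center
    (squared Euclidean distance)."""
    dists = [sum((xi - ci) ** 2 for xi, ci in zip(x, c)) for c in centers]
    return dists.index(min(dists))

# Hypothetical centers a1, a2 for 2-D sports-feature vectors
centers = [(60.0, 1.2), (150.0, 2.8)]
records = [(65.0, 1.3), (148.0, 2.9), (72.0, 1.5)]

# Partition the detection records by their nearest center
partition = {}
for r in records:
    partition.setdefault(nearest_center(r, centers), []).append(r)
print(partition)  # {0: [(65.0, 1.3), (72.0, 1.5)], 1: [(148.0, 2.9)]}
```

In a full fuzzy-clustering scheme each record would get a membership degree to every center rather than a hard assignment; the hard version above only illustrates how the centers partition the data for grid access.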
Among the scattered points of the student sports detection database in the cloud computing environment, the path access graph model of sports data stream mining is constructed by feature mapping, as shown in Fig. 1. The human motion information collection system includes four parts: a microcontroller, an acceleration sensor, wireless communication, and data storage. The system collects students' sports information and processes the collected data in real time to obtain the athletes' sports load information. Finally, the wireless transmission module transmits the processed results to the cloud computing intelligent terminal, which then displays and records the sports results. The system can also save the collected data, so that the details of human movement can be observed comprehensively from the recorded data. The basis of the system hardware design is comprehensive monitoring of the wearing student. The core of the monitoring design is to use the processor to effectively configure the power module, temperature sensor, and heart rate sensor, and then integrate them to comprehensively detect the coefficients of students' movement. The terminal module of the system uses an ARM processor with high-speed internal memory and a comprehensive set of peripherals and ports, including multiple high-end timers and multiple communication interfaces, offering high performance at low cost.
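A minimal sketch of the step counting and energy estimation described in the paper: a simple threshold-crossing detector on the acceleration magnitude, plus a linear energy model on the summed (discretely integrated) acceleration. The threshold and the calibration constants k and b are hypothetical; the paper does not specify its algorithm at this level of detail.

```python
def count_steps(acc_mag, threshold=10.5):
    """Count rising crossings of the acceleration magnitude through a threshold.
    Each crossing is treated as one step (a simplification of real peak detection)."""
    steps, above = 0, False
    for a in acc_mag:
        if a > threshold and not above:
            steps += 1
            above = True
        elif a <= threshold:
            above = False
    return steps

def energy_kcal(acc_mag, dt, k=0.0005, b=0.0):
    """Linear model: energy ~= k * integral(|a|) dt + b.
    k and b are hypothetical calibration constants."""
    return k * sum(abs(a) for a in acc_mag) * dt + b

# Synthetic magnitude samples around gravity (~9.8 m/s^2) with three peaks
samples = [9.8, 9.9, 11.0, 9.7, 9.8, 11.2, 9.6, 9.8, 11.1, 9.8]
print(count_steps(samples))          # 3 steps detected
print(energy_kcal(samples, dt=0.02))
```

Combining the two outputs (energy per unit time and step rate) gives exactly the pair of indicators the paper uses to characterize a student's exercise load.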
3 Information Collection Module The system information collection module uses a digital acceleration sensor as the chip for system acceleration measurement. Figure 2 shows the internal circuit structure of the digital acceleration sensor. The three-axis acceleration sensor converts the physical
Fig. 2. Internal circuit structure of digital acceleration sensor
quantity of acceleration into an electrical signal, then converts it through an A/D converter, applies digital filtering, transmits the measurement results to the processing center, and finally transfers data over the SPI and I2C buses.
A NAND chip array organization is used to realize the design of the data storage module. The memory space of the chip is 1 GB. Figure 3 shows the structure of the NAND chip array organization. The address line is an 8-bit bus, and the address is input over 5 cycles. In addition to data, address, and command signals, the chip also has control signals such as WE, CE, and CLE; its main functions are chip select, read/write, command latch, address latch, etc.
Fig. 3. The structure of NAND chip array
In the process of chip writing, data loading can be realized: data loading is determined by the program, while data programming is determined by the chip itself. Its reading and writing speed is relatively fast, which effectively meets the system requirements. Figure 3 shows the interface design of a single chip [3].
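The 5-cycle address input over the 8-bit bus can be illustrated with a hypothetical split into 2 column-address cycles and 3 row-address cycles, which is a common NAND layout; the exact split for the chip used in the paper would come from its datasheet.

```python
def nand_address_cycles(column, row):
    """Split a NAND flash address into the 5 bytes sent over the 8-bit bus:
    2 column-address cycles (offset within a page), then 3 row-address
    cycles (the page number). This split is an assumed, typical layout."""
    return [
        column & 0xFF, (column >> 8) & 0xFF,                 # column cycles 1-2
        row & 0xFF, (row >> 8) & 0xFF, (row >> 16) & 0xFF,   # row cycles 3-5
    ]

print(nand_address_cycles(column=0x135, row=0x20001))  # [53, 1, 1, 0, 2]
```

Three row bytes address up to 2^24 pages, which comfortably covers the 1 GB chip at typical page sizes; the controller latches these bytes on successive ALE cycles.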
4 Design of System Service Platform

Table 1. Relationship between human comfort evaluation model and parameters

Comfort            | Physiological parameters/PP | Environmental parameters/EP | Conditions met
Comfortable        | Hc, Rc, Sc                  | Ts, Dc                      | PP = 3, EP < 2
Commonly           | Hc, Rc, Sc                  | Ts, Dc                      | PP = 3
Discomfort         | Hc, Rc, Sc                  | Ts, Dc                      | PP < 3
Extreme discomfort | Hc, Rc, Sc                  | Ts, Dc                      |

Depending on age, living habits, and health level, the suitable living environment for a human body differs, and the comfort of the living environment has a certain impact on human physiological health. Therefore, before designing the system service platform, it is necessary to create a human comfort model. Human comfort mainly includes various comfort models of environment and physiology, so as
to realize the comprehensive combination of humidity, temperature, climate, region, and race, and realize the creation of the human comfort model. Table 1 shows the relationship between the human comfort evaluation model and its parameters. The network service platform is the medium that realizes the service function, so it plays a particularly important role. When creating the network service platform, professional network design and servers should be used as the basis for the platform equipment, so that every function module of the service platform can run, effectively realizing the user experience service and internal analysis for management personnel. The circular management structure of the system service platform supports both online and offline services, expanding the service scope. This management mode requires not only correct decision-making by leaders but also a team culture. Therefore, when creating the system, management culture should be developed comprehensively to improve service quality. Wearable student physical fitness monitoring mainly consists of devices worn on the students: turn on the handheld terminal, the computer, and the related software, and the students' physique can be detected.
5 Conclusions This paper realizes the design of a student sports physique monitoring system based on cloud computing. Experiments show that the system designed in this paper can comprehensively detect students' physiological parameters throughout the whole process of sports, before, during, and after exercise, so as to provide scientific guidance for physical exercise and avoid the occurrence of sports syncope. The system can not only evaluate students' sports performance but also promote the development of students' physical monitoring and management, effectively promoting the development of students' sports activities.
References 1. Li, B., Yu, S., Yang, X., et al.: Wearable exercise intensity monitoring system. Comput. Syst. Appl. 24(5), 32–39 (2015) 2. Huang, W., Li, M., Xu, Q., et al.: Design of wearable student physical fitness monitoring system. Electron. Technol. Softw. Eng. 16(4), 38 (2016) 3. Chunxiu, H.: Design of wearable student physical fitness monitoring system. Electron. Technol. Softw. Eng. 35(5), 64 (2017)
Analysis of the Combination and Application of Design Software in Computer Graphic Design Rong Li(&) Taizhou Polytechnic College, Taizhou 225300, China [email protected]
Abstract. In daily study and work, only by fully grasping the advantages of various software can we complete our work better. This paper uses graphic design process examples to experience the respective features of the CorelDRAW and Photoshop software and the relationship between them.

Keywords: Graphic design · Design software · Photoshop · CorelDRAW
1 Introduction With the rapid development of computers and networks, computer graphic design is widely used in various industries: printing, advertising, corporate image design, enterprise clothing design, software, games, film and television, and the Internet. The requirements for computer graphic design talents are higher and higher. In design work, it is particularly important to master the operation of computer software quickly and skillfully.
2 Graphic Design Software Graphic design software can be divided into bitmap (dot matrix) image software and vector graphics software. The most famous bitmap image software includes Photoshop, Painter, Fireworks, PhotoStyler, and Picture Publisher; a bitmap image has a fixed resolution and is mainly obtained through a scanner or digital camera. The most famous vector graphics software includes Illustrator, CorelDRAW, FreeHand, and PageMaker; vector graphics are described mathematically by curves and lines (including the fonts in the various font libraries we usually use). The most commonly used, Photoshop, is an excellent image processing program developed by Adobe. It is also one of the most widely used image processing programs in the world, with very powerful image processing functions [1, 2]. The software supports a variety of color modes and can output different formats, making image selection, image editing, color adjustment, and painting convenient and fast. The processed image can be stored in any format; saved in JPG format, it can be transmitted over the Internet. With the popularization of computers today, it can be carried easily. This is undoubtedly a powerful tool for © The Editor(s) (if applicable) and The Author(s), under exclusive license to Springer Nature Singapore Pte Ltd. 2021 M. Atiquzzaman et al. (Eds.): BDCPS 2020, AISC 1303, pp. 1730–1734, 2021. https://doi.org/10.1007/978-981-33-4572-0_256
advertising professionals and graphic design practitioners to create print advertising works. CorelDRAW is drawing software that integrates graphics drawing, printing, typesetting, and text editing and processing. The software is characterized by accurate drawing, small storage size, and arbitrary scaling without changing line thickness. It is widely used in graphic advertising design, CIS corporate image planning, indoor and outdoor decoration design, product packaging design, web design, and printing plate making, and it has strong vector drawing functions. CorelDRAW's word processing is also very powerful and is used for large amounts of text such as picture albums and newspapers. Although the two programs have some similar functions in graphic design, can sometimes achieve similar effects, and even share many features, each has its own characteristics and strengths. Creating a high-quality, efficient graphic advertising design work is often realized by using two or more programs alternately. For example, editing and processing of pictures is generally done in Photoshop, where the appropriate image is selected, processed, and recombined; the excellent interactive tools in CorelDRAW cannot be replaced by other software; and the strokes in Painter, which simulate various painting effects, are unmatched and are that program's technical advantage. Only by fully mastering the advantages of each program and learning from each other's strengths can we complete our works better. Next, we will experience the respective characteristics and mutual relationship of CorelDRAW and Photoshop through graphic design process examples.
3 Combination of Graphic Design and Software Whether a graphic work leaves a deep impression on people depends, to a large extent, on whether the performance of the images in the work can catch consumers and arouse their resonance. Image sources in the information age generally include digital photos, CD-ROM picture materials, pictures input through scanners, and pictures downloaded from the Internet. At the same time, the effect of a picture may not be ideal: the image may not be clear enough, the color may be uneven, or the background may be messy [3, 4]. If a photo is not processed, the effect of the picture will be spoiled and the final overall performance of the work will be affected. In this case, the image can be applied in the design work only after image processing: use Photoshop to process the color of the picture, adjust the hue and saturation of the background and main body, crop the image, apply image effects, and composite the image. With the adjustments under the Image menu of Photoshop, it is easy to get the desired effect. Of course, CorelDRAW also provides such functions, but using it for image processing is inconvenient and introduces large color deviations; the on-screen display and the output of the work are often worlds apart. Therefore, CorelDRAW is not usually used to process images in design. In modern graphic design, fonts play an important role in transmitting information, guiding visual attention, and reflecting the design content. In font design, different
expression techniques are often used to handle the changes. In the process of making text, a considerable number of students are used to inputting text only with Photoshop, even for longer passages. In fact, this method is not fast, and the effect is not ideal: the text may be unclear and hard to change. In laying out a page, because the layout needs to be adjusted repeatedly and compared before and after, it is inevitable to deform, enlarge, and reduce the text and images many times. Processing these in Photoshop will blur the image and make the lettering unclear, as shown in Fig. 1 and Fig. 2 (Fig. 1 shows the output of bitmap image software at 10× magnification; Fig. 2 shows the output of vector graphics software at 50× magnification). For multi-text processing, we can use CorelDRAW to import the image processed by Photoshop, then input the text, and finally arrange and adjust the layout. For example, in teaching practice, when making New Year greeting cards, the pictures in the cards can be processed in Photoshop and imported into CorelDRAW for inputting text and arranging images, words, and graphics. Through such comparative training, students can understand the functional characteristics of the two programs [5].
Fig. 1. Effect of dot matrix image software output
Fig. 2. The effect of vector image software output
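The bitmap-versus-vector contrast that Figs. 1 and 2 illustrate can be demonstrated in a few lines. The following toy sketch (not from the paper; the shapes and sizes are invented for illustration) shows why a bitmap magnified by pixel replication keeps its jagged blocks, while a "vector" description, here a shape defined analytically over continuous coordinates, can be rasterized cleanly at any target size:

```python
def upscale_bitmap(bitmap, factor):
    """Nearest-neighbour magnification: each source pixel becomes a block,
    so edges stay jagged no matter how large the factor."""
    return [
        [bitmap[r // factor][c // factor]
         for c in range(len(bitmap[0]) * factor)]
        for r in range(len(bitmap) * factor)
    ]

def rasterize_disk(size):
    """'Vector' shape: a disk defined analytically, so it can be sampled
    afresh at any resolution with no magnification artifacts."""
    centre, radius = (size - 1) / 2, size / 2 - 0.5
    return [
        [1 if (r - centre) ** 2 + (c - centre) ** 2 <= radius ** 2 else 0
         for c in range(size)]
        for r in range(size)
    ]

if __name__ == "__main__":
    tiny = [[0, 1], [1, 0]]        # a 2x2 "bitmap"
    big = upscale_bitmap(tiny, 4)  # 8x8: two 4x4 jagged blocks
    print(len(big), len(big[0]))   # 8 8
    disk = rasterize_disk(64)      # the same shape redrawn sharply at 64x64
```

The point mirrors the paper's recommendation: raster output is fixed at capture resolution, while vector output is regenerated for each device.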
4 Application of Software in Graphic Design Besides images and text, graphic design works also include the drawing and creation of graphics. In design, graphics have a unique charm that can be felt but not easily put into words; as a visual symbol common to all human beings, they communicate with people across many fields. Graphics in design come from two sources. First, graphics can be drawn according to the needs of enterprises, such as trademarks, logos, mascots, and product modeling designs; for these, vector drawing software is the better choice. CorelDRAW provides many drawing tools, including the toolbox, intelligent drawing
tools, stroke types, spray cans, and various graphic styles, offering strong selectivity and convenient operation. The second source is the graphics and tables supplied by customers for logos and creative expression, usually as bitmaps. Such existing graphics or tables can be input through a scanner and then imported into CorelDRAW for redrawing, so that they can be enlarged and reused later. Because CorelDRAW describes vector graphics, and vector graphics can be output at the highest resolution of any output device, the output quality is guaranteed; a bitmap, by contrast, becomes blurred and jagged at its edges when enlarged. Although Photoshop has similar functions, it is not as convenient as CorelDRAW for processing graphics, nor is its output as good. CorelDRAW is therefore recommended as the first choice for graphic drawing, because the graphics it produces can be scaled into print advertising works of different sizes without affecting picture quality. The use of color in graphic design can raise the attention a design receives: when people see a work, they evaluate it first by its color. Color adjustment in software and the color effect of the output are therefore very important to the design, and color can be corrected with Photoshop. Most computer monitors adopt the RGB mode, in which an image is composed of red (R), green (G), and blue (B), while printed output is composed of four inks: cyan (C), magenta (M), yellow (Y), and black (K). Therefore, the colors of graphics must be corrected before output. Photoshop provides the Lab standard color mode, an intermediate mode between RGB and CMYK, whose feature is that colors appear the same on different monitors or printing devices. In addition, the colors displayed in Photoshop are more accurate than those in CorelDRAW.
The specific method: export the work arranged in CorelDRAW to Photoshop format, then open the file in Photoshop for color correction, so as to obtain a satisfactory color effect.
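The RGB-to-CMYK gap discussed above can be made concrete with the standard device-naive conversion formula. This sketch is only an illustration: real prepress tools such as Photoshop convert through a device-independent space (such as the Lab mode mentioned above) using ICC profiles, which this simple formula ignores:

```python
def rgb_to_cmyk(r, g, b):
    """Naive RGB (0-255) -> CMYK (0-1) conversion with full black generation.

    Illustrative only: it shows why on-screen (additive RGB) and printed
    (subtractive CMYK) colours must be reconciled before output, not how a
    colour-managed application actually performs the conversion.
    """
    if (r, g, b) == (0, 0, 0):
        return 0.0, 0.0, 0.0, 1.0
    c, m, y = 1 - r / 255, 1 - g / 255, 1 - b / 255
    k = min(c, m, y)  # black ink replaces the grey component common to C, M, Y
    return tuple(round((v - k) / (1 - k), 4) for v in (c, m, y)) + (round(k, 4),)

print(rgb_to_cmyk(255, 0, 0))  # pure screen red -> (0.0, 1.0, 1.0, 0.0)
```

Note that pure RGB red maps to full magenta plus full yellow ink, with no cyan or black, which is exactly the kind of remapping the correction step must get right.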
5 Conclusions Today, when the computer has become the main tool in our design and creation, we should make it a true friend. We should take the initiative to understand it, just as we come to know a friend, in the spirit of "know yourself and know your opponent, and you will win every battle". In design and application, we should grasp the functions and characteristics of the different design programs, give full play to their strengths, and do our best to bring the effect of computer graphic design works to the ideal state.
References 1. Yi, F.: Excellent course construction and practice of Photoshop graphic design. Everyone, no. 14 (2011) 2. Lattice Diamond design software launched by Lattice Semiconductor 3. Lattice's new mixed-signal design software simplifies platform management design. Electronic Design Engineering, no. 15 (2011)
4. Wang, P., Yang, H.: Teaching methods and reform of the CorelDRAW course for art design majors. Grand Stage, no. 07 (2011) 5. Lu, H., Qin, L., Zhang, H.: Some skills of CorelDRAW, no. 19 (2011)
The Application of Data Resources in Art Colleges and Universities Under the Big Data Environment Hong Liu Shihezi University, Shihezi, Xinjiang 832000, China [email protected]
Abstract. In today's era, art colleges and universities inevitably intersect with big data, which brings both opportunities and challenges. Starting from an introduction to big data, this paper studies and analyzes the current application status and problems of data resources in colleges and universities, and examines the characteristics and shortcomings of art colleges in particular. Keywords: Big data · Data resources · Art colleges
1 Introduction Correlation refers to a quantitative relationship between objective things; in nature there is always some connection between various things and phenomena. A significant feature of the era of big data is that, in data relationship analysis, the description of causality is replaced by the description of correlation. The correlation coefficient in statistics is a quantitative description of the correlation between two variables. In the era of big data, the correlation coefficient has important applications in data analysis, so we should actively study its definition and calculation method to provide a reference for practical application.
2 About Big Data With the continuous development and progress of science and technology, people can feel data everywhere, no matter where they are: the network, news, video, social circles, shopping. Every moment, massive data of various types is generated rapidly and interacts. When you choose a product on a shopping website or mark your preference on a food platform, you too are contributing to the generation of big data [1]. At present, many companies and scientific research institutions have defined big data. Although we cannot give a single unified definition in the way the hypertext markup language (HTML) has one, one or two things can still be seen from the descriptions of big data: big data refers to collections of massive data beyond the traditional capacity of data processing, storage, and analysis. These data have the characteristics © The Editor(s) (if applicable) and The Author(s), under exclusive license to Springer Nature Singapore Pte Ltd. 2021 M. Atiquzzaman et al. (Eds.): BDCPS 2020, AISC 1303, pp. 1735–1739, 2021. https://doi.org/10.1007/978-981-33-4572-0_257
of large volume, high speed, and diversity. The application range of big data is also very wide: from a city's intelligent transportation and power distribution systems, to e-commerce shopping and service platforms of all kinds, to second-hand housing sales chains, supermarkets, and so on. Big data has five characteristic "V"s: Volume (large amount), Velocity (high speed), Variety (diversity), low Value density, and Veracity (authenticity), as shown in Fig. 1.
Fig. 1. 5 V features of big data
The world is talking about big data. Besides these characteristics, the efficiency and accuracy of big data in processing and solving complex problems are also why people pay more and more attention to it and rely on it increasingly. For example, a well-known "211" university in China has deployed near field communication (NFC) technology on campus. Through mobile positioning and NFC data exchange, the school can quickly know students' consumption, dining, study, attendance, book borrowing, and so on. By fusing these records with the educational administration data in the data center and applying big data mining for multi-channel analysis, one can even observe, within a certain range, a linear relationship between students' consumption and academic performance. Without the support of big data technology, it would be impossible to complete such a series of complex analyses efficiently and accurately while these data remain scattered across systems [2].
3 The Current Situation and Problems of University Data Resources Application in a Big Data Environment Big data is no longer limited to the field of technology; it has penetrated all aspects of life. As a carrier of social functions such as talent cultivation, scientific research, social service, and cultural heritage, universities are naturally inseparable from the application of big data. Data resources in colleges and universities fit the characteristics of big data closely: (1) Volume: unlike the tens of millions of customer transaction records produced daily by securities companies or banking systems, a large part of the data resources in colleges and universities are subject resources from departments, laboratories, and so on, with some coming from students' activities [3, 4]. (2) Velocity: in the early Internet era, data was not generated quickly, but the situation changed greatly with the arrival of big data. Take university news as an example: in the past there were only a few news items a day; now, with the university's public account and those of its secondary colleges and departments, dozens or over a hundred items are published every day, and if the information students release on campus for sharing among friends is counted, data is generated even faster. (3) Variety: data resources in colleges and universities include structured and unstructured data, text, and all kinds of audio-visual and other multimedia data. Especially in art colleges, video and audio data can be subdivided by major, such as music, art, performance, and design. (4) Value (low value density): much of the data seems trivial, for example tomorrow's weather or what to eat for lunch; yet it still contains valuable information, from which, for instance, students' ways of making friends can be analyzed.
Also, to manage the uncertainty of data, we can filter the data through technical and management means to obtain more accurate data resources.
4 Application of Correlation Coefficient Based on Big Data Background In the era of big data, the data scale is large and the relationship between data is complex. In statistics, several commonly used formulas can be used to calculate the correlation coefficient, which not only reflects the relationship between data but also is not limited by variable units, so it has a wide range of applications. According to the knowledge of probability, it can be concluded that:
$$\sigma_x = \sqrt{\frac{1}{n}\sum_{i=1}^{n}(x_i-\bar{x})^2},\qquad \sigma_y = \sqrt{\frac{1}{n}\sum_{i=1}^{n}(y_i-\bar{y})^2},\qquad \sigma_{xy} = \frac{1}{n}\sum_{i=1}^{n}(x_i-\bar{x})(y_i-\bar{y})$$

It is concluded that

$$r_{xy} = \frac{\sigma_{xy}}{\sigma_x\,\sigma_y} = \frac{\frac{1}{n}\sum_{i=1}^{n}(x_i-\bar{x})(y_i-\bar{y})}{\sqrt{\frac{1}{n}\sum_{i=1}^{n}(x_i-\bar{x})^2}\;\sqrt{\frac{1}{n}\sum_{i=1}^{n}(y_i-\bar{y})^2}}$$
With the support of big data technology, we can easily obtain all the data about a research object and collect and analyze it dynamically. From the relationship between two variables we can analyze the trend of things over a coming period. Traditional correlation calculation only tests the relationship within past data, while correlation calculation under big data technology can also reflect the future trend of the data.
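The correlation coefficient just defined can be sketched in a few lines of Python. The campus-style data below (library hours versus course scores) is invented purely to illustrate the kind of consumption-versus-performance analysis the paper mentions:

```python
import math

def pearson(xs, ys):
    """Pearson correlation coefficient:
    r = sum((x-mx)(y-my)) / sqrt(sum((x-mx)^2) * sum((y-my)^2))."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = math.sqrt(sum((x - mx) ** 2 for x in xs))
    sy = math.sqrt(sum((y - my) ** 2 for y in ys))
    return cov / (sx * sy)

# Hypothetical campus data: weekly library hours vs. course score.
hours = [2, 4, 6, 8, 10]
scores = [60, 65, 72, 80, 88]
print(round(pearson(hours, scores), 3))  # 0.996: strong positive correlation
```

A value near +1 or −1 indicates a strong linear relationship between the two variables; a value near 0 indicates none, regardless of the units in which either variable is measured.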
5 Characteristics of Art Colleges and Universities Compared with general comprehensive universities, the digital media resources of art colleges are more abundant. Especially after art was promoted to the 13th discipline category in 2011, a large number of art data resources emerged, and art colleges are sparing no effort to develop their own subject data resources. Compared with the data resources of other disciplines, resources such as animation, film, and television occupy far more storage space and server resources than traditional text or document data. In the past, each department managed these art discipline resources in its own way, and the phenomenon of information islands was obvious: large amounts of server resources and storage space were not effectively used, and discipline resources spread no further than the small circle of each department. Given this situation, art colleges and universities should break down these information islands as soon as possible, reconstruct their data resource pattern, fragment data by level and dimension, learn from the comprehensive data utilization techniques of the big data environment, establish an effective data center platform, and use underlying data penetration and a "cloud computing" architecture to improve the flow and utilization of valuable art data resources. Virtual server technology with large storage space can also solve the problem of idle storage and server resources in each department, and virtual machine disaster recovery and backup can restore important data resources in the shortest time when data is damaged or lost. Art colleges and universities can formulate a unified management mode for art data resources according to their own situation.
When departments (or professional fields) such as media, design, and music need to establish or expand subject data resources, they can reasonably purchase or allocate storage space and server resources according to the unified planning of the school. This not only lets all kinds of art subject data resources run in an efficient overall framework, but also saves the college considerable software and hardware procurement funds.
6 Conclusions Big data has brought us opportunities and many conveniences, enabling the explosive growth and accumulation of data resources in colleges and universities and letting us obtain faster and more accurate analysis results than in the past. At the same time, big data also brings challenges such as data security and privacy protection. Colleges and universities, especially art colleges, should make rational use of the conveniences brought by big data, give full play to their strengths, build and accumulate their own high-quality data resources, guard against risks, and manage security well at both the technical and non-technical levels, so as to meet these opportunities and challenges and fare better on the road of information construction.
References 1. Yuanzhuo, W., Xiaolong, X., Xueqi, C.: Network big data: current situation and prospect. Chin. J. Comput. 6, 1125–1138 (2013) 2. Dengguo, F., Min, Z., Li, W.: Big data security and privacy protection. Chin. J. Comput. 1, 246–258 (2014) 3. Mayer-Schönberger, V., Cukier, K.: Big Data Era. Translated by Sheng, Y., Zhou, T. Zhejiang People's Publishing House, Hangzhou (2013) 4. Yongmei, J., Zhonghua, N.: Research on correlation coefficient based on big data background. J. Shangqiu Vocat. Tech. Coll. 16(05), 68–71 (2017)
The Linear Capture Method of Tennis Forehand Stroke Error Trajectory Based on the D-P Algorithm You Sun Sanya Aviation and Tourism College, Sanya 572000, China [email protected]
Abstract. To improve the efficiency of the tennis forehand stroke, the error trajectory of the ball must be captured with high resolution; the D-P algorithm is used to capture the error trajectory of the tennis forehand stroke. The linear image of the error trajectory is collected under a three-dimensional visual model. The wavelet multi-scale decomposition method is used to filter the error trajectory, and the feature points of the linear edge contour of the trajectory are extracted. Gray-histogram feature extraction is used to enhance the gray information of the linear image of the stroke error trajectory. Combined with block feature matching, the key action feature points of the trajectory alignment are located, and the D-P algorithm adjusts and corrects errors in the linear capture of the stroke error trajectory, realizing correct capture of the trajectory line. The simulation results show that the accuracy of this method is high, the image output signal-to-noise ratio is high, and the action correction ability is strong. Keywords: D-P algorithm · Trajectory line capture · Tennis · Forehand stroke · Wrong action
1 Introduction With the development of image processing technology, it is more and more widely used in sports training, where it is key to improving the correction of sports actions and the effect of training. Tennis batting is complex and difficult; technical actions need real-time analysis and normative correction to improve motion planning ability. The forehand stroke is the key to scoring. The trajectory alignment of the forehand stroke error is captured and analyzed with image processing technology to improve the accuracy of the stroke. Image processing and computer 3D visual analysis methods are used to capture the error trajectory of the tennis forehand stroke, and an expert database is constructed for visual analysis, so as to capture and correct the error trajectory. The D-P algorithm is used to capture the error trajectory of the tennis forehand [1]. © The Editor(s) (if applicable) and The Author(s), under exclusive license to Springer Nature Singapore Pte Ltd. 2021 M. Atiquzzaman et al. (Eds.): BDCPS 2020, AISC 1303, pp. 1740–1744, 2021. https://doi.org/10.1007/978-981-33-4572-0_258
2 Image Acquisition and Preprocessing
2.1 Image Acquisition of the Wrong Motion Track of the Tennis Forehand
To capture and identify the error trajectory of the tennis forehand stroke based on computer three-dimensional vision analysis, the digital characteristics of the error trajectory must first be collected; digital imaging equipment then captures the trajectory alignment, combined with the computer three-dimensional vision acquisition method for image collection and feature-point analysis. The frame difference of the three-dimensional vision acquisition is g′, the sampling template is X = [1, −1], and the segmented pheromone of the edge contour of the single-frame error track is defined as G′:

$$\min F(x) = (f_1(x), f_2(x), \ldots, f_m(x))^T \quad \text{s.t.}\ g_i(x) \ge 0,\ i = 1, 2, \ldots, q \tag{1}$$

$$h_j(x) = 0,\quad j = 1, 2, \ldots, q \tag{2}$$

In the imaging sequence acquisition, assume that Y = [1, −1]^T is the high-frequency part of the linear capture image of the error track, with gradient [g_x, g_y]. In the computer three-dimensional imaging space, the space-invariant feature decomposition method is used, and the texture information feature conduction model for the linear capture of the action image is described as:

$$c(x, y) = \sum w\,[I(x_i, y_i) - I(x_i + \Delta x,\ y_i + \Delta y)]^2 \tag{3}$$
where (Δx, Δy)^T is the displacement of the tennis forehand stroke error trajectory, and (x_i, y_i) is the coordinate point captured on the trajectory, following the probability density function of the image's position distribution in the air. According to the transmission structure of the visual information characteristics, the background image B and the foreground image I of the tennis movement are reconstructed, and the noise distribution model of the error trajectory is obtained by scale decomposition and information fusion. The distribution module of the hitting trajectory is divided into (W/2) × (H/2) sub-blocks, and the image information fusion capture equations are expressed as:

$$X = V \cos\theta \cos\phi,\qquad Y = V \sin\theta \tag{4}$$

$$Z = V \cos\theta \sin\phi,\qquad \vartheta = w_y \sin\gamma + \cos\gamma \tag{5}$$
where x, y, z are the distribution feature quantities of the linear local information feature points of the tennis forehand stroke error trajectory, so as to realize the
image acquisition of tennis forehand stroke error action track, and the acquisition model is shown in Fig. 1 [2].
Fig. 1. Image acquisition model of tennis forehand stroke error trajectory
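The texture conduction criterion of Eq. (3) is, in essence, a weighted sum of squared differences (SSD) between an image block and its displaced copy. As a hedged sketch of how such a criterion drives block matching (the frame data and block size below are invented for the demo; the paper's own pipeline is more elaborate):

```python
def ssd(block_a, block_b, w=1.0):
    """Weighted sum of squared differences between two equal-size gray
    blocks, as in the c(x, y) criterion of Eq. (3): smaller = better match."""
    return w * sum(
        (a - b) ** 2
        for row_a, row_b in zip(block_a, block_b)
        for a, b in zip(row_a, row_b)
    )

def best_match(template, frame, size):
    """Slide `template` (size x size) over `frame` exhaustively and return
    the offset (dy, dx) that minimises the SSD criterion."""
    h, w = len(frame), len(frame[0])
    candidates = {
        (dy, dx): ssd(template,
                      [row[dx:dx + size] for row in frame[dy:dy + size]])
        for dy in range(h - size + 1)
        for dx in range(w - size + 1)
    }
    return min(candidates, key=candidates.get)
```

Locating a distinctive 2×2 patch in a 5×5 frame with `best_match(patch, frame, 2)` recovers the patch's offset: the zero-SSD position is the matched displacement of the feature point between frames.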
2.2 Linear Filtering Preprocessing of the Hitting Error Trajectory Linear Graph
Based on the linear image acquisition of the tennis forehand stroke error trajectory, the wavelet multi-scale decomposition method is used to filter the trajectory, separating the noise within the image area. The wavelet analysis method yields the multiple color-difference kernel matrix of the trajectory:

$$\begin{cases} f(x_1, x_2) = r_1 x_1 \left(1 - \dfrac{x_1}{N_1} - r_1 \dfrac{x_2}{N_2}\right) = 0 \\[4pt] g(x_1, x_2) = r_2 x_2 \left(1 - r_2 \dfrac{x_1}{N_1} - \dfrac{x_2}{N_2}\right) = 0 \end{cases} \tag{6}$$

In the above formula, r₁ is the eigenvalue of the state correlation estimation of the tennis trajectory and the feature matching degree, r₂ is the correlation coefficient, and N₁ is the linear component of the tennis motion track. Setting h as the edge pixel set of the forehand error trajectory alignment, the adaptive block feature matching method is used to segment the contour of the hitting error trajectory linear image.
3 Optimization of Linear Capture of the Forehand Error Motion Trajectory in Tennis
3.1 Linear Feature Extraction of the Tennis Forehand Stroke Error Trajectory
Based on the wavelet multi-scale decomposition method, the error trajectory of the tennis forehand stroke is captured. In this paper, a method based on the D-P algorithm is proposed to capture the error trajectory of the tennis forehand stroke, and the edge
contour feature points of the error trajectory of the tennis forehand stroke are extracted. The mother wavelet function for linear filtering of the error trajectory is given [3, 4]. For k adjacent points, the map of the error trajectory is decomposed over the basis functions of the mother wavelet. For the feature points of the error action obtained by wavelet transform, the multi-scale wavelet decomposition method filters the error trajectory, yielding the gray-scale pixel set of the forehand error trajectory.
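The paper never expands "D-P", but in trajectory processing it conventionally denotes the Douglas–Peucker polyline simplification algorithm, which reduces a captured trajectory to the few points that carry its shape. As a hedged illustration under that assumption (the sample track and tolerance are invented), a minimal recursive implementation:

```python
import math

def perp_dist(p, a, b):
    """Perpendicular distance from point p to the chord through a and b."""
    (px, py), (ax, ay), (bx, by) = p, a, b
    dx, dy = bx - ax, by - ay
    seg = math.hypot(dx, dy)
    if seg == 0:
        return math.hypot(px - ax, py - ay)
    return abs(dy * px - dx * py + bx * ay - by * ax) / seg

def douglas_peucker(points, eps):
    """Keep only points deviating more than eps from the current chord;
    recurse on each side of the farthest such point."""
    if len(points) < 3:
        return list(points)
    dists = [perp_dist(p, points[0], points[-1]) for p in points[1:-1]]
    i = max(range(len(dists)), key=dists.__getitem__) + 1
    if dists[i - 1] <= eps:
        return [points[0], points[-1]]      # whole run is nearly straight
    left = douglas_peucker(points[:i + 1], eps)
    return left[:-1] + douglas_peucker(points[i:], eps)

track = [(0, 0), (1, 0.1), (2, -0.1), (3, 5), (4, 6), (5, 7), (6, 8.1), (7, 9)]
print(douglas_peucker(track, 1.0))  # [(0, 0), (2, -0.1), (3, 5), (7, 9)]
```

The retained points are exactly the corners where the trajectory changes direction, which is why this kind of simplification is a natural fit for locating the key action points of a stroke trajectory.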
4 Simulation Experiment Analysis To test the performance of this method in capturing the tennis forehand stroke error trajectory line and extracting its features, a simulation experiment is carried out in MATLAB. The scanning frequency of the trajectory line is 16 kHz, and MATLAB functions such as callReceive are used to collect the error trajectory. Feature matching of the trajectory is carried out in 5 × 5 block mode. The sample set of the tennis three-dimensional vision collection is 1. According to the above simulation environment and parameter settings, the simulation experiment on the error trajectory is carried out.
5 Conclusions The accuracy of the forehand stroke can be improved by capturing the error trajectory of the tennis forehand and analyzing it with image processing technology. The D-P algorithm is used to capture the error trajectory: the image of the trajectory is collected under the three-dimensional visual model, the wavelet multi-scale decomposition method filters the trajectory, and the edge contour feature points are extracted. Gray-histogram feature extraction enhances the gray information of the linear image of the wrong stroke trajectory. Combined with block feature matching, the key action feature points of the trajectory alignment are located, and the D-P algorithm adjusts and corrects the errors of capture, realizing correct capture of the track line. The research shows that the accuracy and real-time performance of this method are high.
References 1. Bo, C.: Teaching design of tennis forehand stroke based on VR technology. Contemporary Sports Science and Technology 8(22), 43–44 (2018) 2. Modern Electronic Technology 41(11), 162–165 (2018)
3. Yong, X., Ye, L.: The effect of tennis racket string diameter on baseline strokes and on the arm and elbow joints of the racket-holding side. J. Phys. Educ. 25(03), 134–139 (2018) 4. Zhang, Y., Xiaoyan, W.: Low altitude target image detection based on mixed gray difference index measurement method. J. Electron. Measur. Instrum. 29(8), 1196–1202 (2015)
Study on Fracture Behavior of Modified Polymer Materials by Digital Image Correlation Method Xiangjun Wang Ningxia Vocational Technical College of Industry and Commerce, Yinchuan 750021, Ningxia, China [email protected]
Abstract. A digital image correlation method (DICM) is developed in this paper. The initial estimates of both displacements and displacement gradients for the Newton-Raphson iteration are studied. A new method of initial estimation, in which a real-time image subtraction scheme and a micro-motion compensation mechanism allow zero to be taken as the initial estimate, is presented and demonstrated for the first time, solving some problems in the initial estimation and convergence of the Newton-Raphson iteration. The fracture behavior of PE composites is studied using DICM. Keywords: Digital image correlation method · Real-time subtraction · J integral · Stress-intensity factor · Engineering fracture
1 Introduction In recent years, the study of polymer modification has been very active. Some high-quality materials (such as corrosion-resistant, high-temperature-resistant, high-strength and high-modulus materials, and functional polymers with special optical, electrical, and magnetic properties) are used in oil pipelines, natural gas pipelines, oil tanks, aerospace, and missiles. In this paper, the macro fracture behavior of a high polymer (corrosion-resistant modified polyethylene) is experimentally studied, providing a reliable basis for the application of new structural materials (oil pipelines, aviation materials, etc.) and an experimental basis for the fracture strength design of new materials in engineering applications [1]. The digital image correlation measurement method is an optical measurement method that has developed along with photoelectric technology, video technology, and computer vision. Because of its advantages, such as simple acquisition of the original data (speckle images), low demands on the measurement environment, direct measurement of both displacement and strain, and ease of automation, digital image correlation has become an important measurement method in the field of
© The Editor(s) (if applicable) and The Author(s), under exclusive license to Springer Nature Singapore Pte Ltd. 2021 M. Atiquzzaman et al. (Eds.): BDCPS 2020, AISC 1303, pp. 1745–1750, 2021. https://doi.org/10.1007/978-981-33-4572-0_259
experimental mechanics. Important achievements have been made both in the improvement of the test method and in its applied research. Digital image correlation measurement can be divided into correlation search and correlation iteration. The results of the correlation search method include only displacement parameters, with strain obtained by differentiating the displacement; the results of the correlation iteration method contain both displacement and strain, making it a full-field deformation measurement method. At present, most domestic scholars use the correlation search method, and through their efforts the sensitivity of displacement measurement has reached 0.01 pixels [2]. In this paper, the experimental technique of the digital image correlation iterative measurement method is improved to obtain a zero initial value for the iteration and to raise the speed and accuracy of the correlation iteration. The method is applied to the fracture behavior of modified polymers.
2 Principle of the Digital Image Correlation Measurement Method The digital image correlation measurement method determines displacement from the probability-statistical correlation of the randomly distributed speckle field on the object surface before and after deformation. In the measurement process, the speckle image reflecting the surface information of the object is recorded by a camera (CCD), stored in frame memory after A/D conversion, and displayed on the monitor after D/A conversion. Generally, each image is a 512 pixels × 512 pixels matrix, and 8-bit A/D conversion gives gray levels of 0–255. The light intensities of the two digital speckle fields [3] are, before deformation, f(x_i, y_i), and after deformation, g(x_i*, y_i*), where:
$$x_i^* = x_i + u + \frac{\partial u}{\partial x}\Delta x_i + \frac{\partial u}{\partial y}\Delta y_i \tag{1}$$

$$y_i^* = y_i + v + \frac{\partial v}{\partial x}\Delta x_i + \frac{\partial v}{\partial y}\Delta y_i \tag{2}$$
Assume that the displacement of the central point of subset S of the speckle displacement field is (u, v) and the strain components are ∂u/∂x, ∂u/∂y, ∂v/∂x, ∂v/∂y. Then the light intensity at any point (x, y) in the speckle pattern before deformation corresponds to the light intensity at (x + u + (∂u/∂x)Δx + (∂u/∂y)Δy, y + v + (∂v/∂x)Δx + (∂v/∂y)Δy) on the deformed
speckle pattern. If its size is m pixels × m pixels, then subset S_u records the light intensity information of the scattered spots randomly distributed around point P, which statistics defines as a two-dimensional sample space. After the displacement, the spots in the original subset S_u lie at the corresponding positions in subset S_d, in one-to-one correspondence with the original spots, forming another sample space. The correlation coefficient C is

$$C = \frac{\sum f(x, y)\, g(x^*, y^*)}{\sqrt{\sum f^2(x, y)\; \sum g^2(x^*, y^*)}} \tag{3}$$
where (x, y) and (x*, y*) are the rectangular coordinates of point P in image 1 and image 2 respectively; f(x, y) and g(x*, y*) are the light intensities of the corresponding image subsets; Σf²(x, y) and Σg²(x*, y*) are called autocorrelation functions, and Σf(x, y)g(x*, y*) is called the cross-correlation function. When C = 1, the two sub-regions are completely correlated; when C = 0, they are uncorrelated. The related index can also be expressed in another form by defining the correlation factor S:
ð4Þ
When S = 0, the sub-regions are correlated; when S = 1, they are not. The correlation factor S is a function of the displacements and their derivatives to be determined. When trial values of the displacements and their first derivatives are substituted into Eq. (4), the trial values that minimize the correlation factor are the displacements and derivatives of the real sample image. It follows from Eq. (4) that the necessary condition for S = S_min is

$$S_j(u_1, u_2, \ldots, u_6) = 0 \quad (5)$$

where j = 1, 2, …, 6; u_1, u_2, …, u_6 denote u, v, ∂u/∂x, ∂u/∂y, ∂v/∂x, ∂v/∂y, respectively; and S_j = ∂S/∂u_j. In this paper, the Newton-Raphson iterative method is used to solve the system of partial differential equations (5).
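The correlation measure that the Newton-Raphson iteration minimizes can be written down directly from Eqs. (3)–(4). The following is a minimal NumPy sketch of these two quantities (the function names are ours, not from the paper):

```python
import numpy as np

def correlation_coefficient(f_subset, g_subset):
    """Cross-correlation coefficient C of Eq. (3):
    C = sum(f*g) / sqrt(sum(f^2) * sum(g^2)).
    C = 1 means fully correlated subsets, C = 0 uncorrelated ones."""
    num = np.sum(f_subset * g_subset)
    den = np.sqrt(np.sum(f_subset**2) * np.sum(g_subset**2))
    return num / den

def correlation_factor(f_subset, g_subset):
    """Correlation factor S = 1 - C of Eq. (4), the quantity minimised over
    the six parameters u, v, du/dx, du/dy, dv/dx, dv/dy."""
    return 1.0 - correlation_coefficient(f_subset, g_subset)
```

Note that C is invariant to a uniform intensity scaling of one subset, which is why it is preferred over a raw sum of products.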
3 Subpixel Reconstruction

The deformed coordinates (x_i^*, y_i^*) generally do not fall exactly on integer CCD pixel positions, so the gray value g(x_i^*, y_i^*) cannot be read directly from the sampled image. In this paper, bilinear interpolation is used to obtain g(x_i^*, y_i^*) [4].
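Bilinear interpolation of the deformed-image gray value can be sketched as follows (a minimal illustration; the function name is ours):

```python
import numpy as np

def bilinear(img, x, y):
    """Gray value g(x*, y*) at a non-integer position, interpolated
    bilinearly between the four surrounding CCD pixels (img[row, col])."""
    x0, y0 = int(np.floor(x)), int(np.floor(y))
    dx, dy = x - x0, y - y0
    return ((1 - dx) * (1 - dy) * img[y0, x0]
            + dx * (1 - dy) * img[y0, x0 + 1]
            + (1 - dx) * dy * img[y0 + 1, x0]
            + dx * dy * img[y0 + 1, x0 + 1])
```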
1748
X. Wang
The Newton iteration converges quadratically only when the initial values of the unknown variables are close to the actual values [2]; with a poorly chosen initial guess it may diverge, so the initial values must be estimated reasonably. Some researchers have used a correlation search method to estimate the initial values [3], but this requires considerable computer time. In many applications, the strain is small while the displacement, mainly rigid-body displacement, may be large. If the relative motion between subsets is kept within a very small value (not more than one pixel of displacement), the actual deformation of the specimen can generally be ignored and the initial displacement and strain values can be taken as zero. Based on the digital image correlation theory and analysis above, a program can be written to compute the correlation factor S and the parameters u_i by Newton iteration. In experiments on eliminating and suppressing errors with the digital image correlation method, the author analyzes the characteristics of speckle image noise and proposes multi-image averaging and statistical data processing to suppress the random error of the digital image correlation measurement method, improving its sensitivity: the displacement sensitivity is within 0.04 pixel, the sensitivity of normal strain is within 10⁻⁴, and the sensitivity of shear strain is within 16 × 10⁻⁵.
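When the rigid-body motion exceeds one pixel and the zero initial guess is unsafe, an integer-pixel correlation search can supply the starting values for the Newton iteration. A sketch of such a coarse search, our own and built on the correlation coefficient of Eq. (3):

```python
import numpy as np

def coarse_search(f_img, g_img, centre, half, search):
    """Integer-pixel search for the initial displacement guess (u0, v0)
    handed to the Newton-Raphson iteration. centre = (row, col)."""
    r, c = centre
    ref = f_img[r - half:r + half + 1, c - half:c + half + 1]
    best, best_uv = -1.0, (0, 0)
    for v in range(-search, search + 1):        # vertical trial displacement
        for u in range(-search, search + 1):    # horizontal trial displacement
            cur = g_img[r + v - half:r + v + half + 1,
                        c + u - half:c + u + half + 1]
            corr = np.sum(ref * cur) / np.sqrt(np.sum(ref**2) * np.sum(cur**2))
            if corr > best:
                best, best_uv = corr, (u, v)
    return best_uv
```

The search radius trades robustness against the "considerable computer time" noted above, which is why the zero initial guess is preferred whenever the inter-frame motion is sub-pixel.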
4 Experiment

In this paper, an experimental study of a model modified polyethylene (PE) symmetric tensile specimen with bilateral cracks is carried out using the digital image correlation method. The deformation around the crack is measured by digital image correlation, and six deformation components around the crack are obtained directly. The measured displacement derivatives are numerically integrated using the displacement-derivative form of the J-integral expression, and the J-integral value of the tested component under a given load is obtained.
5 Experimental Process

The specimen size is shown in Fig. 1. The elastic modulus of the PE specimen is 766.339 MPa and Poisson's ratio is 0.417. The load in this experiment is P = 44.15 N.
Fig. 1. Dimensions of the specimen (unit: mm)
6 Experimental Results and Analysis

The displacement field and strain field near the crack tip can be obtained directly from the original speckle patterns and the correlation iteration program. The test area is the scanning area in Fig. 1, with a size of 8 mm × 8 mm. The experimental results are shown in Fig. 2: the strains ε_x, ε_y, γ_xy in the scanning zone near the crack tip of specimen S31 of the modified polyethylene are given in Fig. 2a–c. The resolution of the image is 60 pixels·mm⁻¹.
Fig. 2. Contour plots for the strain field around the crack tip
7 Conclusions

In this paper, the basic principle of the digital image correlation iterative method is discussed, and an experimental technique of real-time subtraction iteration for assigning the zero initial value is proposed, which improves the speed and accuracy of the correlation iteration. The fracture behavior of polymer materials is studied for the first time with this method: the displacement and strain of a PE specimen with bilateral cracks under a given load are obtained, and the J-integral value of the PE specimen under this load is determined. This provides an effective means of studying the fracture of polymer materials.
References

1. Jiabai, R., Guanchang, J., Bingye, X.: A new digital speckle correlation method and its application. Acta Mechanica Sinica 26(5), 599–607 (1994)
2. Kang, F.: Numerical Calculation Method. National Defense Industry Press, Beijing (1978)
3. Bruck, H.A., McNeill, S.R., Sutton, M.A., et al.: Digital image correlation using Newton-Raphson method of partial differential correction. Exp. Mech. 29(3), 261–267 (1989)
4. Shuangzeng, Y.: Fracture and Damage Theory and its Application. Tsinghua University Press, Beijing (1992)
Hydraulic Driving System of Solar Collector Based on Deep Learning Yulin Wang(&) Shanghai Electric Power Generation Engineering Co., Shanghai 201100, China [email protected]
Abstract. Renewable energy refers to energy whose source is continuously replenished, such as hydropower, wind power, solar energy, biomass (biogas), and ocean tide energy. This paper introduces the operating conditions and design scheme of the hydraulic driving system of the collector of a solar photothermal power plant. With the strengthening of global environmental protection, researchers all over the world are committed to the utilization and study of renewable energy. For China, coal has long dominated the energy production structure, and the problems of energy shortage and environmental pollution have become increasingly prominent, becoming two major constraints on the sustainable development of China's economy. It is therefore advocated to optimize the energy structure and vigorously develop renewable energy such as solar energy.

Keywords: Clean energy · Solar energy · Concentrating type · Collector drive
1 Introduction

The concentrating solar photothermal power generation system includes a reflector, collector, torque tube, intermediate supports, end supports, collector moving system, and support arms. The collector is moved by a hydraulic driving unit with two oil cylinders that drive the support arm in the expected direction, so that the collector tracks the sun from sunrise to sunset. In its daily sun tracking, each collector is rotated around its rotation axis by the two hydraulic cylinders, with torque transmitted through the movable arm. The system is designed so that the collector can move freely from −22° below the eastern horizon up to the maximum of 180° above the 2.5° horizon [1, 2].
2 Operating Conditions of a Hydraulic Driving System for Heat Collector

During operation, the center of gravity of the collector coincides with its axis, so the load torque of the collector is very small when there is no wind. Taking a reference project as an example, the maximum operating wind speed is 14 m/s; however, the structure must be able to withstand the wind that occurs while the collector travels to its residence (stow) position. Therefore, the foundation must bear the force generated by a wind speed of 20 m/s behind the windbreak. The vertical height from the center of rotation of the collector to the ground is (3225 ± 2) mm; the distance between the collector tube and the center of rotation of the collector is 1551 mm; the length (east-west direction) and width (north-south direction) of the driving bracket are about 910 mm × 860 mm.

© The Editor(s) (if applicable) and The Author(s), under exclusive license to Springer Nature Singapore Pte Ltd. 2021
M. Atiquzzaman et al. (Eds.): BDCPS 2020, AISC 1303, pp. 1751–1755, 2021. https://doi.org/10.1007/978-981-33-4572-0_260
3 Design of a Hydraulic Driving System for Heat Collector

The operation process of the hydraulic driving system of the heat collector is shown in Fig. 1. The principle of the collector hydraulic drive system is shown in Fig. 2.
Fig. 1. Operation process
The hydraulic driving system of the heat collector is composed of an oil supply device, two hydraulic cylinders, and a control block assembly [3, 4].
1) Oil supply unit. System pressure: 14–19 MPa. The gear pump of the hydraulic system operates intermittently, and in daily operation energy is released by the accumulator. When the system pressure falls to 14 MPa, the pressure switch PS1 is triggered and the pump is started; when the system pressure rises to 19 MPa, the pressure switch PS2 is triggered and the pump is switched off. The oil tank is equipped with a heater: when the oil temperature is lower than 4 °C, the heater is started to warm the oil. The unit is also equipped with an air filter element, liquid level gauge, oil drain valve, safety valve, pump outlet filter element, pressure switches, accumulator, and pressure gauge.
2) Hydraulic cylinder. Stroke: L = 860.2 mm; cylinder bore: D = 150 mm, R = 75 mm; piston rod diameter: d = 80 mm, r = 40 mm. Maximum volume of the rodless cavity:

$$V_1 = \pi R^2 L = \pi \left(\frac{75}{1000}\right)^2 \cdot \frac{860.2}{1000} = 0.01519\ \mathrm{m}^3$$
Fig. 2. Schematic diagram of the collector hydraulic drive system
Maximum volume of the rod cavity:

$$V_2 = \pi (R^2 - r^2) L = \pi \left[\left(\frac{75}{1000}\right)^2 - \left(\frac{40}{1000}\right)^2\right] \cdot \frac{860.2}{1000} \approx 0.01\ \mathrm{m}^3$$

Total volume (total oil consumption for one full travel from −30° to 210°): V = V_1 + V_2 = 0.02523 m³ = 25.23 L. The time required for the maximum stroke in fast operation is 15 min, so the flow rate is Q = V/t = 25 230/15 = 1682 mL/min = 1.682 L/min. The minimum output force of the rodless cavity is F_1 = P · S = 14 × 0.0177 × 1 000 000 = 247 275 N; the minimum output force of the rod cavity is F_2 = P · S = 14 × 0.0126 × 1 000 000 = 176 939 N.
3) Control block. The control block assembly consists of a three-position four-way solenoid valve and a balance valve.
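The cylinder sizing above can be checked numerically. The sketch below is ours, not from the paper; it takes π ≈ 3.14, which is what the paper's arithmetic implies (it reproduces F₁ = 247 275 N and F₂ = 176 939 N exactly), and note that V₂ then evaluates to about 0.0109 m³, which the paper rounds to roughly 0.01 m³:

```python
# Worked check of the hydraulic cylinder sizing given in the text.
PI = 3.14                                  # value implied by the paper's force figures
L, R, r, P = 0.8602, 0.075, 0.040, 14e6    # stroke (m), bore radius, rod radius, pressure (Pa)

V1 = PI * R**2 * L                         # rodless-cavity volume, m^3
V2 = PI * (R**2 - r**2) * L                # rod-cavity volume, m^3
V = V1 + V2                                # oil for one full travel, m^3
Q = V * 1000 / 15                          # L/min for a 15-minute full stroke
F1 = P * PI * R**2                         # minimum output force, rodless side, N
F2 = P * PI * (R**2 - r**2)                # minimum output force, rod side, N
print(f"V1 = {V1:.5f} m^3, V2 = {V2:.5f} m^3, Q = {Q:.2f} L/min")
print(f"F1 = {F1:.0f} N, F2 = {F2:.0f} N")
```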
4 Concentrator

According to the optical principle, concentrators can be divided into refraction concentrators and reflection concentrators. In concentrating photovoltaic technology, the Fresnel lens is the main refractive lens; it is characterized by light weight and small thickness. For a concentrating system using a point-focused Fresnel lens, the
focusing ratio is usually more than 500. Using high-efficiency multi-junction gallium arsenide cells, the module efficiency can reach more than 25%. The reflective concentrator is mainly a mirror reflector, made into long strips or dishes according to the required concentration ratio. As concentration ratios increase, various new concentrating systems are constantly being introduced. Such a concentrating system usually adds a secondary concentrator under the primary concentrator to make the spectrum more uniform, reduce the light loss, and reduce the distance between the concentrator and the cell.
5 Prospect of Solar Thermal Power Generation Technology

Generally speaking, solar thermal power generation is in the initial stage of industrialization, and its high generation cost restricts the large-scale application of the technology. First, the low energy flow density of solar energy requires a large number of light-reflecting and heat-receiving devices, which account for about half of the investment in a power plant. Second, the generation efficiency of a solar thermal power system is low, with net generation efficiency below 15%; lower efficiency requires more concentrating heat collection devices, which increases the investment cost. Finally, solar thermal power generation cannot run 24 h a day, so heat storage devices must be added, further increasing the cost. Solar thermal power generation therefore needs policy support: the government should promote and approve normative engineering projects in areas with good solar radiation and give preferential policies to the relevant enterprises; to reduce production costs, it can encourage the development of supporting industries.
6 Conclusions

Through the above analysis, we have gained a certain understanding of the collector hydraulic drive system, which is of great significance for subsequent domestic research on such systems. Concentrating solar thermal power generation is a new type of power generation: solar radiation is gathered at a point or a line by reflectors, a large amount of heat is obtained, the heat is converted into high-temperature, high-pressure steam, and the steam then drives a turbine to generate electricity. Compared with photovoltaic power generation, solar thermal power generation technology is more energy-saving and environmentally friendly and uses a physical process for the energy conversion. At present, concentrating solar thermal power generation is mainly divided into trough, tower, and dish types. Because solar thermal power generation technology is still immature, there remains great potential for further optimizing power plant design and reducing the cost of generation.
References 1. Junyi, W., Xue, X.R.: Solar Energy Utilization Technology. Jindun Publishing House, Beijing (2008) 2. Xin, C., Haitao, F.: Development status of solar thermal power generation technology. Energy Environ. (1), 36–39 (2012) 3. Yongping, Y., Yong, Z., Rongrong, Z.: Tower solar-assisted coal-fired power generation system solar energy contribution. J. North China Electr. Power Univ. 43(3), 56–64 (2012) 4. Jingxiao, H., Yongping, Y., Hongjuan, H.: Progress in sensible heat storage technology of solar thermal power generation. Renew. Energy 32(7), 901–905 (2014)
Design and Simulation of a Welding Wire Feeding Control System Based on Genetic Algorithm Zeyin Wang(&) Gansu Mechanical and Electrical Vocational College, Tianshui 741001, Gansu, China [email protected]
Abstract. According to the requirement of uniform and stable wire feeding in the welding process, a speed and current double closed-loop speed control system is designed. The genetic algorithm is selected as the control strategy and combined with traditional PID control to realize on-line tuning of the PID control parameters. The simulation results show that PID tuning based on the genetic algorithm outperforms the conventional PID algorithm, and the wire feeding control system meets the requirement of constant speed.

Keywords: Double closed-loop · Genetic algorithm · PID parameter tuning
1 Introduction The wire feeding system is an important part of welding equipment. The accuracy and stability of wire feeding are directly related to the stability and quality of the welding process. At present, most of the welding wire feeding systems adopt single closed-loop control with negative voltage feedback or negative voltage feedback plus positive current feedback, which has the disadvantages of unstable wire feeding and poor dynamic characteristics. To solve this problem, a double closed-loop speed control system based on genetic algorithm PID self-tuning control is designed. The double closed-loop is composed of a speed regulator and current regulator, which are connected in series. This design can meet the requirements of the welding wire feeding system for fast start-up speed and strong load capacity and can reduce or eliminate the phenomenon of arc length jitter caused by unstable wire feeding speed. PID control has the advantages of simple structure, good stability, high reliability, easy engineering implementation, and strong robustness. It is the most widely used control strategy in the wire feeding control system. But the control effect depends on the parameter setting and optimization. The PID parameter tuning control strategy based on a genetic algorithm [3] is adopted to realize optimal control [1].
© The Editor(s) (if applicable) and The Author(s), under exclusive license to Springer Nature Singapore Pte Ltd. 2021 M. Atiquzzaman et al. (Eds.): BDCPS 2020, AISC 1303, pp. 1756–1760, 2021. https://doi.org/10.1007/978-981-33-4572-0_261
Design and Simulation of a Welding Wire Feeding Control System
1757
2 Design of a Wire Feeding Control System

A TMS320F2812 DSP microprocessor is selected as the control core, a micro-module is used as the PWM driving circuit, a Maxon DC motor serves as the wire feeding motor, a photoelectric encoder disk is selected as the speed detection element, and an LMD18200 is used as the armature current detection element. The wire feeding control system is shown in Fig. 1.
Fig. 1. Structure box of welding wire feeding system
The microcontroller outputs a PWM pulse which, after photoelectric isolation, is used as the driver input. The internal driving circuit adopts an H-bridge reversible PWM drive, and the output voltage is supplied directly to the armature to drive the wire feeding motor at a given speed. Since the output speed of the DC motor is directly proportional to its driving voltage, the duty cycle of the microcontroller's PWM output is adjusted through CCS2 software programming on the host computer, changing the output voltage of the driver and thereby regulating the speed [2]. Most wire feeding control systems adopt open-loop or single closed-loop control, which gives poor speed stability and anti-interference ability. Therefore, a speed and current double closed-loop speed control system is adopted here. The detecting elements feed their measurements back to the microcontroller, which, through an intelligent control algorithm [4], produces a PWM pulse with an appropriate duty cycle to compensate for speed changes caused by voltage or resistance variations, ensuring that the wire feeding motor maintains a stable speed.
3 PID Tuning Principle Based on Genetic Algorithm

The genetic algorithm (GA) is a parallel global optimization algorithm that simulates natural evolution and biological genetic mechanisms. It includes three basic elements: parameter coding, a fitness function, and genetic operations.

Parameter Coding
The GA does not operate on the parameters to be optimized directly; the genetic operations act on an encoding of the parameters. The parameters to be optimized must therefore be encoded into
code suitable for genetic operations. The three decision variables K_P, K_i, and K_D are each represented by a 10-bit binary code string.

Fitness Function
The genetic algorithm determines the direction of optimization according to the fitness value of each individual chromosome. To obtain satisfactory dynamic characteristics of speed regulation, the time integral of the absolute error is used as the minimum objective function; to prevent excessive control action, the square of the controller output is added to the objective function:

$$J = \int_0^\infty \left( w_1 |e(t)| + w_2 u^2(t) \right) dt + w_3 t_1 \quad (1)$$
At the same time, to avoid overshoot, the overshoot is introduced as one of the optimization indexes:

$$J = \int_0^\infty \left( w_1 |e(t)| + w_2 u^2(t) + w_4 |e_y(t)| \right) dt + w_3 t_1 \quad (2)$$
where e(t) is the system error, u(t) is the controller output, t_1 is the rise time, and w_1, w_2, w_3, w_4 are weights with w_4 ≫ w_1.

Genetic Operations [3]
The basic operations of the genetic algorithm are replication, crossover, and mutation. To maintain the diversity of the population, fitness-proportionate replication is used: the fitness value of each coding string is obtained through the fitness function and its replication probability is calculated accordingly; single-point crossover is performed with crossover probability P_c; finally, gene loci are selected with mutation probability P_m and flipped from 1 to 0 or from 0 to 1.
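The coding and the three genetic operations described above can be sketched as follows. The per-gain search ranges are our assumptions for illustration, not values taken from the paper:

```python
import random

BITS = 10                        # 10-bit binary code per decision variable
RANGES = [(0.0, 20.0),           # assumed search range for Kp
          (0.0, 100.0),          # assumed search range for Ki
          (0.0, 1.0)]            # assumed search range for Kd

def decode(chrom):
    """Map a 30-bit string to (Kp, Ki, Kd)."""
    gains = []
    for k, (lo, hi) in enumerate(RANGES):
        word = chrom[k * BITS:(k + 1) * BITS]
        frac = int(word, 2) / (2 ** BITS - 1)
        gains.append(lo + frac * (hi - lo))
    return tuple(gains)

def replicate(pop, fitness):
    """Fitness-proportionate (roulette-wheel) replication."""
    return random.choices(pop, weights=fitness, k=len(pop))

def crossover(a, b, pc=0.6):
    """Single-point crossover with probability pc."""
    if random.random() < pc:
        cut = random.randrange(1, len(a))
        return a[:cut] + b[cut:], b[:cut] + a[cut:]
    return a, b

def mutate(chrom, pm=0.033):
    """Flip each gene locus 1<->0 with probability pm."""
    return ''.join('10'[int(c)] if random.random() < pm else c for c in chrom)
```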
4 Experimental Simulation

The simulation model of the wire feeding motor speed regulation system is established in MATLAB, and the GA-based PI control strategy is simulated for the current regulator and the speed regulator, respectively, as shown in Fig. 2. The main parameters of the system model are: T_L (electromagnetic time constant) = 0.05 s; T_M (electromechanical time constant) = 0.158 s; R (total armature resistance) = 1.5 Ω; α (voltage feedback coefficient) = 0.0067; β (current feedback coefficient) = 0.05; K is a constant.
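A rough feel for the speed loop can be obtained by simulating a PI controller around a DC motor with the time constants above. This Euler-integration sketch is ours: the motor constant K is set to 1 as an assumption, and the inner current loop and feedback coefficients are omitted for brevity, so the numbers are illustrative only and will not match the paper's figures:

```python
import numpy as np

# Motor data from the text; K = 1 is our assumption.
Tl, Tm, R, K = 0.05, 0.158, 1.5, 1.0
L_a, J_m = Tl * R, Tm * K * K / R        # armature inductance (H), inertia (kg*m^2)

def step_response(kp, ki, n_ref=1.0, t_end=2.0, dt=1e-4):
    """Forward-Euler simulation of a PI speed loop around the DC motor model."""
    i = w = integ = 0.0
    trace = []
    for _ in range(int(t_end / dt)):
        e = n_ref - w
        integ += e * dt
        u = kp * e + ki * integ               # PI control voltage
        i += (u - R * i - K * w) / L_a * dt   # armature circuit
        w += K * i / J_m * dt                 # torque balance, no load torque
        trace.append(w)
    return np.array(trace)

w = step_response(kp=5.3, ki=60.97)           # the GA-tuned speed-loop gains
print(f"final speed = {w[-1]:.3f}, overshoot = {100 * (w.max() - 1):.1f}%")
```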
Fig. 2. Dynamic simulation of speed current double closed-loop wire feeding system
4.1 Simulation Experiment of Step Response of the Current Loop
The number of samples is size = 30, and the crossover and mutation probabilities are P_C = 0.60 and P_M = 0.033. Taking w_1 = 0.999, w_2 = 0.001, w_3 = 2.0, w_4 = 100, after 100 generations of evolution the optimized parameters are K_P = 0.32 and K_i = 25, with performance index J = 24.9812. The step response is shown in Figs. 3 and 4.
Fig. 3. The step response curve of the current loop (a)
Fig. 4. The step response curve of the current loop (b)
Figure 3 shows the step response of the unoptimized PI control system: the response curve overshoots slightly (4.4403%) and the peak time (0.00209 s) is very short. Figure 4 shows the step response of the PI control system optimized by the genetic algorithm: there is no overshoot and the response settles within 0.04 s. The step response of the latter is thus better than that of the former.

4.2 Simulation Experiment on Step Response of the Speed Loop
The number of samples is 30, and the crossover and mutation probabilities are P_c = 0.9 and P_m = 0.033, respectively. Taking w_1 = 0.999, w_2 = 0.001, w_3 = 2.0, w_4 = 100, after 100 generations of evolution the optimized parameters are K_p = 5.3 and K_i = 60.97, with performance index J = 23.9936. The PI step response is shown in Fig. 5.
Fig. 5. The step response of the speed loop
It can be seen that the step response of the unoptimized PI control system has an overshoot of 67.7645% and a peak time of 0.0258 s, while the PI control system optimized by the genetic algorithm has an overshoot of 34.1602% and a peak time of 0.0897 s: the latter's overshoot is much smaller, at the cost of a longer peak time, which meets the system requirements.
5 Conclusions In this paper, a double closed-loop wire feeding control system is designed, and the basic principle of a genetic algorithm is introduced. Based on the conventional PI control algorithm, the PI tuning based on a genetic algorithm is designed and the system simulation model is established. The simulation results show that the wire feeding control system has good speed stability, and the speed regulation performance is better than the traditional PI control, which can realize constant speed wire feeding.
References

1. Yan, L., Jiluan, P., Hua, Z.: Control system of MIG/MAG welding wire feeder. Acta Weld. Sinica 12(1), 59–63 (1991)
2. Jirong, Q., et al.: Modern DC Servo Control Technology and System Design. China Machine Press, Beijing (2002)
3. Xiaoping, W., Liming, C.: Genetic Algorithm: Theory, Application, and Software Implementation. Xi'an Jiaotong University Press, Xi'an (2002)
4. Yonghua, T., Yixin, Y., Lusheng, G.: New PID Control and its Application. China Machine Press, Beijing (1998)
Color Transfer Algorithm of Interior Design Based on Topological Information Region Matching Quan Yuan(&) Dalian Institute of Art and Design, Dalian 116600, China [email protected]
Abstract. Aiming at the characteristics of large color differences and rich colors among interior scene regions, an interior design color transfer algorithm that uses comprehensive topological information to guide region matching is proposed. First, the interior design image is segmented and the topological information of each region is calculated; the region matching relationship is then determined according to the topological structure information, improving the accuracy of color transfer. Second, color transfer is carried out between matched regions, and color adjustment is applied to regions with no corresponding input color, improving the completeness of the color transfer. Finally, a color harmony algorithm is used to reduce the influence of the noise generated during color transfer on the result. Experimental results show that the algorithm preserves the color richness of the image globally and obtains better matching relationships and color adjustment results locally, yielding better color transfer results.

Keywords: Color transfer · Interior design · Regional topology · Matching vector · Color harmony
1 Introduction

Color transfer is a research direction with important application value in image processing and computer vision. Through methods based on statistics or color models, the colors of a target image are transferred to an original image, so that the color-transferred image has color statistics similar to those of the target color image; the same color impression is thus conveyed to the original image. Existing research on color transfer can be divided into two categories: global color transfer, and local color transfer based on region segmentation [1]. In local color transfer algorithms, the mean brightness of each region is used as the matching index, so that regions with similar brightness are matched; but brightness alone as a matching index cannot yield a reasonable matching result, and when the numbers of regions differ, there is no reasonable color adjustment method for the regions left without a match. In this paper, according to the characteristics of large color differences and relatively rich colors among the regions of an indoor scene, and considering the regional functional

© The Editor(s) (if applicable) and The Author(s), under exclusive license to Springer Nature Singapore Pte Ltd. 2021
M. Atiquzzaman et al. (Eds.): BDCPS 2020, AISC 1303, pp. 1761–1765, 2021. https://doi.org/10.1007/978-981-33-4572-0_262
attributes, this paper proposes a color transfer algorithm for interior design guided by topological information. The algorithm overcomes both the color loss suffered by global algorithms when transferring between color-rich images and the poor region matching of local algorithms that use only mean brightness as the matching index, thereby improving the final effect of interior design color transfer [2, 3].
2 The Algorithm Flow of This Paper

The algorithm steps are as follows:
Input: original image and target color image.
Output: color transfer image.
Step 1. Segment the input images into regions and analyze them. Topological information is used with a matching strategy to determine the region matching correspondence. If the matching result is unsatisfactory, user interaction can be used to fix the initial matching region, which constrains the subsequent matching process according to topological adjacency and improves the accuracy of color transfer.
Step 2. Perform color transfer between the matched corresponding regions, and apply color adjustment to regions without a corresponding match, improving the completeness of the color transfer.
Step 3. Use color harmony to suppress the noise introduced by color transfer and the color disharmony of small unadjusted regions, enhancing the harmony of the color transfer result.
3 Region Matching Based on Topological Synthesis Information

Because of the large color differences and rich colors among interior scene regions, a global color transfer algorithm can obtain only relatively monotonous results [2, 3]. In this paper, the interior design image is segmented into regions, the topological information of the regions is analyzed, and region matching is guided by it to obtain a more reasonable matching correspondence. For region segmentation we use the GrabCut algorithm proposed by Rother et al., which establishes a Gaussian mixture model from the texture and boundary information in the image and needs only a small amount of user interaction to obtain good segmentation results. The relative color relationships between regions are the main factor affecting the impression of an interior design. This paper therefore considers the topological relationships between color regions and studies them by constructing a
topological connection diagram between regions. The topological information vector V of each region is composed of the proportion N of the number of adjacent regions, the area proportion R, the relative position L of the region with respect to the image center, and the relative color difference S between adjacent regions. The topological matching vector of the i-th region is written V_i = (N_i, R_i, ω_L L_i, ω_S S_i), where

$$N_i = n_i / n_t, \qquad R_i = a_i / (I_w I_h), \qquad L_i = l_i / L_p, \qquad S_i = \sum_{j \in w} R_j \, D(c_i, c_j) / 256 \quad (1)$$
Here n_i is the number of regions adjacent to the i-th region, n_t is the total number of image regions, a_i is the area of the i-th region, I_w and I_h are the image width and height, l_i is the average distance from the color points of the i-th region to the image center, L_p is the image diagonal length, and D(c_i, c_j) is the difference between the color means of the i-th and j-th regions in Lab color space, the j-th region belonging to the subset w of regions adjacent to region i; ω_L and ω_S are correction coefficients (ω_L = 0.2, ω_S = 0.2). To obtain the matching relation F, the energy function

$$F = \min \sum_i \left( V_i^s - V_j^t \right)^2 \quad (2)$$

is minimized, where the Euclidean distance is used to measure the distance between two weighted matching vectors, V_i^s is the matching vector of the i-th region of the original image, and V_j^t is the matching vector of the j-th region of the target color image.
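A minimal sketch of Eqs. (1)–(2) follows. All function names are ours, and the greedy nearest-vector matching is a simplification of the full energy minimization, not the paper's exact procedure:

```python
import numpy as np

def topo_vector(n_adj, n_total, area, img_w, img_h, l_centre, l_diag,
                adj_areas, colour_diffs, w_L=0.2, w_S=0.2):
    """Topological matching vector V_i = (N_i, R_i, w_L*L_i, w_S*S_i) of Eq. (1).
    adj_areas are the area proportions R_j of the adjacent regions and
    colour_diffs the Lab colour-mean differences D(c_i, c_j)."""
    N = n_adj / n_total
    R = area / (img_w * img_h)
    L = l_centre / l_diag
    S = sum(Rj * d / 256.0 for Rj, d in zip(adj_areas, colour_diffs))
    return np.array([N, R, w_L * L, w_S * S])

def match_regions(src_vecs, tgt_vecs):
    """Match each source region to the target region whose vector is nearest
    in Euclidean distance (a greedy stand-in for minimising Eq. (2))."""
    return [int(np.argmin([np.linalg.norm(vs - vt) for vt in tgt_vecs]))
            for vs in src_vecs]
```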
4 Color Transfer and Color Adjustment in Unmatched Regions

After the matching correspondence is obtained, two situations arise according to the number of segmented regions.
1) The number of regions in the target color image is greater than or equal to the number of regions in the original image. According to the matching correspondence, the pixels of each region are color-transferred in Lab color space using the color transfer function from the literature. Suppose region i of the original image matches region j of the target image; then the channel values (l'_z, a'_z, b'_z) after color transfer within the region are
$$l'_z = (l_z - \mu_s^l)\, \sigma_t^l / \sigma_s^l + \mu_t^l$$
$$a'_z = (a_z - \mu_s^a)\, \sigma_t^a / \sigma_s^a + \mu_t^a$$
$$b'_z = (b_z - \mu_s^b)\, \sigma_t^b / \sigma_s^b + \mu_t^b \quad (3)$$

where μ and σ denote the mean and standard deviation of the corresponding channel over the source region (subscript s) and the matched target region (subscript t).
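The per-region transfer of Eq. (3) is the classic mean/standard-deviation mapping and can be sketched as follows (a minimal NumPy illustration of our own; the zero-spread guard is our addition):

```python
import numpy as np

def region_transfer(src_lab, tgt_lab):
    """Eq. (3): shift each Lab channel of the source region to the target
    region's mean and rescale its spread to the target's standard deviation."""
    out = np.empty_like(src_lab, dtype=float)
    for ch in range(3):
        s, t = src_lab[..., ch], tgt_lab[..., ch]
        sigma_s = s.std() or 1.0   # guard against flat (zero-spread) regions
        out[..., ch] = (s - s.mean()) * (t.std() / sigma_s) + t.mean()
    return out
```

After the transfer, the channel statistics of the source region equal those of its matched target region, which is exactly the "same color impression" goal stated in the introduction.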
5 Optimization of Color Transfer Results Based on Color Harmony

By computing and selecting an appropriate color harmony template and a suitable rotation angle for it, the algorithm makes the image colors relatively harmonious while changing them only slightly. The steps are as follows:
Step 1. Convert the image to HSV color space, extract the color ring diagram of the image in the hue (H) channel, and evaluate

$$F(X, (m, \alpha)) = \sum_{p \in X} \left\| H(p) - E_{T_m(\alpha)}(p) \right\| \cdot S(p) \quad (4)$$
The color disharmony coefficient of each template under the corresponding rotation angle is calculated to solve the value of s. When the discordance coefficient is the minimum, it is the appropriate template rotation angle. In formula to (4), X represents all color points in the image; H (P) represents the color points with hue value p; Er ðtÞ ðpÞ shows the boundary hue value of the minimum arc distance from P when the rotation angle is r in the m template; S (p) is the area proportion of the color points whose hue value is p; II | is calculated by the arc distance on the hue ring. If the hue value p is inside the color harmony template, then HðpÞ Er ðtÞ ðpÞ was 0. Step 2. After calculating the appropriate rotation angle of each template, the template Tm with the lowest discordance coefficient is selected as the target template, and the color harmony template is rotated according to the optimal rotation angle of the template. To improve the harmony of the image, it is necessary to transfer the hue value p outside the color harmony template into the color template. H'ðpÞ ¼ CðpÞ ¼
$$H'(p) = C(p) + \frac{w}{2}\left(1 - G_{\sigma}\big(\left\| H(p) - C(p) \right\|\big)\right) \qquad (5)$$
Among them, C(p) is the weighted central hue value of the corresponding sector of the color template; $G_{\sigma}$ represents a Gaussian distribution with standard deviation $\sigma$; w is the length of the corresponding color template interval; generally, $\sigma$ is w/2. To avoid a simple linear shift of hue, the algorithm uses the Gaussian distribution to adjust the hue.
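Steps 1 and 2 can be illustrated with a small sketch of Eqs. (4) and (5). The representation of a template as (center, width) sectors in degrees, the helper names, and the omission of the shift direction in `shift_hue` are our simplifications, not the paper's specification:

```python
import math

def arc_dist(h1, h2):
    """Shortest arc distance between two hues on the [0, 360) hue ring."""
    d = abs(h1 - h2) % 360.0
    return min(d, 360.0 - d)

def sector_distance(h, center, width):
    """Arc distance from hue h to the border of a template sector centered
    at `center` with angular width `width`; 0 if h lies inside the sector."""
    return max(0.0, arc_dist(h, center) - width / 2.0)

def disharmony(hues, areas, sectors):
    """F(X,(m,r)) of Eq. (4): area-weighted arc distance of each hue to the
    nearest border of the (already rotated) template sectors."""
    return sum(min(sector_distance(h, c, w) for c, w in sectors) * s
               for h, s in zip(hues, areas))

def shift_hue(h, center, width):
    """Gaussian hue shift of Eq. (5) with sigma = w/2; hues near the sector
    center move less than hues far from it (shift sign omitted here)."""
    sigma = width / 2.0
    d = arc_dist(h, center)
    g = math.exp(-d * d / (2.0 * sigma * sigma))
    return (center + (width / 2.0) * (1.0 - g)) % 360.0
```

Minimizing `disharmony` over the template set and rotation angles yields the target template of Step 2.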
Color Transfer Algorithm of Interior Design
1765
6 Conclusions

Because of the strong regional characteristics of interior design, this paper proposes an interior design color transfer algorithm based on topological-information region matching. Compared with global algorithms, this algorithm better preserves color richness. 1) According to the obvious color differences between interior design areas, the topological information of each region is analyzed through region division to guide region matching and improve the accuracy of the color transfer algorithm; 2) To improve the integrity of the color transfer algorithm, a color adjustment method based on maintaining the relativity of adjacent regions' colors is proposed; 3) Because of the noise produced in the process of color transfer, and because the colors of small areas in the original image are difficult to adjust, a color harmony algorithm is applied to optimize and adjust the colors of these regions, enhancing the harmony of the color transfer result. The disadvantage of this algorithm is that it is limited by the underlying color transfer: detail transfer is not accurate enough, especially for colors in textured areas, which is a problem to be solved in future work.
References
1. Reinhard, E., Ashikhmin, M., Gooch, B., et al.: Color transfer between images. IEEE Comput. Graph. Appl. 21(5), 34–41 (2001)
2. Wang, S., Xu, G., Chen, Q., et al.: Color transfer with multiple parameters by combining the scaling and mean values. J. Image Graph. 18(11), 1536–1541 (2013). (in Chinese)
3. Guoying, Z., Shiming, X., Hua, L.: Application of higher moments in color transfer between images. J. Comput.-Aided Des. Comput. Graph. 16(1), 62–66 (2004). (in Chinese)
System Construction Model of Legal Service Evaluation Platform Based on Bayesian Algorithm Yanhong Wu(&) Basic Teaching Department, Shandong Huayu University of Technology, Shandong 253034, China [email protected]
Abstract. First, this paper constructs a public legal service platform suitable for rural residents in Dezhou. The platform is mainly composed of an online platform, an offline platform and service personnel, and its purpose is to popularize legal knowledge among rural residents in various forms. Through the operation of the platform, the law can be brought into thousands of households and ordinary families, so that villagers know and understand the law and avoid breaking it unknowingly, thus enhancing their legal awareness. Secondly, a reasonable evaluation system is established to evaluate the operating effect of the service platform, which is then continuously improved according to the evaluation results.

Keywords: Legal consciousness · Legal service platform · Evaluation system
1 Introduction

Legal consciousness is a special form of social consciousness [1–4]. In recent years, with the deepening of the country's law-popularization work, more and more such activities have been carried out in rural areas, and in the past two years the country has put forward the policy of "one village, one legal adviser". The implementation of these measures has, to a large extent, improved rural residents' knowledge and understanding of the law. However, owing to the low education level of some residents, outdated ideas, the lack of due authority of the law in rural areas, and the failure of some leading rural cadres to play an exemplary role, many rural residents remain weakly law-conscious. This weak legal consciousness [5] has become the bottleneck that restricts the improvement of residents' legal quality and affects the development and stability of the countryside. This paper aims to build a public legal service platform in line with the legal awareness of rural residents in Dezhou, so that villagers have legal recourse when facing disputes. At the same time, it lets villagers learn all kinds of legal knowledge and avoid breaking the law unknowingly, building a good legal environment for rural revitalization.
© The Editor(s) (if applicable) and The Author(s), under exclusive license to Springer Nature Singapore Pte Ltd. 2021 M. Atiquzzaman et al. (Eds.): BDCPS 2020, AISC 1303, pp. 1766–1771, 2021. https://doi.org/10.1007/978-981-33-4572-0_263
2 Establishment of “Online+Offline” Public Legal Service Platform for Rural Residents
3 The Construction of a Multi-level Comprehensive Evaluation System

The fuzzy comprehensive evaluation method is a comprehensive evaluation method based on fuzzy mathematics. Using the membership theory of fuzzy mathematics, it transforms qualitative evaluation into quantitative evaluation; that is, it makes an overall evaluation of things or objects restricted by various factors. It has the characteristics of clear results and strong systematicness, handles fuzzy, difficult-to-quantify problems well, and is suitable for solving all kinds of non-deterministic problems. Since many of the indicators used to judge the quality of a public legal service platform are difficult to quantify, the fuzzy comprehensive evaluation method is selected to evaluate it, and the analytic hierarchy process is selected to obtain the weights in order to make them more objective and reasonable. The establishment of the public legal service platform aims to build a platform for rural residents to understand laws and regulations, learn legal knowledge and solve legal disputes. The rationality of the legal service platform directly affects residents' legal literacy level and their attitude toward solving legal disputes. Therefore, it is particularly
important to establish and perfect the evaluation mechanism of rural residents' legal services under the background of the "rule of law". Firstly, the evaluation index system of public legal services for rural residents is considered. Based on the conclusions of the previous survey report and on in-depth interviews with residents in rural areas, the following evaluation system for the rural legal service platform is established.

(1) Determination of evaluation index weights based on the analytic hierarchy process. The rural public legal service platform is set as the target level; the three aspects of use experience, technology experience and information experience are set as the criterion level; and eight aspects of public satisfaction are set as the scheme level. The analytic hierarchy process (AHP) is used to calculate the weights of the evaluation indicators at all levels: a judgment matrix is constructed by pairwise comparison of the factors on the same level with respect to their influence on the level above.

The first step is to determine the index weights of the first-level factors. The judgment matrix is

$$B = \begin{pmatrix} 1 & 1/2 & 1/4 \\ 2 & 1 & 1/2 \\ 4 & 2 & 1 \end{pmatrix}$$

The maximum eigenvalue $\lambda_{\max} = 3$ is calculated with MATLAB. The consistency index is calculated by the formula $CI = (\lambda_{\max} - n)/(n - 1)$; with the random consistency index $RI = 0.90$ obtained by looking up the table, the consistency ratio index is $CR = CI/RI = 0 < 0.10$, so the consistency of the judgment matrix can be considered acceptable, and the weight vector $A = (0.1208, 0.5352, 0.3440)$ is calculated through the maximum eigenvalue.

The second step is to determine the index weights of the second-level indicators. The judgment matrices are

$$A_1 = \begin{pmatrix} 1 & 1/2 & 1/4 \\ 2 & 1 & 1/2 \\ 4 & 2 & 1 \end{pmatrix}, \quad A_2 = \begin{pmatrix} 1 & 1/3 \\ 3 & 1 \end{pmatrix}, \quad A_3 = \begin{pmatrix} 1 & 1/2 & 1/3 \\ 2 & 1 & 1/2 \\ 3 & 2 & 1 \end{pmatrix}$$

The maximum eigenvalues $\lambda_{\max} = 3$, $\lambda_{\max} = 2$ and $\lambda_{\max} = 3.0092$ are calculated with MATLAB, and the formula $CI = (\lambda_{\max} - n)/(n - 1)$ gives the consistency indexes $CI_1 = 0$, $CI_2 = 0$ and $CI_3 = 0.0046$.

With the random consistency index $RI = 0.90$ obtained by looking up the table, the consistency ratio indexes are $CR_1 = 0 < 0.10$, $CR_2 = 0 < 0.10$ and $CR_3 = 0.0046/0.90 = 0.0051 < 0.10$, so the consistency of each judgment matrix can be considered acceptable. The second-level weight vectors $B_1 = (0.1429, 0.2857, 0.5714)$, $B_2 = (0.2495, 0.7505)$ and $B_3 = (0.1615, 0.3089, 0.5296)$ are then calculated through the maximum eigenvalues.
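The AHP computation above (eigenvector weights, CI, CR) can be reproduced with a short script. This is a sketch: the function name and the use of NumPy in place of the paper's MATLAB are our choices, and RI = 0.90 is the table value the paper uses:

```python
import numpy as np

def ahp_weights(A, ri=0.90):
    """Eigenvector method of AHP: returns (weights, lambda_max, CI, CR)
    for a pairwise judgment matrix A. RI defaults to the paper's 0.90."""
    A = np.asarray(A, dtype=float)
    n = A.shape[0]
    vals, vecs = np.linalg.eig(A)
    k = int(np.argmax(vals.real))          # principal eigenvalue
    lam = float(vals.real[k])
    w = np.abs(vecs[:, k].real)
    w = w / w.sum()                        # normalize to a weight vector
    ci = (lam - n) / (n - 1) if n > 1 else 0.0
    cr = ci / ri if ri > 0 else 0.0
    return w, lam, ci, cr
```

For the consistent matrix $A_1$ this yields weights (1/7, 2/7, 4/7) with $\lambda_{\max} = 3$ and CR = 0; for $A_3$ it yields $\lambda_{\max} \approx 3.0092$, matching the values above.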
(2) Fuzzy comprehensive evaluation of the rural legal service platform. Based on the development of the rural legal service platform in Dezhou, this paper conducted a democratic comprehensive evaluation of the service platform, taking 100 village committee members and department leaders as the evaluation group to score the platform's indicators; the values are shown in Table 1.

Table 1. Evaluation panel's comprehensive evaluation of the rural legal service platform

| Factor | Very satisfied | Satisfied | Quite satisfied | Pass | Not satisfied |
|---|---|---|---|---|---|
| Public satisfaction | 0.1 | 0.4 | 0.5 | 0 | 0 |
| Functional availability | 0.2 | 0.2 | 0.6 | 0 | 0 |
| Acceptance of general law | 0.1 | 0.5 | 0.3 | 0.1 | 0 |
| Ease of operation | 0.1 | 0.4 | 0.3 | 0.2 | 0 |
| Technical perfection | 0 | 0.2 | 0.4 | 0.4 | 0 |
| Timeliness of information | 0 | 0.3 | 0.4 | 0.2 | 0.1 |
| Information timeliness | 0.2 | 0.3 | 0.4 | 0.1 | 0 |
| Information richness | 0.3 | 0.3 | 0.4 | 0 | 0 |
The evaluation matrix of each layer is as follows:

$$R_1 = \begin{pmatrix} 0.1 & 0.4 & 0.5 & 0 & 0 \\ 0.2 & 0.2 & 0.6 & 0 & 0 \\ 0.1 & 0.5 & 0.3 & 0.1 & 0 \end{pmatrix}, \quad R_2 = \begin{pmatrix} 0.1 & 0.4 & 0.3 & 0.2 & 0 \\ 0 & 0.2 & 0.4 & 0.4 & 0 \end{pmatrix}, \quad R_3 = \begin{pmatrix} 0 & 0.3 & 0.4 & 0.2 & 0.1 \\ 0.2 & 0.3 & 0.4 & 0.1 & 0 \\ 0.3 & 0.3 & 0.4 & 0 & 0 \end{pmatrix}$$
According to the evaluation results, the Zadeh (max–min) operator is used to compose the weight vector B with the evaluation matrix R, giving the comprehensive evaluation vector $X = B \circ R$. Substituting the weight vectors and evaluation matrices of the second-level factors into the formula gives:

$$X_1 = B_1 \circ R_1 = (0.1429, 0.2857, 0.5714) \circ \begin{pmatrix} 0.1 & 0.4 & 0.5 & 0 & 0 \\ 0.2 & 0.2 & 0.6 & 0 & 0 \\ 0.1 & 0.5 & 0.3 & 0.1 & 0 \end{pmatrix} = (0.1429, 0.2857, 0.5, 0.1429, 0.1429)$$
$$X_2 = B_2 \circ R_2 = (0.2495, 0.7505) \circ \begin{pmatrix} 0.1 & 0.4 & 0.3 & 0.2 & 0 \\ 0 & 0.2 & 0.4 & 0.4 & 0 \end{pmatrix} = (0.2429, 0.4, 0.3, 0.1429, 0.1429)$$
$$X_3 = B_3 \circ R_3 = (0.1615, 0.3089, 0.5296) \circ \begin{pmatrix} 0 & 0.3 & 0.4 & 0.2 & 0.1 \\ 0.2 & 0.3 & 0.4 & 0.1 & 0 \\ 0.3 & 0.3 & 0.4 & 0 & 0 \end{pmatrix} = (0.1615, 0.3, 0.4, 0.2, 0.1615)$$

The fuzzy comprehensive evaluation matrix of the rural legal service platform can then be obtained as:
$$R = \begin{pmatrix} 0.1429 & 0.2857 & 0.5 & 0.1429 & 0.1429 \\ 0.2495 & 0.4 & 0.3 & 0.2459 & 0.2495 \\ 0.1615 & 0.3 & 0.4 & 0.2 & 0.1615 \end{pmatrix}$$
(3) Comprehensive quality evaluation results of the rural legal service platform.

The vector of the comprehensive quality evaluation of the rural legal service platform is

$$X = A \circ R = (0.1208, 0.5352, 0.3440) \circ \begin{pmatrix} 0.1429 & 0.2857 & 0.5 & 0.1429 & 0.1429 \\ 0.2495 & 0.4 & 0.3 & 0.2495 & 0.2495 \\ 0.1615 & 0.3 & 0.4 & 0.2 & 0.1615 \end{pmatrix} = (0.1429, 0.2857, 0.4, 0.1429, 0.1429)$$

According to the principle of maximum membership, the final comprehensive quality evaluation result of the rural legal service platform is "quite satisfied". This model evaluates the established legal service platform relatively objectively: through the evaluation, an objective understanding of the platform is obtained, and targeted improvements can be made according to the results. Since the overall result is only "quite satisfied", all three criterion-level aspects still leave room for improvement; the aspects users are dissatisfied with should be identified and improved so that the platform matches residents' demands better and better.
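The Zadeh composition and the maximum-membership decision used in this section can be sketched as follows. This is a generic implementation of the operator (function names are ours); note that, because of rounding in the printed matrices, recomputation may differ slightly from the vectors shown above:

```python
import numpy as np

def max_min_compose(B, R):
    """Zadeh (max-min) composition X = B ∘ R: X_k = max_i min(B_i, R_ik)."""
    B = np.asarray(B, dtype=float)
    R = np.asarray(R, dtype=float)
    return np.minimum(B[:, None], R).max(axis=0)

def grade(X, levels):
    """Principle of maximum membership: pick the level of the largest X_k."""
    return levels[int(np.argmax(X))]
```

Applying `grade` to the final vector X above with the five satisfaction levels of Table 1 returns "quite satisfied", matching the conclusion of the section.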
4 Conclusion

On the basis of an in-depth investigation, this paper builds a public legal service platform suited to the rural residents of the city. Building on the original legal service platform, it makes full use of big data, intelligent technology and other modern information means to deliver legal knowledge to rural residents in a timely manner and to help them resolve all kinds of legal issues and disputes. At the same time, in order to test the service quality of the platform, a multi-level comprehensive evaluation system is established in this paper. Through the evaluation model, the service platform is evaluated comprehensively in a timely manner, and its deficiencies are identified from the evaluation results so that it can be improved.
References 1. Encyclopedia of China, Law volume 2. Zongling, S.: Jurisprudence. Higher Education Press, Beijing (1994) 3. Jianxin, M.: On the cultivation path of farmers’ modern legal consciousness. Dalian J. Cadres (1) 4. Chunyan, C., Chunlei, L.: A brief analysis of farmers’ legal consciousness. Soc. Law (8) (2013) 5. Zeyi, M.: Analysis of the current situation and countermeasures of farmers’ legal consciousness. Reform Dev. (6) (2012)
Application of Random Simulation Algorithm in Physical Education Evaluation Huang Hong(&) Jingdezhen Ceramic University, Jingdezhen 333403, China [email protected]
Abstract. With the development of computer network technology, network education has emerged. Traditional physical education is highly practical and is an interactive activity between teachers and students; it also relies heavily on demonstration and imitation. Given these characteristics, the application of multimedia network teaching in physical education is inevitable. Multimedia network teaching breaks through the traditional physical education teaching mode and remedies its insufficiencies.

Keywords: Sports · Multimedia teaching · Sports teaching · Network teaching · Multimedia network
1 Characteristics of Physical Education in Colleges and Universities

Physical education in colleges and universities, like other subjects, is a planned, purposeful and organized process of imparting knowledge to students and cultivating their skills. Of course, as an independent discipline, physical education also differs obviously from other teaching activities. Other disciplines pay more attention to cultivating students' intelligence and psychology, while physical education pays more attention to cultivating students' physical strength and skills: it teaches students the theoretical knowledge and technical skills of sport. Physical education teaching is usually carried out in a gymnasium, on a playground or in other outdoor places; when necessary, professional equipment is needed, and teaching is also affected and restricted by the environment. So in the process of physical education teaching, its teaching organization is more complex and diverse.
2 Problems in Traditional Physical Education Teaching

2.1 The Teaching Scope Is Relatively Narrow
In school physical education, traditional physical education mainly offers basketball, football, badminton, table tennis, long-distance running, sprinting, the long jump and other common sports. With the development of science and technology, many new sports are loved by college students, and the traditional physical education offerings are obviously not wide enough to meet college students' interest in sports.

© The Editor(s) (if applicable) and The Author(s), under exclusive license to Springer Nature Singapore Pte Ltd. 2021
M. Atiquzzaman et al. (Eds.): BDCPS 2020, AISC 1303, pp. 1772–1775, 2021. https://doi.org/10.1007/978-981-33-4572-0_264

2.2 Lack of Connection Between Classroom Learning and After-Class Exercise
In many college physical education classes, the content taught is often just what is needed to complete the term's tasks; the specific situation of students is not fully considered, and no appropriate exercise methods combining extracurricular exercise are developed for them. Many colleges and universities pay no attention to this point and have formed no systematic theoretical guidance.

2.3 Less Teaching Hours
The current physical education curriculum in colleges and universities is usually one class a week, of two class hours. For most students, two class hours obviously cannot meet their needs for training and learning.

2.4 Limited by Location and Environment
Most physical education takes place on open playgrounds; only a little can be carried out in the school gymnasium. In bad weather such as rain, physical education falls into an awkward situation: not only can good results not be achieved, but many problems may also arise [1]. The above are the most important problems in traditional physical education teaching. There are also others: for example, some high-difficulty actions cannot be repeated and accurately completed by the teacher, nor decomposed so that students can fully understand and master them.
3 Advantages of Multimedia Network Teaching in Physical Education

3.1 More Conducive to Communication Between Teachers and Students
Timely communication between teachers and students is conducive to the smooth progress of physical education. The traditional physical education teaching mode takes the teaching class as its unit; the number of students is limited, and so is communication between students and teachers, because a teacher faces many students at once and cannot cover everything. The multimedia network teaching platform, however, makes online communication between students and teachers possible. With its technical support, students and teachers can communicate remotely; this interactive teaching lets both sides communicate better, improving teachers' teaching efficiency and students' learning efficiency.
3.2 Multimedia Network Teaching Platform Enables Students to Choose Appropriate Courses and Schedules According to Their Own Situation
Teachers are usually the main body of traditional physical education: students in the classroom must take the teacher as the center and follow the teacher's progress in learning and exercising. Because of the limits on class size and class time, individualized teaching is very difficult in college physical education, which makes independent, personalized study hard for students. Through the sports multimedia network teaching platform, students can learn independently and choose content and schedules from the teaching resource database, breaking through the limitations of time and space in traditional sports teaching and realizing personalized teaching in a real sense, with students as the main body. At the same time, students can build network relationships on their own computers through the platform to communicate and discuss together, break the shackles of traditional physical education teaching, and learn physical education knowledge comprehensively [2].

3.3 The Multimedia Network Teaching Platform Can Share and Optimize Physical Education Teaching Resources in Colleges and Universities
The application of multimedia network teaching in physical education allows the information resources of college physical education to be shared and optimized, which is a reform and innovation of traditional physical education. The multimedia network teaching platform provides a database of information resources for university physical education, collecting information from universities, scientific research institutes and libraries all over the world. Network physical education resources come in many types, including sports news, various physical education statistics and sports research libraries; this huge resource database can be accessed freely by students through the platform. The Internet also carries a large amount of knowledge about teaching contents, teaching methods and how to build physical education. Students can independently and freely choose to study or communicate with others according to their own needs, realizing the sharing of resources.
4 Some Problems in the Application of Multimedia Network Teaching in Physical Education

4.1 Physical Education Teachers Should Get Rid of the Shackles of Traditional Physical Education and Master the Concepts and Skills of Multimedia Network Teaching
University PE teachers have differences in their cultural, ideological and ability literacy. Different teachers have different attitudes when using multimedia network
teaching in traditional PE teaching. Most of the young teachers active on the front line support this kind of reform and innovation, but some middle-aged and older teachers do not agree. These teachers have decades of rich teaching experience and are used to the traditional teaching mode, so they are unwilling to give up traditional teaching methods, which puts the reform of traditional college physical education in an awkward position. In addition to the general design techniques for multimedia network teaching websites, it is also necessary to present the characteristics of sports through multimedia network technology, explain the key points of technical actions in detail, and help students learn and master sports skills more intuitively. However, although some PE teachers have no great problems with their professional knowledge of physical education, they face many difficulties in learning and mastering multimedia network technology [3].
5 Some Problems in the Application of Multimedia Network Teaching in Physical Education

At present, the resources for physical education multimedia network teaching, such as online material databases, multimedia network courseware and network courses, are obviously insufficient, and their quality is not high. Even the most commonly used physical education theory teaching is rare on multimedia network platforms, let alone multimedia network application production and courseware development. Question answering also cannot meet students' needs in multimedia network teaching, so the efficiency of students' autonomous learning is reduced and the learning effect is not ideal. Since this is multimedia network teaching, the teaching process naturally depends on computers, projectors, audio and other multimedia equipment, and most of these hardware facilities are installed in fixed teaching places such as multimedia classrooms; sports multimedia network teaching therefore has to take place in these classrooms. This makes teaching space and teaching equipment a limitation of multimedia teaching. At the same time, teaching in gymnasiums and multimedia classrooms is also restricted by the school management system: if PE teachers need to use a multimedia classroom, they must submit an application in advance. This process is cumbersome, time-consuming and energy-consuming, so some teachers are reluctant to offer multimedia network courses.
References 1. Xiaojuan, W.: Research on the application of network education technology in physical education teaching. New West (11) (2010) 2. Song, J.: On the integration of physical education and network education. Anhui Sports Sci. Technol. (03) (2003) 3. Rong, C., Hao, X., Jian, W., Jianping, M.: The current situation and Prospect of multimedia and network assisted teaching in college physical education. J. Beijing Univ. Phys. Educ. (02) (2004)
The Research on Comprehensive Query Platform for Smart Cities Building Zhicao Xu(&) The Third Research Institute, Ministry of Public Security, Shanghai 201204, China [email protected]
Abstract. This study introduces a comprehensive query platform based on big data technology, including its architecture and components. The architecture displays the elements and relationships of the platform, namely the information resource layer, data query layer and service support layer. The platform consists of a query engine and query applications, whose details and functions are listed. Through the query engine, the comprehensive query platform provides unified query access interfaces for various queries of heterogeneous data. In addition, interface design plays a significant role in the unified query access interface in providing a more professional service. Furthermore, the query platform replaces the traditional query method and optimizes the process, which can increase management efficiency for smart city building.

Keywords: Query platform · Smart city · Unified interfaces
1 Introduction

Currently, an increasing number of government departments, enterprises and institutions have gradually adopted digital information management in the city to meet the needs of smart city building and offer better public services [1]. However, the data is gathered from different sources among different government departments, enterprises and public institutions, and it follows non-standard, non-uniform management standards [2]. Meanwhile, the data itself is complex, including structured, semi-structured and unstructured data, which also increases the difficulty of data construction and management. Thus, higher requirements have been put forward for data platform services, and it is necessary to build a data query system. This study focuses on building a data query platform to optimize retrieval and increase working efficiency, providing a better service for the staff involved. Firstly, it presents the architecture of the query platform, which shows its working principle and process. Then, the platform is divided into the comprehensive query engine and the query application to display its components. Finally, the study provides the interface design to retrieve data more specifically.
© The Editor(s) (if applicable) and The Author(s), under exclusive license to Springer Nature Singapore Pte Ltd. 2021 M. Atiquzzaman et al. (Eds.): BDCPS 2020, AISC 1303, pp. 1776–1781, 2021. https://doi.org/10.1007/978-981-33-4572-0_265
2 The Architecture of the Query Platform

Inspired by architecture designs based on big data [3–7], the architecture of the query platform was created. The comprehensive query platform provides unified query access interfaces for various queries of heterogeneous data through the query engine, covering basic data, objectified data and business thematic data. The query platform can be divided into three layers: the information resource layer, the data query layer and the service support layer. The architecture of the platform is shown in Fig. 1.
Fig. 1. The architecture of query platform
1. Information resource layer: The information resource layer is the basic stage of the platform for analysis, covering data access, data processing and data classification. It accesses all kinds of raw data, including people, vehicles, things and organizations; through data analysis, it forms an object database, a thematic database and a knowledge database. Meanwhile, raw data acquired from front-end devices is unified and stored in these three databases, which benefits data security.
2. Data query layer: The data query layer is the core portion of the query platform. The distributed query provides high-performance query support to the service layer through specific approaches such as query distribution, query optimization, index labels, hot-data caching and storage-based query services.
3. Service support layer: The service support layer is implemented in actual applications at the user level. Following the four query rules of precise query, fuzzy query, combined query and range query, it realizes application services such as data aggregation, data correlation, data statistics, data sorting and batched query.
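The "hot data caching" mentioned for the data query layer could, for example, be realized with a least-recently-used (LRU) policy. The following is a minimal sketch; the class name, capacity and eviction policy are our assumptions, not the platform's specification:

```python
from collections import OrderedDict

class HotDataCache:
    """Minimal LRU cache sketch for hot query results."""

    def __init__(self, capacity=1024):
        self.capacity = capacity
        self._items = OrderedDict()

    def get(self, key):
        if key not in self._items:
            return None
        self._items.move_to_end(key)   # mark as most recently used
        return self._items[key]

    def put(self, key, value):
        self._items[key] = value
        self._items.move_to_end(key)
        if len(self._items) > self.capacity:
            self._items.popitem(last=False)  # evict least recently used
```

In such a design, frequently repeated queries are answered from memory while rarely used entries are evicted, which is one way the layer could keep query latency low.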
3 The Components of the Query Platform

See Fig. 2.
Fig. 2. The component of query platform
3.1 The Comprehensive Query Engine
1. Content recognition components: According to the query rules, the engine intelligently identifies the content and gives users quick recommendations of different operation advice. The identification rules mainly cover mobile phone numbers, ID card numbers, passport numbers, emails, IP addresses, vehicle license numbers and others.
2. Objectified query components: These provide quick queries for people, places, things, objects and organization objects to complete object file query, multi-dimensional filtering, object label processing and object association relation query. The key words chiefly involve mobile phone numbers, ID card numbers, passport numbers, emails, IP addresses, vehicle license numbers and others as well.
3. Combined query components: The combined query supports setting the range of query objects based on the time range and data range. In addition, it provides precise search, fuzzy search, keyword combined search and secondary search. The precise search is a search method in which the search term is exactly the same as a field of the resource library, and it supports search and matching of different data resources in different search fields. Both single-condition and multi-condition searches support post-fuzzy search; for addresses and place names, forward and backward fuzzy search is supported as well. Meanwhile, search results are presented in a list, which means
that a certain object can be selected in the result list for a precise search. Moreover, the keyword combined query supports combining keywords with "and", "or" and "not" logic, and it can even support the use of special characters such as "?" and "*". The secondary search supports secondary screening of the query result set under conditions of time range, data resource, object type, event label information and others; the corresponding data resource can be selected for the secondary search based on specific attributes.
4. Batched query components: Firstly, a whole-network, data-focused batched query is offered. The system provides a standardized batched query template, whose content includes query content classification, specific query content, fuzzy query rules and others. The system also supports conditions such as data source range and time range, and it provides batched query task management to support data statistics and the export of query results.
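The keyword rules above ("and"/"or"/"not" combination, "?" and "*" wildcards) can be illustrated with a small matcher. The function `match_record` and its parameters are hypothetical, and the standard-library `fnmatch` module stands in for whatever matching engine the platform actually uses:

```python
import fnmatch

def match_record(record, includes=(), excludes=(), fields=None):
    """Sketch of combined-query matching over one record (a dict).

    `includes` patterns are AND-combined, `excludes` patterns express the
    "not" rule; '?' and '*' wildcards are resolved by fnmatch. A pattern
    matches if it occurs (as a wildcard substring) in any selected field.
    """
    fields = fields or list(record)
    values = [str(record[f]).lower() for f in fields]

    def hit(pattern):
        pattern = pattern.lower()
        return any(fnmatch.fnmatch(v, f"*{pattern}*") for v in values)

    return all(hit(p) for p in includes) and not any(hit(p) for p in excludes)
```

An "or" combination can then be expressed as `any(match_record(r, [p]) for p in patterns)` over the alternative patterns.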
3.2 The Comprehensive Query Application
1. One-key search: This provides an easy-to-use, one-stop search method that classifies the types of commonly searched keywords. Users only need to type keywords and confirm them to retrieve and display the relevance of the result data. It mainly covers basic information, correlations, behavioral trajectories, case-involved information and information statistics.
2. Elements search: The elements search includes face comparison, image search by image, and keyword search. The face comparison uses ID card images, entry-exit photos and other data resources for modeling to build a static face comparison library for the whole city; the library contains face-characteristic databases of the permanent population, temporary residents, criminals, key personnel, fugitives and others. By uploading images, calculating features and comparing them with the face feature library, the image information is displayed in order of similarity from high to low. The image search uses similar-image retrieval and image content retrieval to retrieve all identical or similar images based on the original picture. The keyword search queries image feature tags by keywords to display the image data information of the object image.
3. Full-text search: The full-text search is not limited to specific fields, and its search range over both structured and unstructured data is broader, which suits the characteristics of heterogeneous fused data content and variable-format search. It allows one keyword or a group of keywords and supports keyword combinations, including "and", "or" and "not" rules. The search results support highlighting the target keywords and can be used for a secondary query.
Classification search: With regard to the classification search, it supports both single data source and multiple data sources to search. Then, it can be expanded from the data resources accessing, meanwhile, the search condition items can also be adjusted by the specific data sources. The single search supports personalized query
conditions to search within a specific data resource. The combined search supports multi-condition queries across multiple selected data resources.
3.3 The Interface Design
See Table 1.

Table 1. The interface design form

Interface name | Interface form | Description
Social information query | Web service | According to the ID number, it can get all the social information data of the object in the resource database
Track information query | Web service | According to the ID number, it can get all the track information of the object within a given time and type range
Tagged personnel query | Web service | According to the ID number, it can find whether the object is in the database of key suspects
Case information query | Web service | According to the ID number, it can get all case information of the object
Vehicle information query | Web service | According to the ID number or vehicle license number, it can get all the vehicle information
Vehicle illegal records query | Web service | According to the vehicle license number, it can get all the vehicle illegal records
Event information acquisition | Web service | According to the event number and name, it can get the specific event information
Personnel search | Web service | Sending the search parameters passed by the request to the search engine for personnel search
Event search | Web service | Sending the search parameters passed by the request to the search engine for event search
Item search | Web service | Sending the search parameters passed by the request to the search engine for item search
Location search | Web service | Sending the search parameters passed by the request to the search engine for location search
Organization search | Web service | Sending the search parameters passed by the request to the search engine for organization search
Face comparison | Web service | Getting personnel information based on parameters such as image information and similarity threshold
Image keyword search | Web service | Getting the image information of the corresponding label from the image keyword search
… | … | …
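The interfaces in Table 1 share one pattern: a key parameter (ID number, license plate, event number) is routed to a handler that queries the corresponding resource database. A minimal sketch of that pattern follows; the interface names come from Table 1, but all data, keys and function names are hypothetical, not the platform's real API.

```python
from typing import Optional

# Illustrative in-memory resource databases; keys and records are made up.
RESOURCE_DB = {
    "social_info": {"110101199001011234": {"name": "Zhang San"}},
    "vehicle_info": {"A12345": {"owner": "Li Si", "model": "sedan"}},
}

def query_social_info(id_number: str) -> Optional[dict]:
    """Social information query: all social data for an ID number."""
    return RESOURCE_DB["social_info"].get(id_number)

def query_vehicle_info(plate_or_id: str) -> Optional[dict]:
    """Vehicle information query: keyed by ID number or license plate."""
    return RESOURCE_DB["vehicle_info"].get(plate_or_id)

# A thin web-service layer routes an interface name to its handler.
HANDLERS = {
    "Social information query": query_social_info,
    "Vehicle information query": query_vehicle_info,
}

def web_service(interface: str, key: str) -> Optional[dict]:
    return HANDLERS[interface](key)
```

In a real deployment each handler would wrap a web-service call rather than a dictionary lookup, but the routing structure stays the same.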
The Research on Comprehensive Query Platform
1781
4 Conclusions
The study presents the architecture and components of the comprehensive query platform and points out its benefits. The architecture comprises three layers: the information resource layer, the data query layer and the service support layer, which together give a command of the whole platform. The components of the query engine and the query application display the platform's elements and functions. Through the interface design form, users can retrieve data more specifically [9], which addresses the issues of non-standard and non-uniform data; customized services can also be provided for specific business needs. Integrating advanced technology and resources into one intelligent system is a core step in building smart cities [10]. To that end, the query platform can provide solutions for in-depth business requirements, especially criminal investigation, traffic management and daily security services.
Acknowledgements. This study is supported by the National Key R&D Project of China (No. 2018YFC0809704).
References
1. Dameri, P.R.: Searching for smart city definition: a comprehensive proposal. Int. J. Comput. Technol. 11(5), 2544–2551 (2013)
2. Cheng, B., Longo, S., Cirillo, F., Bauer, M., Kovacs, E.: Building a big data platform for smart cities: experience and lessons from Santander. In: IEEE International Congress on Big Data (2015)
3. Dolenc, M., Katranuschkov, P., Gehre, A., Kurowski, K., Turk, Z.: The InteliGrid platform for virtual organizations interoperability. J. Inf. Technol. Constr. 12, 459–477 (2007)
4. Gu, Y., Jiang, H., Zhang, Y., Zhang, J., Gao, T., Muljadi, E.: Knowledge discovery for smart grid operation, control, and situation awareness – a big data visualization platform. In: North American Power Symposium (2016)
5. Geng, D., Zhang, C., Xia, C., Xia, X., Liu, Q., Fu, X.: Big data-based improved data acquisition and storage system for designing industrial data platform (2019)
6. Lachhab, F., Essaaidi, M., Bakhouya, M., Ouladsine, R.: Towards a context-aware platform for complex and stream event processing. In: International Conference on High Performance Computing and Simulation (2016)
7. Barakhtenko, E., Sokolov, D.: An architecture of the technology platform for computer modeling, design, and optimization of intelligent integrated energy systems. In: International Multi-Conference on Industrial Engineering and Modern Technologies (2019)
8. Li, J., He, S., Yin, W.: The study of pallet pooling information platform based on cloud computing. In: Scientific Programming (2018)
9. Oulasvirta, A.: User interface design with combinatorial optimization. Computer 50(1), 40–47 (2017)
10. Albino, V., Berardi, U., Dangelico, M.R.: Smart cities: definitions, dimensions, performance, and initiatives. J. Urban Technol. 22(1), 3–21 (2015)
Construction and Application of Public Security Visual Command and Dispatch System
Jiameng Zhang(&)
The Third Research Institute, Ministry of Public Security, Shanghai 201204, China
[email protected]
Abstract. Visual command and dispatch in the field of public security refers to the comprehensive utilization of existing network, computer, multimedia and other information technologies, built on resource databases and fusing business systems such as PGIS application systems [1], emergency resource systems [2] and emergency plan systems [3]. Starting from the overall architecture of the public security visual command and dispatch system, this article designs the system's resource access module, resource management module, intelligent analysis module, visual display module and command and dispatch module. The system realizes command and dispatch functions for collecting and analyzing emergency data and for organizing, coordinating and controlling resources, helping public security carry out business work quickly and efficiently, and has practical application value.
Keywords: Command and dispatch · Visualization · Public security
1 Introduction
At present, social activities are becoming more frequent and the Internet keeps expanding and growing more complicated. While these trends promote urban development, potential safety hazards continue to rise, placing higher requirements on public security business work. Public security command and dispatch has long faced many difficulties and challenges: sudden emergencies, multiple and complex scenarios, changeable processes, complicated systems and huge volumes of data. To achieve the command and dispatch goal of "visible on site, knowable situation, reachable instructions, and controllable situation", we have designed and constructed a visual command and dispatch system for efficient collaborative operations. The system integrates heterogeneous data and other information resources, connects public security business systems, analyzes abnormal characteristics, and realizes command and dispatch. It maximizes the application effectiveness of resource sharing and flat command capabilities, and provides an effective guarantee for the construction of public security informatization.
© The Editor(s) (if applicable) and The Author(s), under exclusive license to Springer Nature Singapore Pte Ltd. 2021 M. Atiquzzaman et al. (Eds.): BDCPS 2020, AISC 1303, pp. 1782–1786, 2021. https://doi.org/10.1007/978-981-33-4572-0_266
2 The Overall System Architecture
The public security visual command and dispatch system is divided into five layers: the data resource layer, data aggregation layer, application support layer, application function layer and application display layer [4, 5]. The overall system architecture is shown in Fig. 1.
Fig. 1. The overall system architecture
1. Data resource layer: The data resource layer is the foundation of the entire system. Mainly by docking with the alarm handling system, PGIS application system, emergency plan system, emergency resource system, video networking platform and other operational systems, and by data loading, association integration, field renaming, data coupling, circular referencing, creating embedded tables and merging heterogeneous data, it generates one or more theme databases.
2. Data aggregation layer: Organized around the elements of "person, place, thing, event, organization", the data aggregation layer gathers and integrates emergency resources such as personnel, police information, vehicles, objects, equipment and telephone numbers to form thematic databases, such as the police information database, emergency resource database and view resource library. This layer provides data support for building application models.
3. Application support layer: The application support layer provides a shared resource service platform (user management, identity authentication, access management, operation management and directory services) and a multimedia integrated dispatch platform (information interaction such as video, voice and GPS
positioning [6]) to support software and data integration. This layer guarantees the efficient and reliable operation of the system.
4. Application function layer: The application function layer builds the main functional modules, such as police report, dynamic patrol, command and dispatch, plan management, police situation research and judgment, assessment statistics, system management and video library. The system can quickly be extended with other business functions as needed.
5. Application display layer: The application display layer includes page display functions such as query application, research and judgment analysis, display on the map, command on the map, and assessment and statistics. This layer serves daily work.
3 System Module Design
3.1 Resource Access Module
The public security visual command and dispatch system needs to integrate other business systems and related business data, such as the police information data of the 110 reception and handling system, the mobile short message platform and the GPS monitoring system [7]. Because different business systems differ greatly, the data are heterogeneous and the docking methods are not uniform. An access gateway is therefore introduced as business middleware that shields the differences between external business systems and the heterogeneity of their data. The visual command and dispatch system and the service gateway communicate through the RMI protocol and REST interfaces, agreeing on the connection protocol and method to ensure data consistency.
3.2 Resource Management Module
The public security visual command and dispatch system handles a large number of data resources such as police data, bayonet (checkpoint) data, GPS data and monitoring point data. The database service stores and manages all data that the system maintains and uses; it provides data association, query, modification and deletion, supports regular data backup, supports recovering abnormal data from database backups, and supports large-scale data storage with optimized storage and query strategies for large data sets. By establishing independent databases, the data are stored separately with a clear hierarchy, which improves operability and eases maintenance.
3.3 Intelligent Analysis Module
The intelligent analysis module applies intelligent processing to the system's various data and information resources to realize face detection and recognition, abnormal behavior detection and recognition, vehicle big data analysis with active deployment and control, sensitive target tracking, and so on. From the analysis results of these applications, the risks that may appear in public security business can be
explored, and the system can actively, quickly and accurately analyze and discover sensitive targets and behaviors.
3.4 Visual Display Module
The visual display module provides dynamic display of the GIS map [8], police situations, police positions, vehicle distribution and so on, as well as dynamic visualization of police handling and case response during command and dispatch. The module supports access to, editing of, and scheduling of video resources; it supports connecting multiple cameras, one-click display on the video wall, and real-time zooming, enhancement and color correction of video images.
3.5 Command and Dispatch Module
Through the command and dispatch module, instructions and orders are issued to district/county command centers or police stations, and departments at higher and lower levels, leaders at all levels and their subordinates can communicate bidirectionally in real time. Several dispatch modes are available, including point-to-point command and dispatch, point-to-multipoint command and dispatch, multi-group command and dispatch, and insert command and dispatch. The module coordinates multiple forces to handle incidents together, provides strong support for problem solving, greatly saves time and improves work efficiency.
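The three main dispatch modes named above can be sketched as a small fan-out routine. This is an illustrative model only, assuming a hypothetical Dispatcher class and made-up unit names; it is not the system's real implementation.

```python
# Sketch of point-to-point, point-to-multipoint and multi-group dispatch.
class Dispatcher:
    def __init__(self):
        self.inboxes = {}   # unit name -> list of received orders
        self.groups = {}    # group name -> list of member units

    def register(self, unit):
        self.inboxes[unit] = []

    def add_group(self, group, units):
        self.groups[group] = list(units)

    def point_to_point(self, order, unit):
        self.inboxes[unit].append(order)

    def point_to_multipoint(self, order, units):
        for u in units:
            self.inboxes[u].append(order)

    def multi_group(self, order, groups):
        # De-duplicate units that belong to several groups.
        targets = {u for g in groups for u in self.groups[g]}
        self.point_to_multipoint(order, targets)

d = Dispatcher()
for u in ["station_1", "station_2", "patrol_a"]:
    d.register(u)
d.add_group("stations", ["station_1", "station_2"])
d.add_group("patrols", ["patrol_a"])
d.multi_group("seal off area B", ["stations", "patrols"])
```

Every registered unit in the addressed groups receives the order exactly once, which is the essential property of multi-group dispatch.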
4 System Applications
The public security visual command and dispatch system integrates heterogeneous video and other information resources to maximize the application effectiveness of resource sharing and flat command capabilities. The system mainly has the following applications:
1. Relying on GIS, the system delineates patrol areas for patrolling police and sets the on-duty time and the allowable range and duration of leaving post; it supports automatic off-post alarms and realizes a visual, flattened service assessment function.
2. The system provides an interface for starting the emergency command and dispatch service. In the event of a major alarm, the police can start emergency command and dispatch with one click through the 110 police-receiving system.
3. Through real-time GPS data from police officers and police cars, commanders can grasp the status of front-line police and the distribution of police forces in real time.
4. By docking with third-party short message [9] and voice call interfaces, the system can realize group sending of short messages and one-click alarm notification.
5. Combined with video surveillance [10], through real-time video or video playback, the system can grasp the situation at the crime scene in real time.
6. Through bayonet (checkpoint) vehicle deployment control, a suspect vehicle can be placed under citywide surveillance. Once a snapshot comparison produces a hit, the system automatically notifies the user, realizing intelligent comparison.
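The deployment-control flow in item 6 reduces to comparing each checkpoint snapshot against a watchlist and notifying on a hit. A hedged sketch follows; the plates, checkpoint names and function names are made up for illustration and do not reflect the real system.

```python
# Watchlist of plates currently under deployment control (illustrative).
watchlist = {"B99999", "N88888"}
notifications = []

def on_snapshot(plate: str, checkpoint: str) -> bool:
    """Handle one checkpoint snapshot; notify the user on a watchlist hit."""
    if plate in watchlist:
        notifications.append(f"ALERT: {plate} seen at {checkpoint}")
        return True
    return False

on_snapshot("A12345", "checkpoint_03")   # no hit, nothing recorded
on_snapshot("B99999", "checkpoint_07")   # hit -> notification appended
```

In practice the comparison would run against recognized plate numbers or face features from the snapshot rather than an exact string match, but the hit-then-notify structure is the same.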
5 Conclusions
In this study, we combine the actual needs of the command and dispatch work of public security organs and, aiming at the shortcomings in that work, propose the construction of a public security visual command and dispatch system. The system realizes intelligent, visual and three-dimensional command and dispatch, improves on-site handling and command support capabilities, strengthens emergency response and resource management, and promotes the further construction of public security informatization.
Acknowledgements. This work is supported by the National Key R&D Program of China (No. 2018YFB1004605).
References
1. Qiao, Q.: Design and implementation of police command and control system based on GIS. National University of Defense Technology (2012)
2. Lei, Y., Bu, F.: The design and realization of emergency visual command and dispatch communication platform. In: International Conference on Information Science and Control Engineering (2018)
3. Wang, Y., Wu, Z.Y., Wang, Y.: City management and dispatch system based on GIS and video surveillance. Comput. Eng. Des. 37(7), 1975–1981 (2016)
4. Chen, X.: Design and implementation of multi-mode integrated command and control system framework. Geospatial Inf. (2019)
5. Gu, Y.J., Zhang, C., Li, L.Z.: Research on the architecture of emergency command system based on internet of things. Proc. Third China Command Control Conf. 1, 51–54 (2015)
6. Wang, L., Zhou, Z.M., Li, Y.L., Yi, J.: Research on police dispatching system based on Beidou positioning technology. Sci. Technol. Innov. Appl. 2016(32), 55–56 (2016)
7. Liu, D., Tian, Y.Z., Cao, H.J., Zhang, J.B.: Application of Beidou navigation and "Cloud+End" technology in public security command and dispatching. Bull. Surveying Mapp. 2013(12), 74–77 (2013)
8. Li, R.M., Lu, H.P.: Intelligent traffic management command and dispatch system based on WebGIS. Comput. Eng. (2007)
9. Jiang, J.A.: Application of 4G mobile communication technology in fire emergency command system. Public Commun. Sci. Technol. 8(10), 78–79 (2016)
10. Hou, Z.Q., Hu, R.M., Wang, Z.Y.: Application and bottleneck of mobile multimedia technology in emergency command system. Electroacoust. Technol. 35(11), 49–53 (2011)
Research on Dynamic Resource Scheduling Technology of Dispatching and Control Cloud Platform Based on Container Dong Liu(&), Yunhao Huang, Jiaqi Wang, Wenyue Xia, Dapeng Li, and Qiong Feng Beijing Key Laboratory of Research and System Evaluation of Power Dispatching Automation Technology, State Key Laboratory of Power Grid Security and Energy Saving, China Electric Power Research Institute, Haidian District, Beijing 100192, China [email protected]
Abstract. With the continuous advancement of the dispatching and control cloud, traditional platform systems and virtualization technologies struggle to provide good support for resource scheduling, elastic scaling and rapid migration and deployment, so the future availability and maintainability of these business systems face enormous challenges. The rapid development of Docker container technology addresses the shortcomings of the traditional virtual-machine-based PaaS platform. In this paper, Docker container technology is used to build the PaaS platform services of the dispatching and control cloud. Through analysis and verification of elastic scaling, resource scheduling, container response and other performance aspects, the experiments show that the Docker-based PaaS platform performs well in the rational scheduling of resources, rapid response and elastic scaling.
Keywords: Dispatching and control cloud · PaaS platform · Docker container · Resource scheduling · Elastic scaling
1 Introduction
During the 13th Five-Year Plan period, China's economic development entered the "new normal". The operating characteristics of the UHV power grid are becoming more and more complex, and the risk to safe and stable operation continues to increase [1–3]. The development of new energy will continue to accelerate, and the requirements of energy conservation and emission reduction keep rising. Complex internal and external environmental factors and the trend of market-oriented power reform have brought many challenges to grid operation: the integrated characteristics of power grid operation are prominent; the demand for global surveillance, network-wide prevention and control, and centralized decision-making is increasingly prominent; and power market reform puts great pressure on power grid dispatching operation [4–7]. To meet these challenges, the company proposed to develop and build a new generation of dispatching and control system to further improve the technical support capacity of power grid dispatching and control.
© The Editor(s) (if applicable) and The Author(s), under exclusive license to Springer Nature Singapore Pte Ltd. 2021 M. Atiquzzaman et al. (Eds.): BDCPS 2020, AISC 1303, pp. 1787–1793, 2021. https://doi.org/10.1007/978-981-33-4572-0_267
At the same time, the company carried out
the research and pilot work on the regulation cloud platform, and carried out in-depth exploration of platform support technologies such as virtualization and service-oriented technology [8–10]. When the traditional control cloud based on virtual machines deploys business modules, virtual machine resource consumption is excessive, software environment compatibility is poor, and the rapid migration and deployment capability is weak. When the number of service requests or computing tasks is too large, the limited hardware resources in the cloud platform cannot meet the rigid requirements; when the task volume is very small, hardware resources are often left idle and wasted. This brings great challenges to operation and maintenance and poses a threat to the continued promotion of the control cloud. With its advanced design concept and lightweight features, Docker container technology is used more and more widely in cluster systems. As Docker gradually entered developers' vision, the construction of cloud platforms gained a new direction [11, 12]. A container cloud platform is mainly responsible for real-time resource monitoring, efficient allocation, node management, container scheduling and deployment, automatic publishing and API control. Docker-based container technology has been widely recognized in the IT field, but in power grid dispatching, especially in the construction of the regulation cloud, how to use Docker container technology to achieve efficient and reasonable utilization of resources is still at the exploration stage [13, 14]. This paper proposes the research and design of a Docker-based PaaS platform for the dispatching cloud. First, the application requirements of container technology in the control cloud are analyzed; then the key technologies and the design of each module are analyzed in detail. Finally, the system performance in container response time, resource scheduling and elastic scaling is analyzed; the results show that the PaaS platform design based on Docker container technology performs well in resource scheduling, container response and elastic scaling.
2 Demand Analysis of Docker Container Technology for the Dispatching Cloud Platform
Under the architecture of "physical distribution and logical unification", the control cloud platform needs a divide-and-conquer approach to split applications and business both vertically and horizontally. As business is broken into microservices, more and more services are provided, the dependency relationships between services grow complicated, and the overall complexity of the system continues to rise. Virtualization technology treats power grid business as a resource service, allowing the required resources to be obtained from the resource pool anytime, anywhere, conveniently and on demand. However, as the cluster scale expands, low resource utilization and resource competition become problems. To support centralized analysis and decision-making and global service sharing, associated business applications need to be encapsulated into independent units in which the computation process is closed. Building such independent units with full virtualization carries a heavy resource load, which hinders the rapid elastic expansion of business service units. The emergence of the container
brings new opportunities for the new generation of dispatching and control systems. Compared with the traditional virtual machine, the lightweight container has significant performance advantages and is convenient for application deployment. It allows resource allocation and management with the container as the smallest computing unit. The Docker container platform has good elastic scalability: through real-time monitoring it evaluates the utilization of cluster resources, and it uses a dynamic strategy of cluster resource allocation and recovery to start and recycle application containers on cluster nodes in a reasonable way, so as to keep the application containers balanced across the whole system and improve its overall operating efficiency.
3 Design and Implementation of the Dispatching and Control Cloud PaaS Platform Based on Containers
The technical architecture of the Docker-container-based PaaS platform of the control cloud is shown in Fig. 1.
Fig. 1. Logic schematic of PaaS platform based on Docker container
In Fig. 1, etcd is responsible for storing the state information of these resources, and kube-scheduler is responsible for the specific resource scheduling tasks. Kubernetes provides applications with a complete set of functions such as resource scheduling, health monitoring, balanced disaster recovery, deployment and operation, service discovery and elastic scaling. Users use the system by submitting a Pod configuration manifest to Kubernetes; the system automatically selects an appropriate worker node to run the Pod and manages its life cycle.
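A Pod configuration manifest of the kind a user submits can be sketched as a plain dictionary. The image name and resource figures below are illustrative, not the platform's actual values; the commented-out submission assumes the official Kubernetes Python client is installed and a live kubeconfig is available.

```python
# Minimal Pod manifest with resource requests/limits (illustrative values).
pod_manifest = {
    "apiVersion": "v1",
    "kind": "Pod",
    "metadata": {"name": "grid-analysis", "labels": {"app": "dispatch-cloud"}},
    "spec": {
        "containers": [{
            "name": "grid-analysis",
            "image": "registry.example.com/grid-analysis:1.0",  # hypothetical image
            "resources": {
                "requests": {"cpu": "500m", "memory": "512Mi"},
                "limits": {"cpu": "1", "memory": "1Gi"},
            },
        }],
    },
}

# Against a real cluster, the manifest would be submitted roughly like this:
#   from kubernetes import client, config
#   config.load_kube_config()
#   client.CoreV1Api().create_namespaced_pod("default", pod_manifest)
```

The resource requests are what the scheduler uses when it selects a worker node; the limits cap what the container may consume once running.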
This paper uses the Kubernetes platform to schedule resources; the scheduler component performs the scheduling inside the platform. The scheduler module first collects and analyzes the resource usage of each node in the PaaS platform and then decides to allocate newly created containers to the less-loaded nodes, so that the load is balanced across the nodes. Figure 2 shows the resource scheduling process: a request to create a container is sent to the API server; the API server selects the processing method according to the request type and writes the relevant information back; the scheduler then collects and analyzes the load information of the current cluster nodes and selects a node according to that load information.

Fig. 2. Resource scheduling flow chart
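The node-selection step in Fig. 2 can be sketched as filter-then-pick: drop nodes that cannot satisfy the container's resource constraints, then choose the least-loaded survivor. The node data, field names and thresholds below are illustrative assumptions, not values from the platform.

```python
# Hypothetical per-node state: free CPU (cores), free memory (MiB), load.
nodes = {
    "node01": {"cpu_free": 2.0, "mem_free": 4096, "load": 0.60},
    "node02": {"cpu_free": 4.0, "mem_free": 8192, "load": 0.30},
    "node03": {"cpu_free": 1.0, "mem_free": 1024, "load": 0.10},
}

def schedule(cpu_req, mem_req):
    """Return the least-loaded node with enough free CPU and memory."""
    feasible = [
        (info["load"], name)
        for name, info in nodes.items()
        if info["cpu_free"] >= cpu_req and info["mem_free"] >= mem_req
    ]
    if not feasible:
        return None  # no node can host the container
    return min(feasible)[1]
```

A container needing 2 cores and 2 GiB would skip node03 (too little CPU) and land on node02, the less-loaded of the remaining nodes; kube-scheduler generalizes this idea into separate filtering and scoring phases.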
The scheduling strategy plays an important role in a large-scale Docker cluster. With an appropriate strategy, tasks can be distributed reasonably according to the real processing capacity of each service node and of the container itself, and containers that cannot execute can be migrated sensibly when a service node fails. When a task request from the Docker client arrives at the scheduling platform, the platform first filters out the qualified nodes according to the configured constraints (memory, CPU and other resource constraints). In a real production environment, however, deep integration between application load and system resources cannot be achieved directly. Using the container as the basic unit of resource scheduling makes the PaaS platform's resources elastically scalable. First, the platform controls the creation of containers according to resource scheduling. Then a flexible scheduling strategy dynamically increases and decreases the number of containers according to the actual load of requests, so that the volume of task requests stays dynamically consistent with the container load: when the number of requests increases, the container load rises. When the container load is high, a request is sent to the scheduling platform to add
containers; an appropriate node is then selected according to the load of the cluster nodes, and the new container is assigned to that node. The platform keeps monitoring the load status of the containers; once resource utilization falls below the set threshold, a request to destroy a container is sent to the platform to release resources and shrink the cluster.
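The grow-and-shrink rule just described can be sketched as a threshold autoscaler. The thresholds, capacity figure and function name are illustrative assumptions, not the platform's actual settings.

```python
# Illustrative elastic-scaling rule: add a container when per-container
# utilization exceeds an upper threshold, destroy one when it falls
# below a lower threshold (never below the minimum count).
SCALE_UP_LOAD = 0.8     # utilization that triggers growth
SCALE_DOWN_LOAD = 0.2   # utilization below which a container is released
MIN_CONTAINERS = 1

def desired_containers(current, capacity_per_container, request_rate):
    """Return the new container count for the observed request rate."""
    utilization = request_rate / (current * capacity_per_container)
    if utilization > SCALE_UP_LOAD:
        return current + 1          # ask the platform for one more container
    if utilization < SCALE_DOWN_LOAD and current > MIN_CONTAINERS:
        return current - 1          # destroy an idle container
    return current
```

With 4 containers each handling 100 requests/s, a burst to 350 requests/s (utilization 0.875) triggers growth to 5, while a lull at 60 requests/s (utilization 0.15) shrinks the pool to 3; the dead band between the thresholds prevents oscillation.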
4 Experiment and Analysis
In this paper, the performance of resource scheduling and of the container list display is verified.
4.1 Resource Scheduling Performance Analysis
The resource scheduling module is mainly responsible for the reasonable allocation of resources to ensure the normal and efficient operation of the containers. This section verifies that the resource scheduling module can independently select nodes in the cluster according to their load and place newly created containers on nodes with sufficient resources as far as possible, so that the nodes balance the load. First, the dispatching and control cloud PaaS platform application is packaged into an image; then 100 containers are created on the container management platform from this image, and the distribution of the containers over the nodes is collected over time. As shown in Fig. 3, the containers are basically evenly distributed across the cluster nodes.

Fig. 3. Distribution of containers on nodes (number of Pods on node01, node02 and node03 over time)
4.2 Performance Analysis of Container List Display
Many containers are started in turn on the container management platform and the starting time required for each container is recorded, as shown in Fig. 4. At the same time, the application in each container is deployed to a corresponding virtual machine, and the virtual machines are started in turn; the starting time of each virtual machine is shown in Fig. 5.
Fig. 4. Response time of the container list display (ms) over eight experiments
Fig. 5. Starting time diagram of the virtual machine (s) over eight experiments
As the figures above show, the average response time of the container list display is 92 ms, while the average startup time of a virtual machine is 131 s, i.e., at the minute level. The comparison shows that the container cloud platform responds quickly and can provide fast and convenient service.
5 Conclusions
In this paper, Docker container technology is used to build a containerized control cloud PaaS platform covering application development, deployment and the operating environment. It realizes rapid construction, agile delivery and easy operation and maintenance for dynamically deployed container resources and for power grid analysis and decision-making applications. To continuously improve the real-time sharing ability of the control information of the new generation dispatching and control system, the
application will be refined further. This will improve the ability to control the large power grid and to optimize the allocation of resources on a large scale.
Acknowledgment. This work was supported by the Science and Technology Program of State Grid Corporation of China under Grant No. 5442DZ190011 (Research on Key Technology of Application Operation Management Based on Container).
References
1. Shouyu, L., Rong, H., Huafeng, Z., et al.: A new generation of power dispatching automation system based on cloud computing architecture. South. Power Syst. Technol. 10(6), 8–14 (2016)
2. Hongqiang, X.: The architecture of dispatching and control cloud and its application prospect. Power Syst. Technol. 41(10), 3104–3111 (2017)
3. Yang, C., Zhiyuan, G., Shengchun, Y., et al.: Application of cloud computing in power dispatching systems. Electr. Power 45(6), 14–17 (2012)
4. Yun, L.: Research and improvement of DevOps operation and maintenance system based on Docker platform. Comput. Knowl. Technol. 14(26), 2 (2018)
5. Zhenyu, C., Dapeng, L., Yunhao, H., et al.: A partition coordinated and optimized operation design for power grid wide area coordination and interaction service. In: Proceedings of 2nd IEEE Conference on Energy Internet and Energy System Integration, Beijing. IEEE (2018)
6. Zhenyu, C., Dapeng, L., Zhaoyun, D., et al.: The application of power grid equipment plug and play based on wide area SOA. In: Proceedings of 2nd IEEE International Conference on Energy Internet, Beijing, pp. 19–23. IEEE (2018)
7. Dapeng, L., Zhenyu, C., Zhaoyun, D., et al.: A wide area service oriented architecture design for plug and play of power grid equipment. Procedia Comput. Sci. 129, 353–357 (2018)
8. Xiaolin, Q., Zhenyu, C., Dapeng, L., et al.: Model management and service based power grid multi-agent dispatcher training simulator. In: Proceedings of 2nd IEEE Conference on Energy Internet and Energy System Integration, Beijing. IEEE (2018)
9. Ming, L., Junfeng, L.: Containerized deployment of Kubernetes and IPv6 communication in containers. Electron. Technol. Softw. Eng. 16, 32–33 (2019)
10. Yaozhong, X., Junjie, S.: Present situation and technical prospect of smart grid dispatching control system. Power Syst. Autom. 39(01), 2–8 (2015)
11. Xiao, H., Fangchun, D., Guangyi, L., et al.: A big data application structure based on smart grid data model and its practice. Power Syst. Technol. 40(10), 3206–3212 (2016)
12. Hongqiang, X.: Structural design and application of common data objects for power dispatching oriented to dispatching and control cloud. Power Syst. Technol. 41(10), 3104–3111 (2017)
13. Qianru, Z.: Design and implementation of container engine management platform based on PaaS. Beijing University of Posts and Telecommunications (2017)
14. Yi, Z., Yunhua, H., Ding, H.: Construction of lightweight PaaS platform for power grid based on Docker. Inf. Comput. (Theor. Ed.) 11, 75–78 (2017)
Application of Container Image Repository Technology in Automatic Operation and Maintenance of the Dispatching and Control Cloud Lei Tao, Yunhao Huang(&), Dong Liu, Xinxin Ma, Shuzhou Wu, and Can Cui Beijing Key Laboratory of Research and System Evaluation of Power Dispatching Automation Technology, State Key Laboratory of Power Grid Security and Energy Saving, China Electric Power Research Institute, Haidian District, Beijing 100192, China [email protected]
Abstract. With the continuous development of container technology, the scale of the dispatching and control cloud built on it keeps expanding. At the same time, the application portability of a virtualization-based cloud platform is poor, and the manual operation and maintenance mode of the dispatching and control cloud is inefficient and error-prone, making it difficult to guarantee the safe, high-quality operation of the power grid and the lean, efficient operation of dispatching management. Containerizing cloud platform applications with container technology and relying on a Continuous Integration and Continuous Deployment (CI/CD) platform can achieve efficient and accurate maintenance of the dispatching and control cloud. In this paper, a CI/CD platform consisting of Jenkins, Kubernetes and Docker is introduced to complete the operation and maintenance tasks of the dispatching and control cloud, such as automated agile construction and automatic publishing. Finally, analysis on the dispatching and control cloud verifies that the platform is effective in automated construction, publishing efficiency, gray publishing and update, and operation and maintenance quality. Keywords: Dispatching and control cloud · Continuous Integration and Continuous Deployment · Jenkins+Kubernetes+Docker integration · Automated operation and maintenance · Container
1 Introduction As the support platform of the new generation dispatching control system, the control cloud is a service platform designed based on the concept of cloud computing and oriented to the power grid regulation business, reflecting the characteristics of hardware resource virtualization (sharing and dynamic deployment), data standardization and application service [1–3]. The management function of the dispatching and control cloud platform can collect the status information of each layer of the control cloud © The Editor(s) (if applicable) and The Author(s), under exclusive license to Springer Nature Singapore Pte Ltd. 2021 M. Atiquzzaman et al. (Eds.): BDCPS 2020, AISC 1303, pp. 1794–1800, 2021. https://doi.org/10.1007/978-981-33-4572-0_268
platform in a unified, real-time manner, monitor the operation status of the dispatching and control cloud platform, realize centralized management of its operation state, and realize visual, fine-grained display of its operation information [4–6]. The new generation of dispatching control systems is changing from a chimney architecture to a shared-service-based architecture [7–9]. The current virtual resource management mechanism is not flexible enough in service deployment, upgrade, capacity expansion, rollback, offlining, etc., which limits application flexibility and brings great challenges to field operation and maintenance [10–12]. It is necessary to explore an application software management process, with supporting tools, that covers automatic software construction, testing, release and deployment, so as to reduce the complexity of building the R&D environment, reduce system upgrade costs, improve testing accuracy, and thus effectively improve the efficiency of system development. Containerized applications not only use container technology to make application development, release and deployment more convenient and practical; it is also necessary to study, from the cloud platform perspective, how to manage and control the cloud container platform so as to efficiently complete the tasks of container orchestration and deployment, resource scheduling, service discovery, health monitoring, etc. At present, the scale of the dispatching and control cloud is gradually expanding, and the calling relationships between application services are increasingly complicated. Container technology is the future trend of cloud computing, and container-based automatic operation and maintenance technology can effectively maintain large-scale cloud platform applications [13, 14].
In this paper, combined with the application requirements of the dispatching and control cloud platform, an overall scheme for applying a CI/CD platform based on Jenkins+Kubernetes+Docker integration to the operation and maintenance of the control cloud is proposed; its key technologies are studied, and each link of automated construction, automated deployment and automated upgrade is designed in detail.
2 Demand Analysis of Dispatching and Control Cloud Operation and Maintenance
2.1 Dispatching Cloud Automation Operation and Maintenance Requirements
According to the dispatching cloud construction planning, a hierarchical mode of two-level deployment and multi-level maintenance is adopted. The control centers at all levels are composed of infrastructure as a service (IaaS), platform as a service (PaaS) and software as a service (SaaS). For a support platform with rich applications and powerful functions, the applications are presented very differently, which requires strong compatibility of the system environment. Operation and maintenance tools such as SaltStack and Ansible cannot provide this system-environment compatibility. Instead, the platform uses Kubernetes+Docker+Jenkins integrated automatic operation and
maintenance tools to achieve accurate maintenance and ensure the stable operation of the control system.
2.2 Dispatching Cloud Platform Environment Compatibility Requirements
Power dispatching is vital to the safe and stable operation of the power grid. At present, the application modules of the platform service layer in the control cloud are independent of each other, and each application describes the operation status of the power grid only within its own scope. The components in the cloud platform are not only deployed independently but also developed independently; the R&D and test environments of the development teams differ. When these components are deployed and run together on the same platform, environment mismatches prevent the integrated applications from working normally, affecting the normal operation of cloud platform applications. Container technology has strong portability, compatibility and adaptability to the running environment, which can solve the cloud platform environment compatibility problem.
3 Overall Design of the CI/CD Platform for the Dispatching Cloud
The dispatching and control cloud follows the management mode of “unified management and hierarchical scheduling”; its overall architecture is shown in Fig. 1. In the whole control cloud, the master node occupies the core position and is responsible for managing metadata and dictionary data and for establishing and collecting the data model; the cooperative nodes are responsible for collecting the provincial models and data and for synchronizing/forwarding the master node's data.

Fig. 1. Architecture of “dispatching and control cloud”
In this paper, the scheme of the dispatching cloud CI/CD platform is designed as follows: the whole CI/CD process is completed through Jenkins+Kubernetes+Docker integration,
that is, cloud application services are fully automated from submission through construction, testing, pre-deployment and official-environment release. Figure 2 shows the overall framework design of the CI/CD platform.
Fig. 2. The overall framework of the CI/CD platform
The process of continuous integration and continuous release is shown in Fig. 3. Dispatching cloud application services are fully automated over the whole process from product development to submission, integrated construction, automatic testing, image building, pre-deployment and formal environment release. Developers submit code to a Git repository; Jenkins serves as the basic software for continuous integration, and a master–slave architecture is used for code pulling, unit testing and code construction. Jenkins applies to the Kubernetes platform for a pod (container group) and uses that pod as a slave node to complete the code pulling, testing and building tasks. After the pod completes these tasks, it pushes the image to the Harbor image repository in preparation for deployment by the Kubernetes platform in the test and production environments.
Fig. 3. The flow chart of CI/CD
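The stages above (code pull, unit test, image build, push to Harbor, deploy) can be sketched as a dry run that only assembles the shell commands a Jenkins slave pod would execute; the repository URL, image name, tag and registry host below are hypothetical, not the paper's actual configuration:

```python
# Dry-run sketch of the CI/CD stages described above: it only assembles the
# commands a Jenkins slave pod would run; all names and tags are made up.
def pipeline_commands(repo_url, image, tag, registry="harbor.example.local"):
    full_image = f"{registry}/{image}:{tag}"
    return [
        f"git clone {repo_url} src",                                   # code pull
        "python -m pytest src/tests",                                  # unit test
        f"docker build -t {full_image} src",                           # image build
        f"docker push {full_image}",                                   # push to Harbor
        f"kubectl set image deployment/{image} {image}={full_image}",  # deploy
    ]

cmds = pipeline_commands("https://git.example.local/grid/topic-query.git",
                         "topic-query", "v1.2.0")
for c in cmds:
    print(c)
```

Because the function only returns command strings, the sequence can be reviewed or logged before any stage actually runs.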
4 Experiment and Analysis
In this paper, the efficiency of container automation deployment and the performance of container gray publishing are tested and analyzed.
4.1 Efficiency Analysis of Automated Deployment
In this experiment, three applications in the dispatching and control cloud (power grid topic query, model service and permission service) are started both manually and through the automatic operation and maintenance platform designed in this paper, and the start-up times are recorded, as shown in Fig. 4. The figure shows that the platform realizes one-click build and release, the whole process is fully automated, and the applications start quickly and efficiently.

Fig. 4. The statistical diagram of start-up time (manual vs. automatic start for the three applications)
4.2 Performance Analysis of Gray Level Publishing
In this experiment, 30 pods of the topic query application image are taken as the observation objects, and the update step is set to 16.7%, i.e. five pods are updated at a time. The performance of gray publishing is analyzed by observing how the numbers of pods in the old and new deployments change. As shown in Fig. 5, each time the health check passes, the new version scales up by 5 pods while the old version scales down by 5 pods, and this cycle repeats until the update is complete. The fourth group of data shows no progress because its health check did not pass. The experimental results show that the gray publishing designed in this paper is feasible: it preserves system stability and realizes a smooth transition without users perceiving the change.
Fig. 5. The performance analysis chart of grayscale release (numbers of old-version and new-version pods over successive update steps)
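The batch-wise replacement just described can be mimicked with a toy simulation; the health-check sequence below is hypothetical and merely reproduces the observed pattern of a single failed fourth check:

```python
# Toy simulation of the gray (rolling) update described above: 30 pods,
# 5 replaced per step, each step gated by a health check. A failed check
# means the step makes no progress, exactly as observed in the experiment.
def rolling_update(total=30, batch=5, health_checks=None):
    old, new, history, step = total, 0, [], 0
    while old > 0:
        ok = health_checks[step % len(health_checks)] if health_checks else True
        step += 1
        if ok:                      # scale new up and old down only on success
            new += batch
            old -= batch
        history.append((old, new))  # pod counts after this step
    return history

# hypothetical run: the 4th health check fails once, so that step stalls
hist = rolling_update(health_checks=[True, True, True, False,
                                     True, True, True, True])
print(hist)
```

The final state is (0 old pods, 30 new pods), with one stalled step where the counts do not change, matching the shape of Fig. 5.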
5 Conclusions
In this paper, a CI/CD platform is used to realize the automatic construction, automatic deployment, and smooth update and upgrade of container images. The proposed application scheme of a CI/CD platform based on Jenkins+Kubernetes+Docker integration is feasible for dispatching cloud operation and maintenance: it realizes efficient construction and deployment, smooth transition and upgrading without users noticing, reduces the operation and maintenance workload, and improves work efficiency.
Acknowledgment. This work was supported by Science and Technology Program of State Grid Corporation of China under Grant No. 5442DZ190011 (Research on Key Technology of Application Operation Management Based on Container).
References 1. Song, Y., Zhou, G., Zhu, Y.: Present status and challenges of big data processing in smart grid. Power Syst. Technol. 37(4), 927–935 (2013) 2. Cleveland, F.: IntelliGrid architecture: power system functions and strategic vision. Utility Consulting International (2005) 3. Sheng, H., Liu, H., Zheng, L.: Research on information system automation operation and maintenance based on Docker technology. Digital Commun. World (11), 89+13 (2018) 4. Ling, Y.: Research and improvement of DevOps operation and maintenance system based on docker platform. Comput. Knowl. Technol. 14(26), 2 (2018) 5. Tao, Z.: Research and Application of Continuous Integration Based on Jenkins. South China University of Technology (2012) 6. Xing, B., Wang, G.: Design and implementation of equipment abnormal diagnosis system based on Python and Jenkins. Softw. Guide 16(11), 110–113 (2017) 7. Zhou, Y., Ou, Z., Li, J.: Research on continuous integration automatic deployment based on Jenkins. Comput. Digital Eng. 2, 267–270 (2016) 8. Jing, Z.: Design and implementation of docker distributed container automation operation and maintenance system based on K8S. Central South University for Nationalities (2018)
9. Liu, M., Li, J.: Containerized deployment of Kubernetes and IPv6 communication in containers. Electron. Technol. Softw. Eng. 16, 32–33 (2019) 10. Xin, Y., Shi, J.: Present situation and technical prospect of smart grid dispatching control system. Power Syst. Autom. 39(01), 2–8 (2015) 11. Xiao, H., Fangchun, D., Guangyi, L., et al.: A big data application structure based on smart grid data model and its practice. Power Syst. Technol. 40(10), 3206–3212 (2016) 12. Xu, H.: Structural design and application of common data objects for power dispatching oriented to dispatching and control cloud. Power Syst. Technol. 41(10), 3104–3111 (2017) 13. Zhao, L., Zhang, L., Wang, Z., et al.: Design of human-machine interaction interface with multiple views for dispatching automation system. Autom. Electr. Power Syst. 42(6), 86–91 (2018) 14. Wang, R., Chen, F., Chen, Z., et al.: StudentLife: assessing mental health, academic performance and behavioral trends of college students using smartphones. In: Proceedings of the 2014 ACM Conference on Pervasive and Ubiquitous Computing, pp. 3–14. ACM, New York (2014)
A User Group Classification Model Based on Sentiment Analysis Under Microblog Hot Topic Mengyao Zhang and Guangli Zhu(&) School of Computer Science and Engineering, Anhui University of Science and Technology, Huainan 232001, China [email protected], [email protected]
Abstract. User classification based on sentiment analysis makes it easier for researchers to understand the sentiment behavior characteristics of different groups. Since current user classification methods do not consider users' sentiment fluctuation, this paper proposes a user group classification model that takes the change of users' sentiment into account. Firstly, sentiment analysis of Microblog comments is performed based on a Microblog sentiment dictionary. Then the user's temporal sentiment vector and sentiment feature vector are calculated according to the rules defined in this paper. Finally, k-means clustering is used to classify users based on their sentiment feature vectors. Experimental results validate the accuracy of the proposed model. Keywords: Sentiment analysis · Microblog · Dictionary · Clustering · User group classification
1 Introduction
In the 21st century, Microblog is an important platform for users to find information and express opinions, so it is significant to analyze the sentiment behavior of Microblog users. The basis of user group classification by user sentiment characteristics is sentiment analysis [1, 2]. Sentiment analysis methods can be roughly divided into three types: dictionary-based methods, machine learning methods, and deep learning methods. 1) Dictionary-based methods mainly build sentiment dictionaries [3, 4]; constructing a sentiment dictionary combines new sentiment words with an existing dictionary. 2) Machine learning methods use a machine learning model to analyze the text's sentiment tendency [5]. 3) Deep learning methods use a deep learning model to learn the sentiment expressed in the text [6]. This paper uses a dictionary-based method for sentiment analysis. We constructed a sentiment dictionary of 27850 words on the basis of the Chinese sentiment vocabulary database of Dalian University of Technology, combined with network words, negation words and degree adverbs. Each sentiment word in this dictionary carries a weight and a polarity.
© The Editor(s) (if applicable) and The Author(s), under exclusive license to Springer Nature Singapore Pte Ltd. 2021 M. Atiquzzaman et al. (Eds.): BDCPS 2020, AISC 1303, pp. 1801–1807, 2021. https://doi.org/10.1007/978-981-33-4572-0_269
The choice of features and the user classification method affect the accuracy of user group classification. However, existing user classification models do not consider users' sentiment fluctuation; they simply classify users with the same sentiment into one category. Therefore, the proposed model based on sentiment analysis should solve the following problems: ① how to choose the user features; ② whether the classification results are meaningful. The rest of the paper is organized as follows: Sect. 2 calculates the sentiment feature vectors of users; Sect. 3 uses the k-means algorithm to classify user groups; Sect. 4 conducts experiments on the crawled Microblog corpus; Sect. 5 summarizes the paper.
2 User Sentiment Feature Vector Calculation
In this section, we calculate the sentiment feature vector of each user. This mainly includes Microblog comment preprocessing and user sentiment feature vector calculation.
2.1 Data Preprocessing
Microblog comments contain a lot of noise, so it is necessary to preprocess them to improve the accuracy and efficiency of sentiment analysis.
(1) Trimming of Microblog texts: each crawled user's text is saved in the format $U_j = (w_1, w_2, \ldots, w_n)$, where $U_j$ represents user $j$ and $w_1, w_2, \ldots, w_n$ are the Microblog comments published by the user in chronological order.
(2) Remove useless symbols: remove the topic identifier "#" and the "@" identifier.
(3) Translation: translate English that appears in the text into Chinese.
2.2 Sentiment Analysis
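Sentiment scoring operates on the cleaned comments produced in Sect. 2.1; a minimal sketch of that preprocessing (pure-Python regex cleaning; the sample comments are made up, and the English-to-Chinese translation step is omitted):

```python
import re

# Sketch of the Sect. 2.1 cleaning: strip "#topic#" and "@mention"
# identifiers from a user's chronologically ordered comments
# U_j = (w_1, ..., w_n). Sample strings below are hypothetical.
def clean_comment(text):
    text = re.sub(r"#[^#]*#", "", text)   # remove topic identifiers
    text = re.sub(r"@\S+", "", text)      # remove @ mentions
    return text.strip()

user_j = ["#考研应不应该压分# 压分太不公平了", "@某同学 我也这么觉得"]
print([clean_comment(w) for w in user_j])
```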
(1) User temporal sentiment vector calculation. After preprocessing, the Microblog comments posted by each user are stored in chronological order. The sentiment analysis of each comment is based on the sentiment dictionary and semantic rules that have been constructed. The sentiment analysis results, recorded in chronological order, form the user's temporal sentiment vector.
(2) User sentiment feature vector calculation.
Definition 1 Sentiment Feature Vector (SFV). The Sentiment Feature Vector represents the characteristics of a user's sentiment fluctuation. The calculation rules are as follows:
(1) If the number of Microblogs posted by the user is n, the user's temporal sentiment vector is the n-dimensional vector $(s_1, s_2, \ldots, s_n)$, where $s$ denotes a Microblog sentiment value.
(2) If $n = 1$, the user's sentiment feature vector is $(s_1, s_1, s_1)$.
(3) If $n = 2$, the user's sentiment feature vector is $(s_1, (s_1 + s_2)/2, s_2)$.
(4) If $n = 3$, the user's sentiment feature vector is $(s_1, s_2, s_3)$.
(5) If $n \ge 4$, the user's sentiment feature vector is $(s_1, (s_2 + s_3 + \cdots + s_{n-1})/(n-2), s_n)$.
These rules are used throughout this paper to calculate users' sentiment feature vectors.
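The rules above translate directly into a small function (a plain transcription of the cases; the function and variable names are ours):

```python
# Transcription of the sentiment-feature-vector rules above: map an
# n-dimensional temporal sentiment vector (s_1, ..., s_n) to the
# 3-dimensional feature vector (first, middle average, last).
def sentiment_feature_vector(s):
    n = len(s)
    if n == 1:
        return (s[0], s[0], s[0])
    if n == 2:
        return (s[0], (s[0] + s[1]) / 2, s[1])
    if n == 3:
        return (s[0], s[1], s[2])
    # n >= 4: average the n-2 middle sentiment values
    return (s[0], sum(s[1:-1]) / (n - 2), s[-1])

print(sentiment_feature_vector([2, 6, -4, 8]))  # (2, 1.0, 8)
```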
3 User Group Classification
In this section, a k-means clustering algorithm is used to classify user groups under hot topics based on the users' sentiment feature vectors. This section contains two parts: 1) selection of k; 2) user classification.
3.1 Selection of k in the K-means Algorithm
The k-means clustering algorithm is extremely sensitive to the initial value of k, and the choice of k determines the quality of the clustering result. In this paper, the elbow method is used to determine the initial value of k through the sum of squared errors (SSE), which measures the clustering error over all samples and thus the quality of the clustering result. The curve of SSE against k has the shape of an elbow, and the k at the elbow is taken as the number of clusters. The calculation formula is as follows:

$$\mathrm{SSE} = \sum_{i=1}^{k} \sum_{x \in c_i} \lvert x - \bar{x}_i \rvert^2 \tag{1}$$

where $c_i$ denotes the $i$th cluster, $x$ a sample point in cluster $c_i$, and $\bar{x}_i$ the mean of all sample points in $c_i$.
3.2 K-means Clustering Algorithm for User Classification
The k-means clustering algorithm based on the user's sentiment feature vector is given as Algorithm 1.
The calculated user sentiment feature vectors and the number of clusters determined by the elbow method are the inputs of Algorithm 1. The distance from a sample to a centroid is measured with the Euclidean distance:

$$\mathrm{dist}(X, Y) = \sqrt{\sum_{i=1}^{n} (x_i - y_i)^2} \tag{2}$$

where $X = (x_1, x_2, \ldots, x_n)$ and $Y = (y_1, y_2, \ldots, y_n)$. The cluster center is computed as:

$$a_i = \frac{1}{|c_i|} \sum_{x \in c_i} x \tag{3}$$

where $a_i$ is the cluster center vector of cluster $c_i$.
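A minimal pure-Python sketch of this section, assuming toy 3-D sentiment feature vectors and a simple deterministic initialization (the paper does not specify its initialization):

```python
import math

# Sketch of Sect. 3: k-means over 3-D sentiment feature vectors with the
# Euclidean distance of Eq. (2) and the mean centroids of Eq. (3), plus
# the SSE of Eq. (1) that the elbow method inspects to choose k.
def dist(x, y):
    return math.sqrt(sum((a - b) ** 2 for a, b in zip(x, y)))

def kmeans(points, k, iters=100):
    centers = list(points[:k])  # simple deterministic initialization (assumed)
    clusters = [[] for _ in range(k)]
    for _ in range(iters):
        clusters = [[] for _ in range(k)]
        for p in points:  # assign each point to its nearest centroid
            clusters[min(range(k), key=lambda i: dist(p, centers[i]))].append(p)
        new_centers = [
            tuple(sum(coord) / len(cl) for coord in zip(*cl)) if cl else centers[i]
            for i, cl in enumerate(clusters)  # Eq. (3): cluster mean
        ]
        if new_centers == centers:
            break
        centers = new_centers
    return centers, clusters

def sse(centers, clusters):  # Eq. (1): total within-cluster squared error
    return sum(dist(p, c) ** 2 for c, cl in zip(centers, clusters) for p in cl)

# two well-separated toy groups of sentiment feature vectors
pts = [(1, 1, 1), (1.1, 0.9, 1), (0.9, 1, 1.1), (-1, -1, -1), (-1.1, -0.9, -1)]
centers, clusters = kmeans(pts, 2)
print(sorted(len(cl) for cl in clusters))         # cluster sizes: [2, 3]
sses = [sse(*kmeans(pts, k)) for k in (1, 2, 3)]  # SSE falls as k grows
```

Plotting `sses` against k gives the elbow curve of Fig. 1: SSE drops sharply up to the true number of groups and flattens afterwards.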
4 Experiment and Analysis of Results
In this section, we give the experimental methods, results and analysis.
4.1 Experimental Data
The data required for the experiment is crawled from Microblog. The crawled information includes the user id and the release time of each user's Microblog comment. We crawled 9189 Microblog comments under the topic "#考研应不应该压分#" ("Should the postgraduate entrance exam score be lower than the real score"), covering 3139 users. Four experimenters were randomly selected to manually label the sentiment of the comments under the Microblog topic, and the labels were statistically aggregated.
4.2 Experimental Data and Evaluation Indicators
The Davies–Bouldin Index (DBI) is used as the evaluation method. DBI is calculated with formulas (4) and (5):

$$\mathrm{avg}(c) = \frac{2}{|c|(|c|-1)} \sum_{1 \le i < j \le |c|} \mathrm{dist}(x_i, x_j) \tag{4}$$

$$\mathrm{DBI} = \frac{1}{k} \sum_{i=1}^{k} \max_{j \ne i} \left( \frac{\mathrm{avg}(c_i) + \mathrm{avg}(c_j)}{\mathrm{dist}(\mu_i, \mu_j)} \right) \tag{5}$$

where $\mu_i$, $\mu_j$ are the center points of clusters $c_i$, $c_j$ respectively, and $\mathrm{avg}(c)$ is the average distance between samples in cluster $c$. A lower DBI indicates a better clustering.
4.3 Experimental Methods
In order to verify the effect of the user group classification model based on sentiment analysis of Microblog comments, user comments were collected from a Microblog topic for the experiment. The specific experimental steps are as follows:
(1) Get Microblog comment data. Collect Microblog comments under a Microblog topic, denoise them, and filter out the comments suitable for sentiment analysis.
(2) Microblog comment sentiment annotation. Manually annotate the sentiment value of the Microblog comments.
(3) Calculate the user sentiment feature vector. Use the method proposed in this paper to calculate each user's temporal sentiment vector and sentiment feature vector.
(4) User group classification. Apply the k-means clustering algorithm to the users' sentiment feature vectors to classify the user group.
4.4 Experimental Results and Analysis
According to the above experimental steps, the data was analyzed and tested. The experimental results are shown in Figs. 1 and 2.
Fig. 1. Relationship between SSE and k
Fig. 2. User classification results
When classifying users with the k-means method, the elbow method is used to determine k. Figure 1 shows the relationship between SSE and the number of clusters; the elbow appears at 5, so k is set to 5 in this paper. Figure 2 shows the number of users in each category when the users are divided into five groups. Evaluated with formulas (4) and (5), the DBI is 0.169, which shows that the user classification model proposed in this paper provides accurate classification results.
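The DBI evaluation of Eqs. (4)–(5) can be computed as follows; the two toy clusters below are hypothetical and unrelated to the paper's Microblog data:

```python
import math
from itertools import combinations

# Sketch of the DBI evaluation of Eqs. (4)-(5) on a toy clustering.
def dist(x, y):
    return math.sqrt(sum((a - b) ** 2 for a, b in zip(x, y)))

def avg_intra(cluster):  # Eq. (4): average pairwise distance within a cluster
    c = len(cluster)
    if c < 2:
        return 0.0
    return 2.0 / (c * (c - 1)) * sum(dist(x, y) for x, y in combinations(cluster, 2))

def dbi(clusters, centers):  # Eq. (5): lower is better
    k = len(clusters)
    total = 0.0
    for i in range(k):
        total += max((avg_intra(clusters[i]) + avg_intra(clusters[j]))
                     / dist(centers[i], centers[j])
                     for j in range(k) if j != i)
    return total / k

# two tight, well-separated toy clusters -> DBI close to 0
toy_clusters = [[(0, 0, 0), (0.1, 0, 0)], [(5, 5, 5), (5, 5.1, 5)]]
toy_centers = [(0.05, 0, 0), (5, 5.05, 5)]
print(round(dbi(toy_clusters, toy_centers), 4))
```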
5 Conclusions
The accuracy of user classification can be improved by using the user sentiment feature vector as the classification feature. Future work will continue to study the user temporal sentiment vector and the user sentiment feature vector.
Acknowledgement. This research work was supported in part by the 2018 Cultivation Project of Top Talent in Anhui Colleges and Universities (Grant No. gxbjZD15), and in part by the 2019 Anhui Provincial Natural Science Foundation Project (Grant No. 1908085MF189).
References 1. Zhang, S.X., Wang, Y., Zhang, S.Y., Zhu, G.L.: Building associated semantic representation model for the ultra-short Microblog text jumping in big data. Cluster Comput. 19(3), 1399– 1410 (2016) 2. Islam, M.Z., Liu, J.X., Li, J.Y., Liu, L., Kang W.: A semantics aware random forest for text classification. In: ACM, pp. 1061–1071 (2019) 3. Maria, G., Manolis, G.V., Konstantinos, D., Athena, V., George, S., Konstantinos, C.C.: Sentiment analysis leveraging emotions and word embeddings. Expert Syst. Appl. 69, 214– 224 (2017) 4. Wu, F.Z., Huang, Y.F., Song, Y.Q., Liu, S.X.: Towards building a high-quality Microblogspecific Chinese sentiment lexicon. Decis. Support Syst. 87, 39–49 (2016)
5. Megha, R., Aditya, M., Daksh, V., Rachita, S., Sarthak, M.: Sentiment analysis of tweets using machine learning approach. In: Eleventh International Conference on Contemporary Computing, pp. 1–3 (2018) 6. Chen, T., Xu, R.F., He, Y.L.: Improving sentiment analysis via sentence type classification using BiLSTM-CRF and CNN. Expert Syst. Appl. 72, 221–230 (2017)
University Education Resource Sharing Based on Blockchain and IPFS Nan Meng and Shunxiang Zhang(&) School of Computer Science and Engineering, Anhui University of Science and Technology, Huainan 232001, China [email protected], [email protected]
Abstract. A large number of educational resources are stored in major university databases, but these resources are not fully shared, resulting in waste. To solve this problem, a method of university education resource sharing based on blockchain and IPFS is proposed in this paper. Smart contract and digital signature technology are used to authenticate the identities of universities and establish a university alliance chain. IPFS storage is designed in combination with blockchain technology: university education resources are kept in distributed IPFS storage, while their hash addresses and basic information are stored on the blockchain, achieving the protection and sharing of university education resources. The proposed scheme not only improves the storage security of information resources but also enhances the collaborative processing of educational resources among universities, which is conducive to the sharing of university educational resources.
IPFS Resource sharing Digital signature
1 Introduction In 2015, the State Council issued a series of “Internet +” related policies to encourage educational enterprises and IT companies to provide online education services, develop digital educational resources and explore new models of online education services [1]. On January 29, 2020, the National Open University combined with many universities to quickly integrate quality curriculum and provide a large number of free courses for scholars to learn. While the sharing of resources in existing major universities has brought convenience to online education, the following major problems still exist. At present, the educational resources of most universities are independently managed and operated by universities. Due to the inconsistency of the construction of standard management systems, it is difficult to achieve the effective sharing of educational resources and resulting in the waste of educational resources. The blockchain technology is a distributed ledger technology or ledger system [2, 3], and it uses a decentralized infrastructure [4] and distributed storage consensus technology [5] to securely store transaction data. IPFS (Inter Planetary File System) is a peer-to-peer distributed file system, which has the advantages of saving network storage space, fast download speed, and more secure network [6]. The combination of two technologies is expected to solve the problem of poor sharing of educational © The Editor(s) (if applicable) and The Author(s), under exclusive license to Springer Nature Singapore Pte Ltd. 2021 M. Atiquzzaman et al. (Eds.): BDCPS 2020, AISC 1303, pp. 1808–1813, 2021. https://doi.org/10.1007/978-981-33-4572-0_270
resources in existing universities [7]. Based on blockchain, IPFS and digital signature technology, this paper proposes the following solutions.
(1) Build an alliance chain among universities, review join requests through a consensus mechanism, and jointly manage and maintain the stable operation of the blockchain.
(2) Encrypt and store university education resources in a private IPFS cluster, and use smart contracts on the blockchain to store file hashes and other basic information, realizing the protection, verification and sharing of resource information.
The rest of this paper is organized as follows: Sect. 2 introduces the system design, including the system architecture, the smart contract architecture and the system storage design; Sect. 3 describes the specific design of the method; Sect. 4 analyzes the performance of the method; and Sect. 5 summarizes the work of this paper.
2 System Design
2.1 The Design of System Architecture
As shown in Fig. 1, the method of university education resource sharing provided by this article is composed of four parts: university resource base, alliance blockchain, private IPFS cluster, and system decentralized application(DApp). Among them, the university is the authoritative node of the alliance chain. It participates in the services of uploading, protecting and sharing educational resources. The alliance blockchain adopts the Ethereum based on the PBFT consensus algorithm. They realizes functions such as node joining and education resource sharing through smart contracts. The private IPFS cluster stores the educational resources in the university resource base, and verifies the identity of the nodes through the swarm-key. The data security is guaranteed by technologies such as DHT and Merkle DAG. The system decentralized application (DApp) provides the system with smart contracts and IPFS interface calls, and does not itself participate in data storage. University Alliance
Fig. 1. System architecture diagram.
N. Meng and S. Zhang

2.2 The Design of Smart Contract

The smart contracts in the alliance chain include the Resource Sharing Management Contract (RSMC), the College Management Contract (CMC) and the Resource Management Contract (RMC). As a global contract, the RSMC records the basic information of all university resource bases in the alliance chain together with their related CMCs and RMCs. When the RSMC is created, the basic information of the first university resource base and its related contracts is created as well. The CMC decides on requests from new colleges to join the alliance through democratic voting. The RMC implements functions such as the uploading and downloading of educational resources.

2.3 Storage Design of System
IPFS (InterPlanetary File System) is a distributed storage network. All nodes in the IPFS network form a distributed file system, which has the advantages of fast access, tamper resistance, low data redundancy and so on. Blockchain itself is not suitable for storing a large number of files, and the "IPFS + blockchain" combination solves this problem: each node of the blockchain does not need to store hundreds of megabytes of files. The files are stored off the chain, and only their hash values are stored on the chain.
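The off-chain/on-chain split described above can be sketched in a few lines of Python. This is an illustrative stand-in, not the paper's implementation: `OffChainStore` mimics the content-addressed behavior of the private IPFS cluster with SHA-256, and `ledger` stands in for the records a smart contract would keep on the alliance chain.

```python
import hashlib

class OffChainStore:
    """Stand-in for the private IPFS cluster: content-addressed storage."""
    def __init__(self):
        self._blobs = {}

    def add(self, data: bytes) -> str:
        # IPFS derives a CID from the content; SHA-256 plays that role here.
        cid = hashlib.sha256(data).hexdigest()
        self._blobs[cid] = data
        return cid

    def get(self, cid: str) -> bytes:
        return self._blobs[cid]

ledger = []  # stand-in for on-chain records kept by a smart contract

def upload_resource(store: OffChainStore, name: str, data: bytes) -> dict:
    cid = store.add(data)                      # the file itself stays off-chain
    record = {"name": name, "ipfs_addr": cid,  # only the hash goes on-chain
              "hash": cid}
    ledger.append(record)
    return record

def verify_resource(store: OffChainStore, record: dict) -> bool:
    # Integrity check: recompute the hash of the off-chain file and
    # compare it with the value recorded on-chain.
    data = store.get(record["ipfs_addr"])
    return hashlib.sha256(data).hexdigest() == record["hash"]

store = OffChainStore()
rec = upload_resource(store, "lecture01.pdf", b"course material bytes")
print(verify_resource(store, rec))  # True
```

Because the on-chain record holds only a fixed-size hash, any tampering with the off-chain file is detected when the hash is recomputed.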
3 Method Design

3.1 Joining Alliance Chain Method of Nodes

Each university that wants to join the alliance chain sends an application to all nodes of the alliance chain; once more than half of the alliance members consent, registration is completed. The method uses the PBFT consensus algorithm, which does not require all nodes to be online: as long as more than two-thirds of the nodes are online, the entire system can work normally, which effectively improves the efficiency of consensus (Fig. 2).
Fig. 2. Flow chart of node joining.
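The two thresholds used in the joining procedure can be captured as simple predicates. This is a sketch; the function names are illustrative, not part of the paper's system.

```python
def admission_approved(yes_votes: int, members: int) -> bool:
    """A new university joins once more than half of the members consent."""
    return 2 * yes_votes > members

def network_operational(online: int, total: int) -> bool:
    """The PBFT-based chain keeps working while more than 2/3 of nodes are online."""
    return 3 * online > 2 * total

print(admission_approved(3, 5))     # True: 3 of 5 is more than half
print(network_operational(7, 10))   # True: 7 of 10 is more than two-thirds
print(network_operational(6, 10))   # False
```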
3.2 Upload and Protection of University Educational Resources
The upload and protection of educational resources in universities refers to storing educational resources through the cooperation of the alliance chain and the private IPFS cluster so that their content is stored safely, thereby protecting the resources. Uploading an educational resource means saving the encrypted resource on the private IPFS cluster and storing its characteristic values in the blockchain platform to ensure that it is permanently stored and cannot be tampered with.

Algorithm 1. Upload and Protection of Educational Resources in Universities
Input: resource, resource-Name, college-Name
Output: void
1: Procedure SaveEduResource(resource, resource-Name, college-Name)
2:   system executes:
3:     generate a random keyPair GSK<APK, ASK>
4:     Encrypted-Resources ← AES(Resources, APK, ASK)
5:     signature ← extract(GSK<APK, ASK>)
6:     Resources-Obj ← combine(Resources, resource-Name, college-Name)
7:   contract executes:
8:     if RSMC(signature) = true then
9:       RMC(Resources-Name, Ipfs-Addr, Hash, Time)
10:    end if
11: end Procedure
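Algorithm 1 can be mocked end-to-end with standard-library stand-ins. This is a simplified sketch, not the authors' implementation: HMAC-SHA256 replaces the real AES encryption and signature scheme, and the RSMC/RMC contract calls are modeled as plain Python functions.

```python
import hashlib
import hmac
import os
import time

chain = []  # records written on-chain by the "RMC" stand-in

def rsmc_verify(signature: bytes, key: bytes, payload: bytes) -> bool:
    """Stand-in for the RSMC signature check (HMAC instead of a real scheme)."""
    expected = hmac.new(key, payload, hashlib.sha256).digest()
    return hmac.compare_digest(signature, expected)

def save_edu_resource(resource: bytes, resource_name: str, college_name: str):
    key = os.urandom(32)                        # "generate a random keyPair"
    payload = college_name.encode() + resource  # "combine(...)"
    signature = hmac.new(key, payload, hashlib.sha256).digest()
    file_hash = hashlib.sha256(resource).hexdigest()
    if rsmc_verify(signature, key, payload):    # "contract executes"
        record = {"name": resource_name,
                  "ipfs_addr": file_hash,       # content hash stands in for the IPFS CID
                  "hash": file_hash,
                  "time": time.time()}
        chain.append(record)
        return record
    return None

rec = save_edu_resource(b"lecture notes", "notes.pdf", "UniversityA")
print(rec["name"])  # notes.pdf
```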
3.3 Download and Protection of University Educational Resources

When users download or use educational resources, the system needs to verify whether they have permission to do so. The system calls the smart contract to query the relevant information stored in the blockchain network and IPFS and automatically performs information matching. If the information matches successfully, the user is allowed to download or use the resource; otherwise, the user cannot. This completes the download or use of digital education resources (Fig. 3).
Fig. 3. Resource sharing and download sequence diagram (participants: university, user, alliance chain, IPFS; Step 1: query the resource address with an identity check; Step 2: return the IPFS address of the encrypted document; Step 3: obtain the decryption key; Step 4: obtain the resource).
4 Analysis of System Performance

System requirements analysis covers two aspects: function and performance. Performance requirement analysis mainly considers whether the system functions can reach their targets during operation; for a complete system it is very important.

(1) Usability index analysis: For users, operation is the same as in a centralized system and matches their operating habits. Teachers and students in universities have a basic level of computer skills, which meets the system's ease-of-use requirement.
(2) Scalability index analysis: The system is based on a P2P network architecture, which has good scalability. Adding new functional modules does not greatly affect the original system architecture and modules, so the system's functions can be expanded as actual demand changes.
(3) System robustness index analysis: The P2P architecture is inherently attack-resistant and highly fault-tolerant. Since services are distributed among the nodes, damage to some nodes or parts of the network has little impact on the rest.
(4) System security index analysis: Blockchain technology links blocks into a chain for data storage and uses cryptographic mechanisms to secure data storage and transmission; encryption algorithms and consensus algorithms ensure that data cannot be tampered with or forged. These security features ensure the safety of the system.
5 Conclusions

In this paper, we propose a sharing scheme for university education resources based on blockchain, IPFS, distributed storage, smart contracts and digital signatures. We build an alliance blockchain among universities and use IPFS to store massive resources, which ensures safe and reliable data storage, reduces storage costs and greatly improves file access performance. The proposed system can be applied to learning management systems: universities upload digital education resources, and users can download a resource after authentication, so that college education resources are fully shared.

Acknowledgement. This research work was supported in part by the 2018 Cultivation Project of Top Talent in Anhui Colleges and Universities (Grant No. gxbjZD15), and in part by the 2019 Anhui Provincial Natural Science Foundation Project (Grant No. 1908085MF189).
References
1. Li, H.: Research on online education ecosystem and its evolution path. China Distance Educ. (01), 270, 62–70 (2017)
2. Ton, C.L., Lei, X., Lin, C., Weidong, S.: Proving conditional termination for smart contracts. In: Proceedings of the 2nd ACM Workshop on Blockchains, Cryptocurrencies, and Contracts (BCC 2018), pp. 57–59. Association for Computing Machinery, New York (2018)
3. Maoning, W., Meijiao, D., Jianming, Z.: Research on the security criteria of hash functions in the blockchain. In: Proceedings of the 2nd ACM Workshop on Blockchains, Cryptocurrencies, and Contracts (BCC 2018), pp. 47–55. Association for Computing Machinery, New York (2018)
4. Leila, I., Heba, H., Mahra, A., Manayer, A., Noura, A.: Towards a blockchain deployment at UAE University: performance evaluation and blockchain taxonomy. In: Proceedings of the 2019 International Conference on Blockchain Technology (ICBCT 2019), pp. 30–38. Association for Computing Machinery, New York (2019)
5. Santiago, B., Matteo, M., Guillermo, P., Elisa, G.B.: Towards scalable blockchain analysis. In: Proceedings of the 2nd International Workshop on Emerging Trends in Software Engineering for Blockchain (WETSEB 2019), pp. 1–7. IEEE Press (2019)
6. Antonio, T.F., Samer, H., Juan, P.: Open peer-to-peer systems over blockchain and IPFS: an agent oriented framework. In: Proceedings of the 1st Workshop on Cryptocurrencies and Blockchains for Distributed Systems (CryBlock 2018), pp. 19–24. Association for Computing Machinery, New York (2018)
7. Joseph, K.L.: A blockchain-enabled society. In: Proceedings of the 2019 ACM International Symposium on Blockchain and Secure Critical Infrastructure (BSCI 2019), pp. 472–480. Association for Computing Machinery, New York (2019)
Examining the Relationship Between Foreigners in China and Food Delicacies Using Multiple Linear Regression Analysis (sklearn)

Ernest Asimeng, Shunxiang Zhang(&), Asare Esther, and Mengyao Li
School of Computer Science and Engineering, Anhui University of Science and Technology, Huainan 232001, China
[email protected], [email protected], [email protected]
Abstract. The availability of specific foods and the nutrients in foods have a great impact on everyone's life, to the extent of being among the determining factors of an individual's long-term stay in a foreign country. In this paper, we examine the relationship between the long-term stay of foreigners in China and their food delicacies. For the analysis of the data (from administered questionnaires and interviews), a model was built in Python with sklearn using multiple regression, and the coefficients, the intercept, and the adjusted R-square were examined. The research shows a correlation between foreigners living in China and their food delicacies, indicating that though foreigners consider the nutritional and health benefits of a cuisine, its taste is prioritized, making their living conditions somewhat difficult when choosing a nutritious cuisine.

Keywords: Intercept · R-square · Dummy variables · Data analysis · Python programming · Multiple linear regression
1 Introduction

Due to globalization, people travel around the world for various reasons: school, business, leisure, etc. Though there is much to consider when traveling to, or deciding to stay in, a country other than one's home country, whether long-term or short-term, one of the most common considerations is food. Nutritious food does not depend solely on how delicious the food is; it also involves factors such as health benefits and nutritional components. Research shows that about 600,000 expats lived in China in 2018 alone, with most foreigners living in Guangdong, Shanghai, and Beijing. The same research stated that 18% of expats are in China on the token of their employers and 17% for adventure; the other 65% may be in China for educational or other purposes. It also stated that 76% of expats indicated general satisfaction with living in China, while 1% were extremely unhappy. Among the potential drawbacks foreigners mentioned about staying in China was personal health; most expats are concerned about their health in China. One cannot talk about health without mentioning food [3, 4]; hence the goal of this paper: to examine the relationship between the stay of foreigners in China and Chinese indigenous food.

Food plays a significant role in our daily lives. Especially in China, people usually show respect to foreigners or treat them to food to make new friends, establish relationships, or talk business. Nutritious food can directly influence one's biological function throughout life, and enjoying it influences many other factors and areas of life (e.g., it builds our mental faculty, builds the physique, etc.).

© The Editor(s) (if applicable) and The Author(s), under exclusive license to Springer Nature Singapore Pte Ltd. 2021
M. Atiquzzaman et al. (Eds.): BDCPS 2020, AISC 1303, pp. 1814–1823, 2021. https://doi.org/10.1007/978-981-33-4572-0_271
2 Related Work

In this section, we discuss studies with goals similar to the one this paper seeks to achieve. "College Students and Eating Habits: A Study Using an Ecological Model for Healthy Behavior" looked into the eating habits of college students in order to understand the dramatic increase in overweight and obesity rates in the United States (US). That research used a qualitative design to analyze the factors that college students in the US perceived as influencing healthy eating behaviors. A group of Cornell University students (n = 35) participated in six semi-structured focus groups, and qualitative software, CAQDAS NVivo 11 Plus, was used to create codes that categorized the group discussions within an ecological model [2]. "International Students' Eating Habits and Food Practices in Colleges and Universities" recognizes the impact of moving from one country to another on international students and researched the role of dietary acculturation in international students' movements. International students' mobility affects their dietary habits, food choices, and physical behavior: as they move from one country to another, they may adapt to the culture of the host country, and this adaptation may affect their food practices and choices and can change the students' eating behaviors. That paper provided a research context and drew out documented issues such as the impact of dietary acculturation on international students, changes in dietary habits and food practices following temporary migration, and health consequences of dietary acculturation [1, 7].
Similarly, in this paper we seek to find the relationship between the stay of foreigners in China and Chinese indigenous food, just as the papers above examined, respectively, the increase in overweight and obesity rates among college students in the United States and international students' eating habits and food practices in colleges and universities. The differences between this paper and the two mentioned above lie in the description of the population, the sample size, the methodology and the goals each set out to meet. Though regression analysis is also carried out in this paper, Python is used to program a model based on the answers of the questionnaire respondents.
E. Asimeng et al.

3 Subjects

Data was collected online from 242 foreigners in China from June to November 2019; 76.03% were male (184 respondents) and the rest female (23.97%). The respondents were mainly between 18 and 35 years old, with only a few outside this age group. Most of the population were students (225); only two were tourists.

3.1 Methodology and Data Analysis

This is a descriptive study analyzing the significance of the three independent variables in the model (health benefit, nutritional facts, and taste) for the stay of foreigners in China [5]. First, a survey and face-to-face interviews were carried out to collect data randomly from foreigners (students, workers, tourists, and business entrepreneurs) living in China. Secondly, the collected data was preprocessed to help us build a good model using the Python programming language. To build a multiple regression model in Python, we import the libraries, import the dataset, encode the categorical data, and analyze the results after execution. Below is a simple graphical image of the step-by-step process and methods used in this research.
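The "encode the categorical data" step can be sketched in pure Python (the category list mirrors the cuisine types used later; a library such as pandas' get_dummies would produce the same 0/1 columns):

```python
cuisines = ["Chinese", "Local", "Western", "Other"]   # assumed category order
responses = ["Chinese", "Local", "Western", "Other"]  # one sample answer per row

# One 0/1 indicator column per category: each categorical answer becomes
# a row with a single 1 in the column of the chosen cuisine.
rows = [[1 if answer == c else 0 for c in cuisines] for answer in responses]
for i, row in enumerate(rows):
    print(i, row)
# 0 [1, 0, 0, 0]
# 1 [0, 1, 0, 0]
# 2 [0, 0, 1, 0]
# 3 [0, 0, 0, 1]
```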
4 Multiple Linear Regression

Vital issues that arise when carrying out multiple linear regression analysis are discussed in the next sections, together with issues about programming in Python: the necessary theory of multiple linear regression, model building, the underlying assumptions, creating an algorithm by programming in Python with the survey data, and the interpretation of results.

4.1 Theory of Multiple Linear Regression
Multiple regression analysis is about finding the best-fitting model. It is a flexible method of data analysis, appropriate whenever a quantitative variable (the dependent or criterion variable) is related to other factors (expressed as independent or predictor variables). In this research, multiple linear regression analysis is used to build the best model to examine the relationship between the stay of foreigners in China and the type of cuisine they consume. In multiple linear regression there are 'n' explanatory variables, and the relationship between the dependent variable (cuisine) and the independent or explanatory variables (health benefit, nutritional facts, and taste) is represented by the following equation [6]:

y = b0 + b1x1 + b2x2 + … + bkxk + ε   (1)
where 'y' is the dependent variable, 'b0' is the intercept, 'x1' to 'xk' are the independent variables, 'b1' to 'bk' are the corresponding coefficients and ε is the error. Here this means: nutritious cuisine = b0 + b1·health + b2·nutrient + b3·taste.

4.2 Assumptions
In multiple regression it is crucial not to violate any of the assumptions; checking them determines whether including additional predictor variables leads to better prediction of the outcome variable. In this research, two assumptions are outlined.

Linearity: In multiple linear regression, the relationship between the independent and dependent variables needs to be linear, which is why multiple linear regression is sensitive to outliers. If the relationship is not linear, the data should not be used before being transformed appropriately. Mathematically the model is expressed as b0 + b1x1 + b2x2 + … + bkxk + ε, where the 'b' terms are the coefficients, the 'x' terms are the independent variables, and ε is the error [8].

Multicollinearity: This is observed when two or more variables are highly correlated with each other. Mathematically, the assumption is written as ρ(xi, xj) ≠ 1 for all i, j with i ≠ j. Perfect multicollinearity is a big problem for a regression model because the coefficients will be wrongly estimated; if one variable can represent another, there is no reason to use both. Imperfect multicollinearity also violates the assumption and harms the model. To fix it, one variable has to be dropped, leaving N − 1 dummies for N categories. When encoding our categorical variables, we observed that one of our dummy variables, 'OTHERS', had to be dropped to fit our model properly. As shown in Fig. 1, in index '0' Chinese cuisine is represented by '1', local cuisine by '0', and western cuisine by '0'; in index '1', Chinese cuisine is '0', local cuisine '1' and western cuisine '0'; in index '2', Chinese cuisine is '0', local cuisine '0' and western cuisine '1'; and in the last index all the cuisines are zero, which means the dummy variable trap has been taken care of [8].

4.3 Dummy Variables
A dummy variable is a stand-in or substitute. In regression analysis, dummy variables are used to include categorical data in the regression model. Regression analysis does not use only numerical data; it may also involve variables such as 'type of cuisine'. Since regressions are based on mathematical functions, categorical data (cuisine) cannot enter the dataset directly. The dependent variable contains observations that cannot be quantified, so they were converted into dummy variables: the categories are imitated with the numbers 1 and 0. The types of 'cuisine' (Chinese cuisine, local cuisine, western cuisine, other cuisines), the dependent variable, are categorical data, hence the need to code them as numerical variables in order to fit the regression model.

Fig. 1. Representation of categorical data as dummy variables

In this case, the cuisines were coded into 1s and 0s, which means the regression model will yield its best performance because all observations are quantifiable. We created a new column for each of the categorical values and assigned the binary numbers '1' and '0', as shown in Fig. 1. In index '0' we assigned '1' to Chinese cuisine, and the rest of the cuisines were 0s; in index '1' we assigned '1' to local cuisine; in index '2' we assigned '1' to western cuisine; and in the last index we assigned '1' to other cuisines. The categorical data (cuisine) has thus been encoded, and we can now build the model.

4.4 Standardization
Standardization was applied to transform the data to a standard scale before calculating the adjusted R-square and the coefficients. To standardize a variable, the mean is first subtracted from each value, resulting in a mean of zero; the difference between the individual score and the mean is then divided by the standard deviation, resulting in a standard deviation of one. Mathematically: z = (x − μ)/σ, where x is the original variable, μ is the mean of the original variable, and σ is the standard deviation of the original variable.

4.5 Adjusted R-Square
The adjusted R-square is the basis for comparing models: it compares the explanatory power of regression models with two or more independent variables. Every independent variable added to a model increases the R-square (coefficient of determination) and never decreases it, so a model with several predictors returns higher R-square values and may merely seem a better fit, i.e. it may overfit. The adjusted R-square therefore compensates for the addition of variables and only increases if a new predictor enhances the model above what would be obtained by chance [5]. In terms of sums of squares it can be calculated as

Adjusted R² = 1 − (SSres/dfe)/(SStot/dft)   (2)

or, in terms of the sample R-square,

Adjusted R² = 1 − (1 − R²)(N − 1)/(N − P − 1)   (3)

where R² is the sample R-square, P is the number of predictors and N is the total sample size.
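Equation (3) can be checked numerically with a small pure-Python sketch (toy data, one predictor; the variable names are illustrative):

```python
xs = [1.0, 2.0, 3.0, 4.0, 5.0, 6.0]   # toy predictor
ys = [1.2, 1.9, 3.2, 3.8, 5.1, 5.8]   # toy response
N, P = len(xs), 1                     # sample size, number of predictors

mx, my = sum(xs) / N, sum(ys) / N
# Least-squares fit y = b0 + b1*x
b1 = sum((x - mx) * (y - my) for x, y in zip(xs, ys)) / sum((x - mx) ** 2 for x in xs)
b0 = my - b1 * mx

ss_res = sum((y - (b0 + b1 * x)) ** 2 for x, y in zip(xs, ys))
ss_tot = sum((y - my) ** 2 for y in ys)
r2 = 1 - ss_res / ss_tot

# Eq. (3): the adjusted R-square penalizes extra predictors
adj_r2 = 1 - (1 - r2) * (N - 1) / (N - P - 1)
print(adj_r2 < r2)  # True: the adjusted R-square is always smaller
```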
Fig. 2. Calculation of the adjusted R-square in Python (panel B shows the R-square; panel A shows the adjusted R-square)
The figure above illustrates the results of 0.917 and 0.668 (to 3 decimal places) for the R-square and the adjusted R-square. The adjusted R-square is always smaller than the R-square; this penalizes the excessive use of variables. The R-square of 0.917 helps us determine how well the regression model makes predictions and whether it provides a good fit for the existing data. The adjusted R-square of 0.667, as shown in Fig. 2, is used to compare the goodness-of-fit of the regression model: with an adjusted R-square of 0.667, we are 66.7% confident of our model.

4.6 P-Value and the Coefficient
The p-value and the coefficient in regression analysis together tell us which relationships in the model are statistically significant and what the nature of those relationships is. The coefficient describes the mathematical relationship between each independent variable and the dependent variable; the p-values for the coefficients indicate whether these relationships are statistically significant. P-values are one of the best ways to determine whether a variable is redundant, but they provide no information about how useful a variable is. The correlation coefficient, r, is calculated using

r = Cov(X, Y) / √(Var(X) · Var(Y))   (4)

where

Var(y) = Σi (yi − ȳ)² / (n − 1)   (5)

is the variance of y for a sample of size n (the variance of x is defined analogously), and

Cov(x, y) = Σi (xi − x̄)(yi − ȳ) / (n − 1)   (6)

is the covariance of x and y.
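Equations (4)–(6) can be verified with a short pure-Python sketch on toy data (any statistics package would give the same value):

```python
xs = [1.0, 2.0, 3.0, 4.0, 5.0]
ys = [2.0, 4.1, 5.9, 8.2, 9.9]
n = len(xs)
mx, my = sum(xs) / n, sum(ys) / n

var_x = sum((x - mx) ** 2 for x in xs) / (n - 1)                      # Eq. (5) for x
var_y = sum((y - my) ** 2 for y in ys) / (n - 1)                      # Eq. (5)
cov_xy = sum((x - mx) * (y - my) for x, y in zip(xs, ys)) / (n - 1)   # Eq. (6)

r = cov_xy / (var_x * var_y) ** 0.5                                   # Eq. (4)
print(round(r, 4))  # close to 1: strong positive linear correlation
```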
Fig. 3. The p-value and the intercept in Python (panel A shows the coefficients and p-values; panel B shows the intercept)
The intercept in a multiple regression model is the mean of the response when all explanatory variables take the value 0. The intercept of 2.967 (to 3 decimal places) illustrated in Fig. 3(B) indicates how significant our model is. The p-values of 0.433, 0.296, and 0.736 shown in Fig. 3(A) indicate that, if we conduct the test, there is a chance that the null hypothesis stands, so there is no reason to change our mind on the basis of these p-values. Each of these coefficient variables has its own significance level: nutrient and taste, with positive coefficients of 0.009 and 0.033 respectively, have a higher impact on the model, while health, with a negative coefficient of −0.052, has a lower impact, which brings us to feature selection through the standardization of the model. Because our model was standardized, as shown in Fig. 2, we do not need to identify the worst-performing features: a minimal coefficient is automatically penalized. Below is the implementation of multiple regression using Python with sklearn (Figs. 4 and 5).
Algorithm 1: Construction of an algorithm (code) to build a model in Python (Jupyter) with sklearn to examine the long-term stay of foreigners and their indigenous food accessibility.
Input: the libraries, the dataset, encoded categorical data with the dummy-variable trap avoided, a fitted multiple linear regression.
Output: dummy variables, standardized values, adjusted R-square, p-value, coefficients, predicted result.
1: import the libraries
2: df = pd.read_csv('file name')
3: dummies = pd.get_dummies(df.name)
4: dummies.drop('name', axis='columns')
5: x = df['x values']
6: y = df['y value']
7: from sklearn.preprocessing import StandardScaler
8: scaler = StandardScaler()
9: scaler.fit(x)
10: x_scaled = scaler.transform(x)
11: from sklearn.linear_model import LinearRegression
12: reg = LinearRegression()
13: reg.fit(x_scaled, y)
14: regressor = LinearRegression()
15: regressor.fit(x, y)
16: y_pred = regressor.predict(x)
Fig. 4. Multiple regression programing in python
Fig. 5. Representation of predicted result.
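The listing in Fig. 4 can be turned into a runnable sketch on a tiny synthetic dataset (the real questionnaire data is not public, so the column names and numbers here are illustrative assumptions, not the authors' data):

```python
import pandas as pd
from sklearn.linear_model import LinearRegression
from sklearn.preprocessing import StandardScaler

# Tiny synthetic questionnaire (values made up for illustration)
df = pd.DataFrame({
    "health":   [3, 4, 2, 5, 1, 4, 3, 2],
    "nutrient": [4, 3, 2, 5, 2, 4, 3, 1],
    "taste":    [5, 4, 3, 5, 2, 5, 4, 2],
    "cuisine":  ["Chinese", "Local", "Western", "Other",
                 "Chinese", "Local", "Western", "Chinese"],
})

# Encode the categorical target and drop one dummy to avoid the trap
dummies = pd.get_dummies(df.cuisine).drop("Other", axis="columns")
x = df[["health", "nutrient", "taste"]]
y = dummies["Chinese"].astype(float)   # one cuisine indicator as the response

scaler = StandardScaler().fit(x)       # standardize: (x - mean) / std
x_scaled = scaler.transform(x)

regressor = LinearRegression().fit(x_scaled, y)
y_pred = regressor.predict(x_scaled)
print(regressor.intercept_, regressor.coef_)
```

With real data, only the `read_csv` source and the column names would change; the encoding, scaling, and fitting steps stay the same.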
4.7 Prediction of Results

With a model giving good accuracy, we obtained predicted results of −147.507, −34.317, −46.580, and 27.870 for Chinese cuisine, local cuisine, western food, and other cuisines, respectively. These results were achieved by building on the models for examining foreigners' stay based on their indigenous food accessibility.

Firstly, the predicted result for Chinese cuisine shows that most foreigners living in China consume more Chinese cuisine than any other cuisine, yet they consider it not to be nutritious; the reason is that most foreigners take the price and taste of food into consideration without considering the health benefits and nutritional facts. Secondly, the predicted result for local cuisine shows that most foreigners consider their local cuisine more nutritious than any other cuisine, but they consume less of it because consuming one's local cuisine is costly compared with Chinese cuisine in China. On the other hand, the predicted result for western food shows that few foreigners prefer to consume western food and that they consider it not to be nutritious, because most foreigners think their local cuisine is more nutritious than any other cuisine. Lastly, the predicted result for other cuisines indicates that a few foreigners enjoy other cuisines, see them as more nutritious, and consume them most in China.
5 Conclusion

In this study, a model for examining the long-term stay of foreigners and their indigenous food accessibility was built using only three variables, namely health, nutrients, and taste, to predict the cuisine enjoyed by foreigners in China. The assumptions that helped us build a good multiple regression model were clearly outlined. Our model of cuisine for foreigners is highly significant: the data was first standardized, yielding a positive intercept of 2.97 and coefficients of −0.052, 0.009, and 0.033 for health, nutritional facts, and taste, respectively. The model is also significant on the adjusted R-square criterion, since the adjusted R-square (0.668) is smaller than the R-square (0.917). The positive coefficients for nutrients and taste indicate that 66.8% of foreigners will be influenced positively in their stay because they can access and enjoy food while staying in China, while the negative coefficient for health shows that foreigners depend on the taste of a cuisine when choosing a nutritious cuisine, which has a negative influence on their stay. The predicted results show that most foreigners consume more Chinese cuisine than any other cuisine.

Acknowledgement. This research work was supported in part by the 2019 Anhui Provincial Natural Science Foundation Project (Grant No. 1908085MF189) and in part by the 2018 Cultivation Project of Top Talent in Anhui Colleges and Universities (Grant No. gxbjZD15).
References
1. Alakaam, A.A.H.: International students' eating habits and food practices in colleges and universities. Campus Support Services, Programs, and Policies for International Students, pp. 99–118, May 2016
2. Bargiota, A., Delizona, M., Tsitouras, A., Koukoulis, G.N.: Eating habits and factors affecting food choice of adolescents living in rural areas. Hormones 12(2), 246–253 (2013)
3. Asimeng, E., Zhang, S.: Exploring the differences, similarities and the effect between Chinese and Ghanaian traditional cuisines towards a better appreciation and consumption. In: International Conference on Applications and Techniques in Cyber Intelligence (ATCI 2019), Advances in Intelligent Systems and Computing, vol. 1017. Springer, Cham (2020)
4. Sogari, G., Velez-Argumedo, C., Gómez, M.I., Mora, C.: College students and eating habits: a study using an ecological model for healthy behavior. Nutrients 10(12), 1–16 (2018)
5. Investopedia: R-squared vs. adjusted R-squared: what's the difference? (2019). https://www.investopedia.com/ask/answers/012615/whats-difference-between-rsquared-andadjusted-rsquared.asp. Accessed 09 Feb 2020
6. Eberly, L.E.: Multiple linear regression. Methods Mol. Biol. 404, 165–187 (2007)
7. Loeb, S., Dynarski, S., McFarland, D., Morris, P., Reardon, S., Reber, S.: Descriptive analysis in education: a guide for researchers. U.S. Department of Education, Institute of Education Sciences, National Center for Education Evaluation and Regional Assistance, pp. 1–40, March 2017
8. StatisticsSolutions.com: Assumptions of multiple linear regression, pp. 1–4 (2016)
Author Index
A
An, Dong, 1686
Asimeng, Ernest, 1814

B
Bai, Fan, 501
Bai, Lin, 65
Bai, Luwei, 1691
Bai, Wei, 1221
C Cai, Huali, 1092 Cai, Meifang, 1152 Cai, Xinglei, 671 Cai, Xinlei, 664 Cao, Guozhong, 1056 Cao, Huiteng, 299 Cao, Jianwei, 279 Chang, Wene, 31 Chen, Dongping, 879 Chen, Dongyang, 1315, 1321 Chen, Jian, 760, 1334 Chen, Jianhua, 1604 Chen, Jing, 1714 Chen, Licheng, 1012, 1019, 1027 Chen, Lifang, 1120 Chen, Ruili, 424 Chen, Xiaoyun, 1012, 1019, 1027 Chen, Xingyu, 1172 Chen, Yao, 416 Chen, Yuhong, 1610 Cheng, Zhi, 187 Cui, Baojian, 1610 Cui, Can, 1794 Cui, Guangzhen, 120
Cui, Ping, 1345
Cui, Rongyi, 292
Cui, Xiaolong, 1510
Cui, Yan, 1242, 1247
Cui, Yanlin, 671
Cui, Yinhe, 1242, 1247
D Deng, Kehui, 1352 Deng, Wei, 800, 807 Ding, Yun, 1282 Dong, Kai, 671 Dong, Tianfang, 494 Dong, Wuzhong, 1138 Dong, Zhenqi, 368 Dou, Huili, 1604 Du, Fang, 563 Du, Fei, 563 Du, Hui, 488 Du, Yuxin, 409 Duan, Rongxia, 728 Duan, Xinyu, 355 E Esther, Asare, 1814 F Fan, Yu, 78 Fan, Yuanchun, 1172 Fan, Yun, 1499 Fan, Zhenning, 538 Fan, Zhifu, 172 Fancheng, Fu, 1639 Fang, Chun, 597 Fang, Jian, 1105
© The Editor(s) (if applicable) and The Author(s), under exclusive license to Springer Nature Singapore Pte Ltd. 2021 M. Atiquzzaman et al. (Eds.): BDCPS 2020, AISC 1303, pp. 1825–1830, 2021. https://doi.org/10.1007/978-981-33-4572-0
Fang, Lili, 1505 Feng, Jing, 1012, 1019, 1027 Feng, Qiong, 1787 Feng, Zhipeng, 259 Fu, Guangjie, 459 Fu, Yazhuo, 1616 G Gao, Jiangjing, 787 Gao, Jing, 722 Gao, Junchai, 1113 Gao, Shang, 259 Gao, Zhengping, 1686 Ge, Shuna, 272 Gong, Chen, 361 Gong, LinLin, 1719 Gong, Qiuquan, 1138 Gu, Chenlin, 285 Guan, Xin, 1327 Guan, Yong, 1686 Guo, Chao, 946 Guo, Feng, 25 Guo, Jiayin, 1179 Guo, Lei, 60 Guo, Li, 774 Guo, Long, 656 Guo, Shuangmei, 265 Guo, Tiantai, 1303 Guo, Yang, 1686 Guo, Yuan, 774 Guo, Zhishuai, 722 H Han, Kun, 578 Han, Li-quan, 1413 Han, Wei, 164 Han, Xueyu, 451 Hao, Miao, 1253 He, Feng, 1378 He, Jiawei, 1303 He, Juan, 402 He, Sisi, 1568 He, Xiangzhen, 671 Hong, Huang, 1772 Hou, Aixia, 99 Hou, Fangbo, 864 Hu, Chaoqiang, 677 Hu, Qingyun, 683 Hu, Wenxiu, 1213 Hu, Xiao, 17 Hu, Xiaowen, 1592 Hu, Zhiqiang, 172 Hua, Liang, 17 Huang, Haijun, 1266
Huang, Jian, 926, 932 Huang, Shiqi, 570 Huang, Tao, 279 Huang, Tianyang, 1372 Huang, Wei, 851, 1253 Huang, Yingmin, 677, 683 Huang, Yunhao, 1787, 1794 Huang, Zexia, 963 Huang, Zhihua, 279 Huo, Molin, 60 J Jia, Qi, 1334 Jia, Xiujuan, 837 Jia, Yi-zhuo, 1420 Jiang, Haiyan, 340 Jiang, Jian-xin, 1406 Jiang, Jijiao, 1071 Jiang, Kai, 1397 Jiang, Pinzhu, 1477 Jiang, Xin, 1385 Jiang, Yawei, 1092 Jiang, Zhenzhen, 1523 Jianliang, Guo, 1622 Jiao, Sheng-li, 1725 Jin, Huan, 617 Jin, Rui, 228 Jin, Xiu-li, 987 Jin, Zhao, 1691 K Kang, Wen-bin, 1290 Kong, Jie, 318 Kong, Ming, 1303 Kong, Xiangfei, 751 L Lai, Huijuan, 1191 Lei, Lihua, 1303 Lei, Yuefeng, 1260 Leng, Chunxia, 368 Leng, Fuchen, 402 Li, Aizhen, 547 Li, Da, 25 Li, Dapeng, 1787 Li, Guanghui, 1517 Li, Hualiang, 1228, 1235 Li, Li, 743 Li, Mei, 963 Li, Mengyao, 1814 Li, Puying, 376 Li, Rong, 1730 Li, Shangcong, 1206 Li, Shengquan, 683
Li, Shufang, 1523 Li, Shumeng, 821 Li, Taoan, 894 Li, Wei, 1042, 1105 Li, Wenping, 525 Li, Xin, 1310 Li, Xiufen, 1260 Li, Yang, 1529 Li, Yan-yan, 1406 Li, Yaqian, 1627 Li, Yawei, 1385 Li, Ying, 602 Li, Yongqiang, 1199 Li, ZhenJiang, 994 Li, Zhihong, 1339 Li, Zhuo, 851, 858 Liang, Zheheng, 259 Limei, Song, 1666 Lin, Hongzhen, 1391 Lin, Shan, 938, 946 Lin, Xijun, 259 Lin, Zhiqiang, 1228, 1235 Lin, Zizhi, 1282 Liu, Bo, 285 Liu, Caixi, 459 Liu, Dong, 1787, 1794 Liu, Fuman, 476 Liu, Gang, 326 Liu, Guoyong, 149 Liu, Hao, 517 Liu, Hexuan, 625 Liu, Hong, 1735 Liu, Jiaru, 728 Liu, Jing, 584 Liu, Jun, 1185 Liu, Jusheng, 1385 Liu, Lan, 60 Liu, Lipeng, 794 Liu, Liping, 994 Liu, Lu, 1303 Liu, Shuai, 195, 202 Liu, Wei, 1303 Liu, Weidong, 1540, 1679 Liu, Xiangxiang, 172 Liu, Xiangyu, 689 Liu, Xinyu, 31 Liu, Xuelin, 1633 Liu, Yanbo, 60 Liu, Yang, 1365 Liu, Yi, 1378 Liu, Yihe, 1321
Liu, Ying, 279 Liu, Yong, 632 Liu, Yufan, 858 Liu, Yun, 632 Liu, Yuzhong, 1228, 1235 Liu, Yuzhou, 1012, 1019, 1027 Liu, Zhongfu, 1 Liu, Zhongqiang, 751 Lu, Jiaxin, 1510 Lu, Jie, 172 Lu, Kui, 1365 Lu, Ling, 920 Lu, Ming, 1191 Lu, Quanyi, 602 Lu, Xi, 1397 Lu, Xiaoxue, 113 Lu, Xunlin, 1056 Luan, Jinhua, 52 Luo, Jian-hua, 1420 Luo, Jun, 430 Luo, Xingxian, 787 Luo, Zebin, 1568 Luo, Zhongbao, 901 Lyu, Shiliang, 501 M Ma, Chen, 1510 Ma, Chunmiao, 389 Ma, Hanxu, 1 Ma, Xianyu, 1327 Ma, Xinxin, 1794 Maisuti, Parezhati, 395 Mao, Chunyu, 509, 517 Mei, Qin, 25 Men, Lijuan, 1113 Meng, Bingkun, 389 Meng, Nan, 1808 N Na, Lin, 501 Niu, Yugang, 1084 Niu, Yuxiang, 1064 P Pan, Juan, 383 Pan, Mingjiu, 285 Pan, Yongzhuo, 52 Pang, Xiaoyu, 1221 Pang, Zhaojun, 156 Peng, Lili, 383 Peng, Yongmei, 355
Peng, Yundi, 383 Pu, Jiang, 1297 Pu, Xia, 728 Q Qi, Ding, 1672 Qian, Juan, 1002 Qin, Jingyun, 821 Qin, Tong, 1160, 1166 Qin, Yichen, 1434 Qin, Zhiguang, 38 Qiu, Yiting, 1127 Qu, Chen, 1120 Qu, Lianzhuang, 214 Qu, TianYi, 1166 Qu, Tianyi, 1179
R Ran, Xuejiang, 106 Ruyong, Zhang, 1666
S Sangjin, Zhuoma, 532 Shan, Jun, 285 Shang, Tongfei, 578, 584 Shang, Yanwei, 259 Shen, Anan, 751 Shen, Fei, 1535 Shen, Lifang, 625 Shen, Miao, 134 Shen, Wuqiang, 1132 Shen, Yali, 1228, 1235 Shen, Zhe, 694 Sheng, Li, 1679 Shi, Chunhu, 247 Song, Chao, 1420 Song, Chaoying, 938, 946 Song, Guohua, 555 Song, Ke, 234, 240 Song, Qinghui, 340 Song, Qingjun, 340 Su, Chang, 714 Su, Haiyang, 1540 Su, Kui, 142 Su, Qiang, 538 Sun, Fengjun, 214 Sun, Hao, 1113 Sun, Jianhua, 780 Sun, Jiaxing, 1012, 1019 Sun, Ke, 488 Sun, Lingzhen, 180 Sun, Liping, 707 Sun, Xiaochun, 1610 Sun, Xirui, 901 Sun, Yacan, 305 Sun, Ying, 38 Sun, You, 1740 Sun, Yue, 1339
T Tan, QingWen, 1546 Tan, Xiaofang, 1499 Tan, Yuzao, 1457 Tan, ZhiGang, 1546 Tang, Chunling, 347 Tang, Fenghua, 1464, 1552 Tang, Liangliang, 259 Tang, Ming, 279 Tang, You, 851, 858, 864 Tao, Heng, 1253 Tao, Lei, 1794 Tian, Cong, 714 Tian, Haoyang, 1056 Tian, Wangqiang, 625
W Wan, Liyong, 1273, 1358 Wang, BaoLong, 844 Wang, Bin, 656 Wang, Chao, 106 Wang, Dahua, 555 Wang, Dandan, 1558 Wang, Daodang, 1303 Wang, Dinghui, 430 Wang, Dingsheng, 1049 Wang, Gao-xuan, 1563 Wang, Guiping, 901 Wang, Haiyin, 1646 Wang, Hongfen, 886 Wang, Hongyu, 1315 Wang, Jiao, 1510 Wang, Jiaqi, 1787 Wang, Jiayu, 17 Wang, Jinfeng, 265 Wang, Jizhong, 1444 Wang, Li, 1598 Wang, Lu, 689 Wang, Meng, 894 Wang, Nan, 134 Wang, Qi, 292 Wang, Rongxia, 92 Wang, Ruili, 871 Wang, Tao, 1327 Wang, Tie-ning, 1334 Wang, Xi, 1698 Wang, Xiangjun, 1745 Wang, Xiao-li, 142 Wang, Xiaomei, 208
Wang, Xiaoming, 760 Wang, Xiaozhu, 113 Wang, Xinyue, 482 Wang, Xu, 830 Wang, Yan, 234, 760, 1327 Wang, Yanan, 31 Wang, Yangang, 509 Wang, Ying, 279 Wang, Yongjiang, 851 Wang, Yue, 85 Wang, Yulin, 1751 Wang, Yuying, 334 Wang, Zeyin, 1756 Wang, Zhiguo, 743 Wang, Zhimei, 894 Wei, Na, 547 Wei, XinRan, 1452 Wei, Yong, 1339 Wen, Yongyi, 894 Weng, Manli, 813 Wu, Guangli, 994 Wu, Hanqing, 285 Wu, Jiajie, 1012, 1019 Wu, Jie, 584 Wu, Jiesheng, 1365 Wu, Qiong, 982 Wu, Shuzhou, 1794 Wu, Xia, 610 Wu, Xinyu, 71 Wu, Yanhong, 1766 Wu, Zhongping, 1027 X Xi, Yawen, 52 Xia, Qiongpei, 120 Xia, Weili, 1071 Xia, Wenyue, 1787 Xia, Yuanting, 1213 Xiang, Ming, 1709 Xiang, Yongzhi, 1012, 1019, 1027 Xiao, Chenglin, 1071 Xiao, Jianqiong, 787 Xiao, Linjing, 340 Xiao, Yanqiu, 120 Xiaofang, Tan, 1639 Xie, Hongbin, 52 Xie, Jie, 640 Xie, Linyu, 969 Xie, Wangsong, 221, 1426 Xie, Yu, 459 Xin, Feng, 1658 Xinfu, Wang, 1489 Xing, Liang, 735 Xiong, Tao, 1568
Xu, Baocai, 728 Xu, Cuishan, 677, 683 Xu, Jian, 1568 Xu, Jilei, 340 Xu, Ke, 538 Xu, Peng, 656 Xu, Qian, 1391 Xu, ShuJing, 1703 Xu, Xinke, 1303 Xu, Yiwen, 1568 Xu, Zhicao, 1776 Xue, Zhenghua, 774 Y Yan, Chengmei, 310 Yan, Kechun, 1027 Yan, Liang, 532 Yan, Lixin, 1444 Yan, Qin, 172 Yan, Shi, 142 Yang, Bing, 1253 Yang, Chao, 714 Yang, Chaodeng, 1652 Yang, Chunsong, 1132 Yang, Jingwei, 578, 584 Yang, Jun, 707 Yang, Kai, 1444 Yang, Lu, 389 Yang, Wenbo, 994 Yang, Xianchao, 120 Yang, Xiaodong, 517 Yang, Xiaohui, 1686 Yang, Xiaoming, 914 Yang, Xue, 52 Yang, Yibo, 602 Yang, Yue, 610 Yang, Zhixin, 1228, 1235 Yang, Zhuojuan, 517 Yang, Ziyi, 602 Yao, Jin, 1012, 1019, 1027 Ye, Shuo, 180 Ye, Tianchi, 901 Yin, Han, 1568 Yin, Jing, 45 Yin, Xiyuan, 701 Ying, Miaojing, 31 You, Rentang, 1012, 1019, 1027 Yu, Chang, 1470 Yu, Helong, 858 Yu, Xiu, 1310 Yu, Xulei, 610 Yu, Yang, 9, 1049 Yu, Yingxin, 509 Yu, Yuan, 578
Yu, Zhenfan, 671 Yu, Zhifang, 285 Yuan, Kai, 1138 Yuan, Quan, 1761 Yun, Fan, 1639 Z Zang, Boyu, 956 Zeng, Jijun, 1132 Zhai, Luchen, 590 Zhai, Yujia, 1 Zhang, Anping, 252 Zhang, Bo, 1303 Zhang, Chao, 509 Zhang, Chengmei, 1253 Zhang, Chulei, 127 Zhang, Chunxiao, 751 Zhang, Fengqin, 9 Zhang, Guocai, 38 Zhang, Heqing, 1199 Zhang, Hongjun, 1352 Zhang, Jiameng, 1782 Zhang, Jian, 228, 1523 Zhang, Jiang, 1691 Zhang, Jiansheng, 1113 Zhang, Jinbo, 1132 Zhang, Jinglong, 1378 Zhang, Kaiqiang, 38 Zhang, Kefei, 438 Zhang, Lei, 1098 Zhang, Li, 751 Zhang, Lifang, 1691 Zhang, Lihong, 1145 Zhang, Linghua, 563 Zhang, Linhan, 938 Zhang, Lu, 1483 Zhang, Meiling, 1574 Zhang, Mengyao, 1801 Zhang, Ou, 459 Zhang, Peng, 767 Zhang, Qi, 1691 Zhang, Qingfang, 886 Zhang, Shuai, 625 Zhang, Shunxiang, 1808, 1814 Zhang, Sijia, 1385 Zhang, Song, 1470
Zhang, Tong, 547 Zhang, Wei, 1160, 1221 Zhang, Weiwei, 1494 Zhang, Xiaofei, 1012, 1019, 1027 Zhang, Xinmin, 538 Zhang, Yanhong, 1580 Zhang, Yao, 214 Zhang, Yijing, 1206 Zhang, Yue, 1413 Zhang, Yuhua, 648 Zhang, Yunrong, 272 Zhang, Zhe, 60 Zhang, Zhen, 120 Zhang, Zhigang, 459 Zhang, Zhiliang, 494 Zhao, Changwei, 538 Zhao, Jianyin, 760 Zhao, Jing, 743 Zhao, Jun, 1303 Zhao, ShuZheng, 1098 Zhao, Weiming, 1586 Zhao, Wenbo, 470, 1079 Zhao, Xiang, 459 Zhao, Yahui, 292 Zheng, Chonghao, 610 Zheng, Di, 285 Zheng, Hong, 368 Zheng, Yi, 714 Zhi, Kanmai, 907 Zhou, Jianming, 767 Zhou, Lu, 1327 Zhou, Ping, 1406 Zhou, Qiang, 1592 Zhou, Xiaoqing, 787 Zhou, Yanfang, 279 Zhou, Zhidan, 430 Zhou, Ziyu, 444 Zhu, Gen, 1290 Zhu, Gongfeng, 259 Zhu, Guangli, 1801 Zhu, Hua, 208 Zhu, Ji, 488 Zhu, Xingping, 977 Zhu, Xiurong, 482 Zhu, Yonghao, 1035 Zou, Hanfeng, 683