Lecture Notes on Data Engineering and Communications Technologies 84
Bernard J. Jansen Haibo Liang Jun Ye Editors
International Conference on Cognitive based Information Processing and Applications (CIPA 2021) Volume 1
Lecture Notes on Data Engineering and Communications Technologies Volume 84
Series Editor Fatos Xhafa, Technical University of Catalonia, Barcelona, Spain
The aim of the book series is to present cutting edge engineering approaches to data technologies and communications. It will publish latest advances on the engineering task of building and deploying distributed, scalable and reliable data infrastructures and communication systems. The series will have a prominent applied focus on data technologies and communications with aim to promote the bridging from fundamental research on data science and networking to data engineering and communications that lead to industry products, business knowledge and standardisation. Indexed by SCOPUS, INSPEC, EI Compendex. All books published in the series are submitted for consideration in Web of Science.
More information about this series at http://www.springer.com/series/15362
Bernard J. Jansen · Haibo Liang · Jun Ye
Editors
International Conference on Cognitive based Information Processing and Applications (CIPA 2021) Volume 1
Editors Bernard J. Jansen Qatar Computing Research Institute Doha, Qatar
Haibo Liang School of Mechanical Engineering Southwest Petroleum University Chengdu, China
Jun Ye Hainan University Haikou, China
ISSN 2367-4512 ISSN 2367-4520 (electronic) Lecture Notes on Data Engineering and Communications Technologies ISBN 978-981-16-5856-3 ISBN 978-981-16-5857-0 (eBook) https://doi.org/10.1007/978-981-16-5857-0 © The Editor(s) (if applicable) and The Author(s), under exclusive license to Springer Nature Singapore Pte Ltd. 2022 This work is subject to copyright. All rights are solely and exclusively licensed by the Publisher, whether the whole or part of the material is concerned, specifically the rights of translation, reprinting, reuse of illustrations, recitation, broadcasting, reproduction on microfilms or in any other physical way, and transmission or information storage and retrieval, electronic adaptation, computer software, or by similar or dissimilar methodology now known or hereafter developed. The use of general descriptive names, registered names, trademarks, service marks, etc. in this publication does not imply, even in the absence of a specific statement, that such names are exempt from the relevant protective laws and regulations and therefore free for general use. The publisher, the authors and the editors are safe to assume that the advice and information in this book are believed to be true and accurate at the date of publication. Neither the publisher nor the authors or the editors give a warranty, expressed or implied, with respect to the material contained herein or for any errors or omissions that may have been made. The publisher remains neutral with regard to jurisdictional claims in published maps and institutional affiliations. This Springer imprint is published by the registered company Springer Nature Singapore Pte Ltd. The registered company address is: 152 Beach Road, #21-01/04 Gateway East, Singapore 189721, Singapore
Foreword
Cognition has emerged as a new and promising methodology with the development of cognitive-inspired computing, cognitive-inspired interaction, and systems that enable a large class of applications and has developed a great potential to change our life. However, recent advances in artificial intelligence (AI), fog computing, big data, and cognitive computational theory show that multidisciplinary cognitive-inspired computing still struggles with fundamental, long-standing problems, such as computational models and decision-making mechanisms based on the neurobiological processes of the brain, cognitive sciences, and psychology. How to enhance human cognitive performance with machine learning, common sense, natural language processing, etc., is worth exploring. 2021 International Conference on Cognitive-based Information Processing and Applications includes data mining, intelligent computing, deep learning, and all other theories, models, techniques related to artificial intelligence. The purpose of CIPA2021 is to provide a forum for the presentation and discussion of innovative ideas, cutting-edge research results, and novel techniques, methods, and applications on all aspects of technology and intelligence in intelligent computing. At least two independent experts reviewed each paper. The conference would not have been a reality without the contributions of the authors. We sincerely thank all the authors for their valuable contributions. We would like to express our appreciation to all Program Committee members for their valuable efforts in the review process that helped us guarantee the highest quality of the selected papers for the conference. We would like to express our thanks for the strong support of the Publication Chairs, Organizing Chairs, Program Committee members, and all volunteers.
Our special thanks are also due to the editors of the Springer book series “Advances in Intelligent Systems and Computing”, Ramesh Nath Premnath and Karthik Raj Selvaraj, for their assistance throughout the publication process. Bernard J. Jansen Haibo Liang Jun Ye
Welcome Message
Cognition has emerged as a new and promising methodology with the development of cognitive-inspired computing, cognitive-inspired interaction, and systems, which can enable a large class of applications and has great potential to change our lives. However, recent advances in artificial intelligence (AI), fog computing, big data, and cognitive computational theory show that multidisciplinary cognitive-inspired computing still struggles with fundamental, long-standing problems, such as computational models and decision-making mechanisms based on the neurobiological processes of the brain, cognitive sciences, and psychology. How to enhance human cognitive performance with machine learning, common sense, natural language processing, etc., is worth exploring. 2021 International Conference on Cognitive-based Information Processing and Applications includes precision mining, intelligent computing, deep learning, and all other theories, models, techniques related to artificial intelligence. The purpose of CIPA2021 is to provide a forum for the presentation and discussion of innovative ideas, cutting-edge research results, and novel techniques, methods, and applications on all aspects of technology and intelligence in intelligent computing. At least two independent experts reviewed each paper. The conference would not have been a reality without the contributions of the authors. We sincerely thank all the authors for their valuable contributions. We would like to express our appreciation to all Program Committee members for their valuable efforts in the review process that helped us guarantee the highest quality of the selected papers for the conference. We want to express our thanks for the strong support of the General Chairs, Publication Chairs, Organizing Chairs, Program Committee members, and all volunteers.
Our special thanks are also due to the editors of the Springer book series “Advances in Intelligent Systems and Computing”, Ramesh Nath Premnath and Karthik Raj Selvaraj, for their assistance throughout the publication process. Jim Jansen Haibo Liang Jun Ye
Organization
Conference Committee

Local Organizing Chairs
Tao Liao, Anhui University of Science and Technology, China
Xiaobo Yin, Anhui University of Science and Technology, China
Program Chairs
Jim Jansen, Qatar Computing Research Institute, Qatar
Haibo Liang, Southwest Petroleum University, China
Jun Ye, Hainan University, China
Publication Chairs
Neil Y. Yen, University of Aizu, Japan
Vijayan Sugumaran, Oakland University, USA
Publicity Chairs
Weidong Liu, Inner Mongolia University, China
Sulin Pang, Jinan University, China
Program Committee
Ameer Al-Nemrat, University of East London, UK
Robert Ching-Hsien Hsu, Chung Hua University, China
Neil Yen, University of Aizu, Japan
Meng Yu, The University of Texas at San Antonio, USA
Shunxiang Zhang, Anhui Univ. of Sci. & Tech., China
William Liu, Auckland University of Technology, New Zealand
Mustafa Mat Deris, Universiti Tun Hussein Onn Malaysia, Malaysia
Zaher AL Aghbari, Sharjah University, UAE
Guangli Zhu, Anhui Univ. of Sci. & Tech., China
Raja Al Jaljouli, College of Computer Science and Engineering, Kingdom of Saudi Arabia
Tao Liao, Anhui Univ. of Sci. & Tech., China
Abdul Basit Darem, University of Mysore, India
Xiaobo Yin, Anhui Univ. of Sci. & Tech., China
Vjay Kumar, VIT, India
Xiangfeng Luo, Shanghai Univ., China
Jemal Abawajy, Deakin University, Australia
Ahmed Mohamed Khedr, University of Sharjah, UAE
Xiao Wei, Shanghai Univ., China
Sabu M. Thampi, Indian Institute of Information Technology and Management, India
Huan Du, Shanghai Univ., China
Shamsul Huda, Deakin University, Australia
Zhiguo Yan, Fudan University, China
Rick Church, UC Santa Barbara, USA
Tom Cova, University of Utah, USA
Susan Cutter, University of South Carolina, USA
Yi Liu, Tsinghua University, China
Kuien Liu, Pivotal Inc, USA
Wei Xu, Renmin University of China, China
V. Vijayakumar, SCSE, VIT Chennai, India
Abdullah Azfar, KPMG Sydney, Australia
Florin Pop, University Politehnica of Bucharest, Romania
Kim-Kwang Raymond Choo, The University of Texas at San Antonio, USA
Mohammed Atiquzzaman, University of Oklahoma, USA
Rafiqul Islam, Charles Sturt University, Australia
Morshed Chowdhury, Deakin University, Australia
CIPA 2021 Keynotes
Jim Jansen is a Principal Scientist in the social computing group of the Qatar Computing Research Institute. He is a graduate of West Point and has a Ph.D. in computer science from Texas A&M University. Professor Jansen is editor-in-chief of the journal, Information Processing & Management (Elsevier), a member of the editorial boards of seven international journals, and former editor-in-chief of the journal, Internet Research (Emerald). He has received several awards and honors, including an ACM Research Award, six application development awards, and a university-level teaching award, along with other writing, publishing, research, teaching, and leadership honors. Dr. Jansen has authored or co-authored 300 or so research publications, with articles appearing in a multidisciplinary range of journals and conferences. He is author of the book, Understanding Sponsored Search: A Coverage of the Core Elements of Keyword Advertising (Cambridge University Press).
Jemal Abawajy is a faculty member at Deakin University and has published more than 100 articles in refereed journals and conferences as well as a number of technical reports. He is on the editorial board of several international journals and edited several international journals and conference proceedings. He has also been a member of the organizing committee for over 60 international conferences and workshops serving in various capacities including best paper award chair, general co-chair, publication chair, vice-chair, and program committee. He is actively involved in funded research in building secure, efficient, and reliable infrastructures for large-scale distributed systems. Toward this vision, he is working in several areas including pervasive and networked systems (mobile, wireless network, sensor networks, grid, cluster, and P2P), e-science and e-business technologies and applications, and performance analysis and evaluation.
Contents
Cognitive-Inspired Computing Fundamentals and Computing Systems

Application of Computer Virtual Screening System in Diagnosis and Dispensing of Infertility in Traditional Chinese Medicine and Gynecology . . . 3
Lina Zhao

Design of Comprehensive NQI Demand Evaluation System Based on Multi-objective Evolutionary Algorithm . . . 11
Qi Duan, Chengcheng Li, and Fang Wu

Efficiency Analysis of Hospitals Based on Data Envelopment Analysis Method in the Context of Big Data . . . 19
Boyu Lu, Jing Wang, Lin Song, Yongyan Wang, and Jian Zhang

Design and Research of Visual Data Analysis Technology in the Study Abroad Career Information System . . . 26
Hehong Xiu and Shili Zhou

Analysis of Big Data Survey Results and Research on System Construction of Computer Specialty System . . . 34
Xueyan Wang

Construction of Online Reading Corpus Based on SQL Server Database Management System . . . 43
Qiong Wu

The Design and Application of College Japanese Reading Teaching System Based on Android . . . 51
Fangting Liu and Shuang Wang

Application of Graphic Language Automatic Arrangement Algorithm in the Design of Visual Communication . . . 60
Zhengfang Ma
Analysis on the Application of BP Algorithm in the Optimization Model of Logistics Network Flow Distribution . . . 67
Li Ma

Application of NMC System in Design Study Under the Background of Virtual Reality Technology . . . 75
Aiyun Yang

The Robot Welding Training Assistant System Based on Particle Swarm Algorithm . . . 83
Yigang Cui

A Systematic Study of Chinese Adolescents Self-cognition Based on Big Data Analysis . . . 91
Chunyu Hou

Financial Management Risk Control Based on Decision Tree Algorithm . . . 98
Yuan Li and Juan Chen
Analysis and Design of Construction Engineering Bid Evaluation Considering Fuzzy Clustering Algorithm . . . . . . . . . . . . . . . . . . . . . . . . 106 Shanshan Deng and Lijun Zhang Analysis of Fuel Consumption in Urban Road Congestion Based on SPSS Statistical Software . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 113 Youzhen Lu and Hui Gao Relationship Between Adaptability and Career Choice Anxiety of Postgraduates Based on SPSS Data Analysis . . . . . . . . . . . . . . . . . . . 124 Xi Yang and Youran Li Using the Information Platform System to Simulate the Application of Loco Therapy in the Intervention of Children with Autism . . . . . . . . 135 Yiming Sun Design and Implementation of College Student Information Management System Based on Web . . . . . . . . . . . . . . . . . . . . . . . . . . . . 142 Yue Yu Design and Development of Distance Education System Based on Computing System . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 151 Xiaoxiao Wei, Jie Su, and Lingyi Yin Frequency Domain Minimum Power Undistorted Beamforming Algorithm Based on Full Matrix Acquisition . . . . . . . . . . . . . . . . . . . . . 160 Zhihao Wang, Ying Luo, and Zhaofei Chu Balanced Optimization System of Construction Project Management Based on Improved Particle Swarm Algorithm . . . . . . . . . 169 Yilin Wang
Study on the Rain Removal Algorithm of Single Image . . . . . . . . . . . . . 179 Junhua Shao and Qiang Li Basketball Action Behavior Recognition Algorithm Based on Dynamic Recognition Technology . . . . . . . . . . . . . . . . . . . . . . . . . . . 187 He Li Simulation of Land Use System Performance Dynamics Based on System Dynamics . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 195 Yunzhou Liu Problem Student Prediction Model Based on Convolution Neural Network and Least Squares Support Vector Machine . . . . . . . . . . . . . . 208 Yan Zhang and Ping Zhong Analysis of the Effect of Self-repairing Concrete Under Sulfate Erosion Considering the Rectangular Simulation Algorithm . . . . . . . . . 218 Lijun Zhang and Shanshan Deng Design and Implementation of Tourism Management System Based on SSH . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 226 Ping Yang Design and Implementation of Trajectory Planning Algorithm for SCARA 4-DOF Manipulator . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 232 Hongbo Zhu Advances in Text Classification Based on Machine Learning . . . . . . . . . 239 Desheng Huang Research on Interior Space Design Based on Ant Colony Algorithm . . . 245 Yi Lu Application of Embedded Real-Time Software in Computer Software Design . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 250 Bin Yang Application of Evolutionary Algorithm in Chinese Word Segmentation . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 256 Yushan Zhang Cognitive Heuristic Computation of Regional Culture Based on Latent Factor Algorithm . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 261 Jingxuan Sun Research on Power Marketing Inspection of Power Supply Company Based on Clustering Algorithm and Correlation Analysis . . . . . . . . . . . 267 Wei Xu, Jia Zhao, Yisang Liu, and Xiuyuan Cheng
Cognitive-Inspired Computing with Big Data Economic Impact of Big Data Technology on Air Transport Industry . . . 275 Xiangling Cao Analysis and Research on Rural Tourism Development Under Information Technology . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 283 Fei Deng Impact of Enterprise Strategic Mode on Technological Innovation Under Information Technology . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 290 Can Chen and Xiaofei Zhou Forecast and Analysis of Hotel Occupancy Rate Based on Tourism Data Under Big Data Technology . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 300 Xiaolu Xu Retention Strategy for Existing Users of Mobile Communications . . . . . 310 Ying Ding Design and Research of Heterogeneous Data Source Integration Platform Based on Web Services . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 318 Yaodong Li and Kai Hou Ethics of Robotics Applications . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 325 Kai Li and Zhen Meng The Effectiveness of Technical Analysis in the Era of Big Data . . . . . . . 331 Zhilei Jia Innovation of E-commerce Business Model Based on Big Data . . . . . . . 337 Wenjie Chen Metacognitive Training Mode for English In-Depth Learning from the Perspective of Big Data . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 345 Jingtai Li Design and Realization of College Student Management System Under Big Data Technology . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 352 Di Sun Exchange Rate Forecasting with Twitter Sentiment Analysis Technology . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 360 Yinglan Zhao, Renhao Li, and Yiying Wang The Construction of Corpus Index in the Era of Big Data and Its Application Design in Japanese Teaching . . . . . . . . . . . . . . . . . . . . . . . . 370 Kun Teng Prediction of Urban Innovation Based on Machine Learning Method . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 379 Zhengguang Fu
Empirical Research on Population Policy and Economic Growth Based on Big Data Analysis Technology . . . . . . . . . . . . . . . . . . . . . . . . . 386 Jin Wang Technological Framework the Precision Teaching Based on Big Data . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 394 Meina Yin and Hongjun Liu Analysis of the Intervention of Yoga on Emotion Regulation Based on Big Data . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 402 Shasha Wang and Yuanyuan Liu Innovation of Employee Performance Appraisal Model Based on Data Mining . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 410 Jingya Wang Influencing Factors of Users’ High-Impact Forwarding Behavior in Microblog Marketing Based on Big Data Analysis Technology . . . . . 420 Yunfu Huo and Xiaoru Xue Analysis of the Impact of Big Data Technology on Corporate Profitability . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 427 Changsheng Bao Cigarette Data Marketing Methods Based on Big Data Analysis . . . . . . 438 Tinggui Li Development of an Information Platform for Integration of Industry-Education Based on Big Data Analysis Technology . . . . . . . 445 Songfei Li, Shuang Liang, and Xinyu Cao Reform of Student Information Management Thinking and Methods Supported by Big Data Technology . . . . . . . . . . . . . . . . . 451 Zhentao Zhao Application of Big Data Technology in Marketing Practice Under the Background of Innovation and Entrepreneurship . . . . . . . . . . . . . . 460 Xia Hua, Jia Liu, and Hongzhen Zhang Computer Aided Design and Optimization of Adsorbent for Printing and Dyeing Wastewater . . . . . . . . . . . . . . . . . . . . . . . . . . . 468 Jia Lin Research on the Development Path of Digital Inclusive Finance Based on Convolutional Neural Network . . . . . . . . . . . . . . . . . . . . . . . . 474 WenHua Li Construction and Application of Virtual Simulation Platform for Medical Education Based on Big Data . . . . . . . . . . . . . . . . . . . . . . . 480 Xiafukaiti Alifu, Nuerbiya Wusuyin, and Maimaiti Yasen
Deep Learning Method for Human Emotion Detection and Text Analysis Based on Big Data . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 486 Shu-yue Zhang Research on the Whole Process Management System Design of Big Data Construction Project Cost Based on Cognitive Inspiration . . . . . . 491 Li Wang Research on Rural Health Care Industry Based on Big Data Computing . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 496 Mengmeng Sun and Xiuxia Wang Product Packaging Design Based on Cognitive Big Data Analysis . . . . . 502 Li Yaxin, Lu Zheng, and Zhang Fan AI-Assisted Cognitive Computing Approaches Establishment of Economic Term Bank Under the Background of Artificial Intelligence . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 513 Lingzhi Hu Development and Construction of Traditional Apparel Customization App Under the Background of Artificial Intelligence . . . . . . . . . . . . . . . 521 Li Wang Application of AI Technology in Modern Dental Equipment . . . . . . . . . 530 Zongyuan Ji, Zhaohua Song, Zheng Lu, and Jianyou Zeng The Application of Artificial Intelligence Technology in the Field of Artistic Creation . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 537 Sisi Feng Application of BIM+VR+UAV Multi-associated Bridge Smart Operation and Maintenance . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 544 Yu Peng, Yangjun Xiao, Zheng Li, Tao Hu, and Juan Wen Intelligent Dispatching Logistics Warehouse System Method Based on RFID Radio Frequency Data Processing Technology . . . . . . . . . . . . 552 Zhe Song Smart Travel Route Planning Considering Variable Neighborhood Ant Colony Algorithm . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 562 Gang Zhao Intelligent Translation Strategy Based on Human Machine Coupling . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 571 Xiaohua Guo Innovation of Accounting Industry Based on Artificial Intelligence . . . . 580 Jinwei Zhang
Risk Analysis of the Application of Artificial Intelligence in Public Management . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 587 Min Kuang Equipment Fault Diagnosis Based on Support Vector Machine Under the Background of Artificial Intelligence . . . . . . . . . . . . . . . . . . . 596 Lina Gao and Lin Zhang Integration of Artificial Intelligence and Higher Education in the Internet Era . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 604 Meijuan Yuan Diagnostic Study on Intelligent Learning in Network Teaching Based on Big Data Background . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 612 Xiaoguang Chen and Fengxia Zhang Design of Online OBE Theoretical Knowledge Sharing Based on the Support of Intelligent System Analysis Method . . . . . . . . . . . . . . 619 Jinsheng Zhang Intelligent Learning Ecosystem of Information Technology Courses Oriented Skills Training . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 628 Beibei Cao Innovation and Development of Environmental Art Design Thinking Based on Artificial Intelligence in Culture, Form and Function . . . . . . . 635 Jing Hu and Ling Fu Power Grid Adaptive Security Defense System Based on Artificial Intelligence . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 643 Lijing Yan, Feng Gao, Yifan Song, and Huichao Liang Innovative Mode and Effective Path of Artificial Intelligence and Big Data to Promote Rural Poverty Alleviation . . . . . . . . . . . . . . . 652 Jie Su, Xiaoxiao Wei, Lingyi Yin, and Jingmeng Dong The Intelligent Service Mode of University Library Based on Internet . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 660 Yan Zhang Analysis on the Current Situation of Intelligent Informatization Construction in University Library . . . . . . . . . . . . . . . . . . . . . . . . . . . . 669 Min Zhang Design and Implementation of Auxiliary Platform for College Students’ Sports Concept Learning Based on Intelligent System . . . . . . 678 Bo He and Juan Zhong
Application of Big Data Analysis and Image Processing Technology in Athletes Training Based on Intelligent Machine Vision Technology . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 687 Juan Zhong and Bo He Design of a Smart Elderly Positioning Management System Based on GPS Technology . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 694 Qianqian Guo Design and Implementation of Enterprise Public Data Management Platform Based on Artificial Intelligence . . . . . . . . . . . . . . . . . . . . . . . . 702 Zhongzheng Zhao and Xiaochuan Wang Smart Micro-grid Double Layer Optimization Scheduling of Storage Units with Smog Factors . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 711 Xiaojie Zhou, Zhenhan Zhou, Rui Yang, and Yang Xuan Research on Indoor Location Algorithm Based on Cluster Analysis . . . 723 Fenglin Li, Haichuan Wang, Jie Huang, Hanmiao Shui, and Junming Yu Construction of Tourism Management Information System Based on Django . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 731 Ping Yang Analysis and Research of Artificial Intelligence Technology in Polymer Flooding Scheme . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 737 YinPing Huo Application Research of 3D Ink Jet Printing Technology for Special Ceramics Based on Alumina Ceramics . . . . . . . . . . . . . . . . . . . . . . . . . . 743 Guozhi Lin Research and Practice of Multiphase Flow Logging Optimization and Imaging Algorithm . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 749 Dawei Wang Research on the Statistical Method of Massive Data by Analyzing the Mathematical Model of Feature Combination of Point Data . . . . . . 755 Yueyao Wu In Graphic Design - Design and Thinking from Plane to Screen . . . . . . 760 JieLan Zhou Design of Urban Rail Transit Service Network Platform Based on Genetic Algorithm . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 767 Caifeng Yu Research and Implementation of Parallel Genetic Algorithm on a Ternary Optical Computer . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 772 Hengzhen Cui, Junlan Pan, Dayou Hou, and Xianchao Wang
Mathematical Modeling of CT System Parameters Calibration and Imaging . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 780 Defang Liu, Jia Zhao, Xianchao Wang, and Xiuming Chen Design and Implementation of Information Digest Algorithm on a Ternary Optical Computer . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 788 Junlan Pan, Qunli Xie, Jun Wang, Hengzhen Cui, Jie Zhang, and Xianchao Wang Design and Implementation of SM3 Algorithm Based on a Ternary Optical Computer . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 796 Junlan Pan, Henzeng Cui, Defang Liu, and Xianchao Wang U-Net Medical Image Segmentation Based on Attention Mechanism Combination . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 805 Tao Liu, Beibei Qian, Ya Wang, and Qunli Xie Exploration and Practice on Blended Teaching of Mathematical Analysis with Information Technology . . . . . . . . . . . . . . . . . . . . . . . . . . 814 Jie Zhang, Mian Zhang, and Jian Tang Internet of Cognitive Things Clinical Misunderstandings of Enterprise Precision Marketing Under the Background of Wireless Network . . . . . . . . . . . . . . . . . . . . . 825 Long Lu, Qinhong He, and Xiangdong Xu Analysis of the Relationship Between Production and Economy Based on the Internet . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 834 Lingyan Meng Innovation of English Course Network Learning Model Based on Literature Data Mining Technology . . . . . . . . . . . . . . . . . . . . . . . . . 842 Junning Li Early Warning Mechanism of Network Public Opinion Based on Big Data for Mass Events . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 850 Nan Zeng and Dengxin Dong Mobile Internet Product Usage Scenarios and User Experience Design . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 857 Qi Zhang Student Management System Based on Intelligent Technology of Internet of Things . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 866 Ying Zeng and Nisa Boontasorn Comparative Analysis of Machine Translation and Human Translation Under the Background of Internet . . . . . . . . . . . . . . . . . . . 877 Hongxia Dai
Application of Distributed High-Precision Data Acquisition System Based on GPRS Wireless Network . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 883 Fengling Fang The Design of Network Learning Space and Its Application in Japanese Online Reading . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 891 Yu Jin and Xiaoling Yu Design of Japanese Interactive Network Teaching Platform in Information Age . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 898 Xiaoling Yu, Xin Liu, Xu Gao, and Yu Jin Development Trend of Big Data Finance Based on BP Neural Network . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 907 Zhenshan Li Design and Implementation of School-Enterprise Cooperation Information Service Platform Based on Mobile Internet Technology . . . 915 Caiyun Gao Application Status and Countermeasures of Mobile Internet in Sports Lottery Industry . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 923 Yanping Wu The Application of Internet Technology and the Study of Tibetan Local Chronicles in the Qing Dynasty . . . . . . . . . . . . . . . . . . . . . . . . . . 930 Fei Cheng The Impact of Travel Information Service Experience on Traveling Decisions in the Era of Mobile Internet . . . . . . . . . . . . . . . . . . . . . . . . . 940 Lei Zhang The Application of Multimedia Network Technology in the Autonomous Learning of University Students’ Speech . . . . . . . . . 948 Lei Guo Network Sub-communication Circle and the Educational Mode of College Student Community in the Internet Era . . . . . . . . . . . . . . . . 955 Yunshan Liu Impact of Big Data on the Governance of Religious Public Opinion in the Internet Background . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 963 Yang Luo Construction and Implementation of Financial Shared Service System in the “Internet +” Era . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 971 Shimiao Cheng Smart City and Smart Stadium Construction Under the Background of Internet . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 979 Shunqiu Li and Zhong Li
Construction of Information Literacy Education System in Application Oriented Universities Under the Internet Environment . . . . 986 Yuan Gao Design of Distance Network Teaching Platform Based on Information Technology . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 994 Jiankun Wu Data Statistics of Tourism Economy Network Attention Survey in the Internet Era . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1003 Xuan Lyu The Production and Application of Sports Micro-class in Higher Vocational Education Based on Mobile Internet . . . . . . . . . . . . . . . . . . 1011 Ming Li and Chongwei Li The Application of Mobile App to College Oral English Teaching in the Context of Internet + . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1021 Yang Xin Image Recognition of Agricultural Plant Diseases and Insect Pests Based on Convolution Neural Network . . . . . . . . . . . . . . . . . . . . . . . . . 1030 Junwen Lai Pedestrian Recognition Algorithm Based on HSV Model and Feature Point Matching . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1036 Yuan Luo, Wei Qian, Yao Xiao, and Xiangkai Deng Research on the Design of Online Travel Service Recommendation System Based on Data Analysis . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1042 Juanxiu Xu and Juanling Xu Research and Implementation of Web Front End Development System Based on Microservice . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1047 Yuan Yang How to Realize the Integration of Database System and Web in Information System . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1053 Fei Zhao Network Design and Implementation Based on Azure PAAS . . . . . . . . . 1058 Mei Liu and Jinghua Zhao UHF RFID Physical ID Mobile Operation Terminal . . . . . . . . . . . . . . . 1065 Rui Guo, Junjie Liu, Shu Shi, Jinhai Li, and Juan Du Author Index . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1073
Cognitive-Inspired Computing Fundamentals and Computing Systems
Application of Computer Virtual Screening System in Diagnosis and Dispensing of Infertility in Traditional Chinese Medicine and Gynecology

Lina Zhao1,2(B)

1 Henan Province Hospital of Traditional Chinese Medicine, Zhengzhou 450002, Henan, China
[email protected] 2 The Second Affiliated Hospital of Henan University of Traditional Chinese Medicine,
Zhengzhou 450002, Henan, China
Abstract. With the development of computer technology, researchers have applied computer technology to the field of traditional Chinese medicine and developed a computer virtual screening system. This technology has applied the results and technology of modern scientific research, which has caused great changes in the methods and theories of drug discovery. In recent years, the computer virtual screening system has been introduced in many hospitals for the diagnosis and dispensing of infertility in traditional Chinese medicine. The significance and limitations of the computer virtual screening system in TCM gynecological infertility diagnosis and dispensing were summarized, and the application of the computer virtual screening system in TCM gynecological infertility diagnosis and dispensing was investigated by questionnaire survey method. First A and Second A hospitals are not very familiar with the computer virtual screening system, about 50%, which may be related to the hospital equipment. The percentage of first-class and second-class hospitals introducing computer screening systems is relatively small, about 29%, while the third-class hospitals have introduced more computer systems. This is caused by the uneven distribution of resources in hospitals, and the uneven distribution of doctor resources in first-A and second-A hospitals. This makes the computer virtual screening system have certain obstacles in the application. Keywords: Virtual screening · Traditional Chinese medicine · Chinese medicine · Chinese medicine dispensing · Gynecological infertility
1 Introduction

With the economic growth and the continuous improvement of people's quality of life, the prevalence rate has only increased [1, 2], especially for women's gynecological diseases. Western medicine has a fast curative effect on the treatment of the disease, but the side effects are large [3, 4]. For this reason, Chinese medicine has become the choice of more and more people, and Chinese medicine treatment and physical therapy
have become more and more important [5, 6]. Researchers began to integrate computer technology into the field of traditional Chinese medicine, and developed a computer virtual screening system to screen traditional Chinese medicines. The emergence of this system has reduced a lot of work in the diagnosis and dispensing of infertility in traditional Chinese medicine and gynecology [7, 8]. In the research on the application of the computer virtual screening system in the diagnosis and dispensing of Chinese medicine and gynecological infertility, many scholars have conducted research on it and achieved good results. For example, MartínezÁlvarez, the research has included more than 56,000 compounds in the Chinese natural product database. The molecules were molecularly connected to the seven relative target enzymes in tubal obstruction infertility, and the initial ligand score function was used as a threshold, and 10 molecules were selected. The interaction between small molecule compounds and target enzymes is very strong, and these compounds can be used as lead compounds for the further research of novel obstructive infertility treatment drugs [9]. Han Y S’s research shows that, according to different calculation models, 2% to 19% of the new lead compounds discovered through computer virtual drug screening technology have been proved to be effective in subsequent experiments. So far, 49 new receptors have been discovered using Ligands computer drug virtualization technology, and its success rate is 99 to 999 times higher than the method based on empirical testing [10]. This article researches the application of computer virtual screening system in TCM gynecological infertility diagnosis and dispensing. Firstly, the literature research method and quantitative analysis method are used to investigate the importance and limitations of computer virtual screening system in TCM and TCM gynecological infertility diagnosis and dispensing. In summary, the questionnaire survey method is used to investigate the application of the computer virtual screening system in the diagnosis and dispensing of infertility in Chinese medicine and gynecology.
2 Research on Computer Virtual Screening System and Chinese Medicine Gynecological Infertility Diagnosis and Dispensing 2.1 Research Methods (1) Literature research method The main purpose of the literature research method is to broadly refer to that scholars can obtain research data through various methods such as various books, newspapers, magazines, and electronic readings that they need to consult, and generate scientific research ideas and inspirations. Its greatest advantage is that we can understand the historical changes and development history of the object to be studied from our own sources, and understand the historical changes of the object to be studied. By comparing with related materials, we can prompt us to have a more comprehensive understanding of the scope and objects of the research. (2) Investigation and research method The questionnaire survey method is that this article conducts a survey through pre-prepared questions and analyzes the answers of the interviewees to draw the
necessary conclusions. Through the design of questionnaires, to understand the computer virtual screening system and the use of traditional Chinese medicine and traditional Chinese medicine for gynecological infertility diagnosis and dispensing. (3) Quantitative analysis Qualitative analysis is related to quantitative analysis. Quantitative analysis refers to the analysis of mathematical hypothesis determination, data collection, analysis and testing, and it is also quantitative and qualitative. Qualitative analysis refers to the qualitative analysis of the research object. It refers to the process of conducting research based on subjective understanding and qualitative analysis, through research and bibliographic analysis. 2.2 Application of Computer Virtual Screening System in Gynecological Diagnosis and Dispensing (1) Application of computer virtual screening system to screen the active multifunctional components of traditional Chinese medicine for the treatment of obstructive tubal infertility According to data, infertility female factors accounted for 62%, of which tubal blockage accounted for the majority of women, and fallopian tube blockage is the main cause of infertility. The computer virtual screening system is used to screen the active and multifunctional components of traditional Chinese medicine for the treatment of obstructive fallopian tube infertility, and the traditional Chinese medicine is better used in the treatment of medicine. 2.3 The Process of the Computer Virtual Screening System in the Diagnosis and Dispensing of Infertility in TCM and TCM Gynecology (1) According to the existing ancient books and the treatment practice of senior clinical experts, select a traditional Chinese medicine or clinically proven compound as the research target. (2) Search out the chemical composition of all known compounds in the current Chinese medicine or formula from the resource library and literature, and determine the specific screening target. (3) In addition to genetic diseases, please read the literature to find out the various target proteases for the diseases that need to be studied, and then download the threedimensional structure of the three target proteins from the PDB resource library. If the three-dimensional structure of the target protein is not resolved, the homology can be obtained the sexual modeling method establishes its three-dimensional structure. (4) If the three-dimensional structure of the target related to the disease is known, the molecular binding method can be used to make the correlation between the active ingredient and the target enzyme through energy mapping and geometric mapping for interactive research with higher scores.
(5) If a specific pharmacophore model or the active structural features related to a specific disease are obtained, the pharmacophore search method can be used for drug design; that is, a pharmacophore model can be proposed by studying the three-dimensional structure–activity relationship of a series of drugs, and the chemical functional groups closely related to biological activity, together with their spatial arrangement, are used as constraints for searching 3D small-molecule databases to create lead compounds with new structural properties.
(6) According to the results of the computer virtual screening, the chemical components that need further research are determined and tested for biological activity. If the results are not satisfactory, the process either continues into pre-clinical research to develop a new drug or returns to step one.

2.4 Limitations of Computer Virtual Screening System

(1) Computer virtual drug screening technology is only an auxiliary method for innovative research in modern medicine and traditional Chinese medicine. It can only provide useful ideas and clues for future experimental research, and the technology itself still needs further improvement. Virtual drug screening therefore cannot be regarded as a cure-all that solves any problem; on the contrary, it should be fully evaluated and used reasonably so that it can truly develop into a powerful tool for exploring the mysteries of Chinese medicine.
(2) There is a lack of databases containing a large number of Chinese herbal medicine components. So far, only 9% of Chinese herbal medicines have been studied in depth, and the chemical components of most herbal medicines are not well understood. None of the three existing databases collects more than 19,000 chemical components; compared with commercial databases containing tens of millions of chemicals, they are small and take a long time to be fully updated.
(3) Computer virtual drug screening technology is not yet complete. As methods for evaluating the effectiveness of ligands, the existing scoring functions have their own shortcomings and limitations, and a high-scoring compound may still be a poor ligand. At the same time, each scoring function has a specific applicable domain and is likely to be sensitive only to certain types of compounds or similar structures, so its evaluation performance is limited. As a result, both false positives and false negatives appear. False positives are likely to make researchers spend a lot of time and energy purifying and separating inert compounds, while false negatives are likely to cause researchers to miss high-quality drug candidates.
(4) In the process of virtual screening, the computer only considers the interaction between the receptor and the ligand, rather than the mechanism of the drug's complex function. To further improve the success rate of this technology in innovative research on traditional Chinese medicine, in addition to the interaction between ligand and receptor, one must also consider the drug's type characteristics, toxicity, chemical stability, the difficulty of molecular synthesis, and the drug's pharmacokinetic characteristics in the body, which also
include its transport, distribution, and metabolism. This is because even if a target compound has a strong affinity for the human target, failure in bioavailability, in vivo metabolism, or toxicology testing essentially means the failure of the entire drug development plan.

2.5 Algorithm of Computer Virtual Screening System

(1) TSP-STW algorithm
Suppose there are n − 1 service nodes, represented by 1, …, n − 1, and the starting node is 0. Any set of service nodes can be represented as S. Define T(k, S) as the shortest time required to start from node 0, serve every node in S ⊆ {1, …, n − 1}, and finally reach node k. When the number of nodes in S is 1, that is, |S| = 1, the initial value of T(k, S) is

T(k, {k}) = max{T0 + s0 + t0k, ak},  k = 1, …, n − 1    (1)

where T0 is the moment of departure from the starting point (T0 = 0), s0 is the service time at the starting point (s0 = 0), t0k is the time required to go directly from node 0 to node k, and ak is the earliest allowed service start time (time-window opening) at node k.

When the number of nodes in S is greater than 1, that is, |S| > 1, let k be the last served node and p be the node served immediately before node k. The following recurrence holds:

T(k, S) = min over p ∈ S − {k} of max{T(p, S − {k}) + sp + tpk, ak},  k = 1, …, n − 1    (2)
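The recurrence in Eqs. (1)–(2) can be implemented directly as dynamic programming over node subsets. The following Python sketch is illustrative only and is not part of the original system; the travel times t, service times s, time-window openings a, and the toy instance at the end are assumed example inputs.

```python
from itertools import combinations

def tsp_stw_earliest_service(t, s, a, n):
    """Dynamic-programming sketch of Eqs. (1)-(2).

    t[i][j] : travel time from node i to node j (node 0 is the start)
    s[i]    : service time at node i (s[0] is the service time at node 0)
    a[i]    : earliest allowed service start time at node i
    n       : number of nodes including the start, so nodes 1..n-1 are served

    T[(k, S)] is the earliest time service can start at node k after
    serving every node of the frozenset S (k in S), departing node 0 at T0 = 0.
    """
    T0 = 0
    T = {}
    nodes = range(1, n)

    # Eq. (1): sets containing a single node
    for k in nodes:
        T[(k, frozenset([k]))] = max(T0 + s[0] + t[0][k], a[k])

    # Eq. (2): larger sets -- choose the best immediate predecessor p
    for size in range(2, n):
        for S in map(frozenset, combinations(nodes, size)):
            for k in S:
                rest = S - {k}
                T[(k, S)] = min(
                    max(T[(p, rest)] + s[p] + t[p][k], a[k]) for p in rest
                )

    full = frozenset(nodes)
    # earliest completion over all possible last nodes (no return leg is modeled)
    return min(T[(k, full)] + s[k] for k in nodes)

# Toy instance with a start node and three service nodes (made-up numbers).
t = [[0, 3, 5, 4], [3, 0, 2, 6], [5, 2, 0, 3], [4, 6, 3, 0]]
s = [0, 2, 2, 2]
a = [0, 4, 0, 8]
print(tsp_stw_earliest_service(t, s, a, 4))
```

Because the state space enumerates all subsets of service nodes, this exact formulation is only practical for small node counts; larger instances would need heuristics.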
3 Research on the Application Status of Computer Virtual Screening System in Chinese Medicine and Traditional Chinese Medicine Gynecological Infertility Diagnosis and Dispensing

3.1 Research Purpose

With the continuous improvement of the computer virtual screening system, many hospitals have introduced and used it. This paper investigates the application status of the computer virtual screening system in TCM gynecological infertility diagnosis and dispensing and, based on an analysis of the results, discusses its introduction.

3.2 Questionnaire Survey

(1) Number of questionnaires
According to the minimum sample size formula in statistics, the author sets the confidence level of the questionnaire to 80%, and the allowable error does not exceed 8%. The minimum sample size is calculated as

n0 = (tα/2 / (2Δp))² = (1.645 / (2 × 0.075))² ≈ 120    (3)

That is, the minimum sample size of this questionnaire is 120 copies.
(2) Questionnaire distribution and collection
A number of TCM hospitals in this city were randomly selected and divided into first-class, second-class, and third-class hospitals for questionnaire distribution. The numbers of questionnaires distributed to each type of hospital were 30, 40, and 50, respectively.
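As a quick arithmetic check of Eq. (3) and of the allocation above (this snippet is added for illustration and is not part of the original study):

```python
t_alpha = 1.645          # critical value used in Eq. (3)
delta_p = 0.075          # allowable error used in Eq. (3)

n0 = (t_alpha / (2 * delta_p)) ** 2
print(round(n0))         # -> 120, the minimum number of questionnaires

print(30 + 40 + 50)      # -> 120, questionnaires actually distributed
```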
4 Data Analysis

This article uses a questionnaire survey to understand hospital gynecologists' familiarity with the computer virtual screening system. The results are shown in Table 1:

Table 1. The degree of understanding of the computer virtual screening system

                 First-class (1A) hospitals   Second-class (2A) hospitals   Third-class (3A) hospitals
Unfamiliar       50%                          46%                           30%
General          28%                          30%                           42%
Familiar with    22%                          24%                           28%
It can be seen from Fig. 1 that about 50% of respondents in first-class and second-class hospitals are unfamiliar with the computer virtual screening system, which may be related to the hospitals' equipment. This article also uses the questionnaire to investigate the introduction of the computer virtual screening system into TCM gynecological infertility diagnosis and dispensing in these hospitals. The results are shown in Table 2.
Fig. 1. The degree of understanding of the computer virtual screening system
Table 2. Introduction of computer virtual systems

                 First-class (1A) hospitals   Second-class (2A) hospitals   Third-class (3A) hospitals
Introduced       20%                          24%                           58%
Introducing      30%                          36%                           30%
Not introduced   50%                          40%                           12%
Fig. 2. Introduction of computer virtual systems
It can be seen from Fig. 2 that the percentage of first-A and second-A hospitals that have introduced the computer screening system is relatively small, about 29%, while third-A hospitals have introduced it more widely. This is caused by the uneven distribution of resources among hospitals, in particular the uneven distribution of physician resources in first-A and second-A hospitals, which creates certain obstacles to the application of the computer virtual screening system.
5 Conclusions In short, because a large number of new technologies have been widely used in our country, traditional scientific research methods and models must not be able to fully adapt to the development of new technologies in this period. We are facing the challenges of the new era in the 21st century, and we urgently need to adapt to the new situation, adapt to the research ideas, and actively apply the new generation of science and technology to the main research work to solve the problem in a breakthrough. Like all new things, the practical application of computer-based virtual drug screening technology in traditional Chinese medicine has just started, and the research ideas are still immature. However, as long as we always adhere to the correct research direction and are unwilling to engage in research in the so-called fashion and sports field, we may remain optimistic. A fact that we can foresee is that with the continuous deepening of scientific research in this
field, a new world of scientific research in Chinese medicine will surely be created in our country.
References 1. Kearnes, S., Pande, V.: ROCS-derived features for virtual screening. J. Comput. Aided Mol. Des. 30(8), 609 (2016) 2. Aguirre-Alvarado, C., Segura-Cabrera, A., Velázquez-Quesada, I., et al.: Virtual screeningdriven repositioning of etoposide as CD44 antagonist in breast cancer cells. Oncotarget 7(17), 23772–23784 (2016) 3. Wingert, B.M., et al.: Optimal affinity ranking for automated virtual screening validated in prospective D3R grand challenges. J. Comput. Aided Mol. Des. 32(1), 287–297 (2018) 4. Chen, Y., Yu, Z., Liu, J.: Research on the application of computer information management system in project cost prediction. In: Journal of Physics: Conference Series, vol. 1744, no. 2, p. 022092 (5pp) (2021) 5. Hannings, A.N., Waldner, T.V., Mcewen, D.W., et al.: Assessment of emergency preparedness modules in introductory pharmacy practice experiences. Am. J. Pharm. Educ. 80(2), 23 (2016) 6. Motomizu, S., Hakim, L., et al.: Computer-controlled mobile chemical analysis systems and their application to multi-component analysis in environmental samples. Bunseki Kagaku 68(6), 357–372 (2019) 7. Yubao, Q., Weifeng, Q., Jiayi, L., et al.: Application of virtual simulation and computer technology in experiment and practical teaching. Revista de la Facultad de Ingenieria 32(2), 450–459 (2017) 8. Xie, B., Zhang, S.: Application of machine learning in computer vision and cancer bioinformatics. In: Journal of Physics: Conference Series, vol. 1648, no. 2, p. 022007 (7pp) (2020) 9. Peñaranda, C., Valero, S., Julian, V., Palanca, J.: Application of genetic algorithms and heuristic techniques for the identification and classification of the information used by a recipe recommender. In: Martínez-Álvarez, F., Troncoso, A., Quintián, H., Corchado, E. (eds.) HAIS 2016. LNCS (LNAI), vol. 9648, pp. 201–212. Springer, Cham (2016). https://doi.org/10. 1007/978-3-319-32034-2_17 10. Khoussainov, B., Liu, J.: Decision problems for finite automata over infinite algebraic structures. In: Han, Y.-S., Salomaa, K. (eds.) CIAA 2016. LNCS, vol. 9705, pp. 3–11. Springer, Cham (2016). https://doi.org/10.1007/978-3-319-40946-7_1
Design of Comprehensive NQI Demand Evaluation System Based on Multi-objective Evolutionary Algorithm Qi Duan1(B) , Chengcheng Li2 , and Fang Wu1 1 Quality Management Branch, China National Institute of Standardization,
Beijing 100191, China 2 CNPC Beijing Richfit Information Technology Co., Ltd., Beijing 100007, China
Abstract. Aiming at the problems such as insufficiency in NQI multi-element integration and fusion and focusing on the new demands towards difference elements in national quality infrastructure in the process of transformation, an NQI comprehensive fuzzy comprehension evaluation system is designed by multidimension analysis towards NQI research accomplishment. The analysis and evaluation towards NQI demand are also designed for field of engineering machinery manufacturing industry. Experimental results indicate that the designed system is of high efficiency in evaluation, stability in evaluation results, and feasibility. Keywords: NQI · NQI demand · Multi-objective evolutionary algorithm · System design · Fuzzy comprehension evaluation
1 Introduction

National quality infrastructure is a collective name for the quality mechanism framework for development and implementation by a nation of standards, measurement, certification and verification, and tests and experiments [1]. Since the concept of "national quality infrastructure" was proposed in 2015, the United Nations Conference on Trade and Development (UNCTAD), World Trade Organization (WTO), International Organization for Standardization (ISO), World Bank and other organizations have conducted a series of researches and reports on theories, elements and other aspects of NQI, and promoted it in practice [2, 3]. Over recent years, China has conducted a series of key projects on NQI generic technology research and application, and achieved a group of accomplishments in NQI theory and technology innovation. However, there is still a large gap with developed countries, especially in the theory of NQI multi-element accomplishment integration and fusion, coordinated service approaches and paths, and innovation in new frameworks and service models. This is a critical scientific issue for NQI studies in China that demands a prompt solution. In particular, no research on NQI industrial demand evaluation has been found. In the aspect of comprehensive evaluation approaches, the most common approaches at the current stage include evaluation system design approach based on evidential reasoning,
12
Q. Duan et al.
evaluation system design approach based on Hall three dimensions structure model, evaluation system design based on distance measure, etc. These are applied to issue evaluation in urban planning [4], harbor construction [5], talent [6, 7], and computing system design [8–10] demand. The common feature of these approaches is to develop theoretical models, confirm evaluation dimensions and indicators, and finally realize the goal of evaluation by integration of quantitative and qualitative approaches. These approaches all have the issues of large error, low accuracy, low evaluation efficiency, etc. Based on such facts, this study proposed fuzzy comprehension NQI demand evaluation system design approach based on multi-objective evolutionary algorithm. This approach constituted overall structure of the fuzzy comprehension NQI demand evaluation system with expert pool module, comprehensive evaluation module, database module and system maintenance module. Experimental results indicated that the designed system was of high efficiency in evaluation, and reasonability and feasibility in evaluation results.
2 System Hardware Design
The NQI demand comprehensive evaluation system based on the multi-objective evolutionary algorithm was developed in the visual programming language Delphi 5.0, with Windows 7 as the system platform. The system structure is shown in Fig. 1.

2.1 Database Module
In evaluating the NQI demand of an industry, the collection of evaluation data directly affects the final result and quality of the evaluation. The major data sources are: (1) retrieval of third-party data; (2) collection of public data; (3) scores given by experts.

2.2 Fuzzy Comprehensive Evaluation Module
Evaluation results are obtained by the NQI demand evaluation approach realized with fuzzy comprehensive evaluation. The first-stage fuzzy comprehensive evaluation has to be iterated several times in this module, so the system implements it as a subroutine that can be invoked repeatedly. The program framework is shown in Fig. 2.
Fig. 1. System structure of the NQI demand comprehensive evaluation system (expert pool, fuzzy comprehensive evaluation, database and system maintenance modules, fed by third-party data, public data and expert scores)

Fig. 2. Framework of the evaluation program (calculate the membership degrees r_ij, establish the evaluation matrix R = (r_ij)_{n×m}, compute B_i = A ∘ R_i, normalize B_i so that its components sum to 1, handle invalid evaluations, and output the evaluation result)
2.3 Expert Score Module
Because the available documents and data are limited, evaluating the current NQI status of an industry requires inviting experts in standards, measurement and qualification/accreditation from that industry to sort through and analyse the current situation of the industry under evaluation and then check the reasonableness of its NQI demand.

2.4 System Maintenance Module
To prevent unauthorized users from accessing the NQI demand fuzzy comprehensive evaluation while guaranteeing system and data security, the system is configured with password protection. Ordinary users are only authorized to browse; expert users are authorized to score and browse; only administrative staff with the password can perform maintenance and other operations on the NQI demand fuzzy comprehensive evaluation system.
3 NQI Demand Fuzzy Comprehensive Evaluation Based on Multi-objective Evolutionary Algorithm

3.1 Establishment of the NQI Demand Fuzzy Comprehensive Evaluation Model
The standards, measurement, testing and qualification applicable to the evaluation subject are used as factors to establish the evaluation indicator set. The system obtains the evaluation criteria according to the demand and features of the evaluation objective and establishes the indicator set for the NQI demand fuzzy comprehensive evaluation.

The evaluation factors are categorized into first-stage and second-stage indicator sets. The first-stage indicator set U contains four indicators: standards, measurement, testing and qualification:

U = \{U_1, U_2, U_3, U_4\}    (1)

Further categorization of the first-stage indicators yields the second-stage indicator sets:

U_1 = \{U_{11}, U_{12}, \ldots, U_{1n}\},\; U_2 = \{U_{21}, U_{22}, \ldots, U_{2m}\},\; U_3 = \{U_{31}, U_{32}, \ldots, U_{3p}\},\; U_4 = \{U_{41}, U_{42}, \ldots, U_{4q}\}    (2)

Based on the features of the different evaluation indicators and on expert advice, an evaluation remark set is established to describe the actual state of NQI demand, dividing it into four levels: "important/urgent", "important/not urgent", "urgent/not important" and "not important/not urgent".

The analytic hierarchy process is used to calculate the weights of the evaluation factors. Following the high-to-low principle, the evaluation indicators are arranged into layers and classified according to the layers, systems and features of the evaluation objectives. Indicators in the same layer are compared pairwise, and the comparison results are standardized according to importance to obtain the indicator weights. The steps are as follows:

(1) After pairwise comparison of the indicators in the same layer, the judgement matrix A is established:

A = \begin{bmatrix} a_{11} & a_{12} & \cdots & a_{1n} \\ a_{21} & a_{22} & \cdots & a_{2m} \\ a_{31} & a_{32} & \cdots & a_{3p} \\ a_{41} & a_{42} & \cdots & a_{4q} \end{bmatrix}    (3)

The judgement matrix is normalized by columns:

b_{ij} = \frac{a_{ij}}{\sum_{i=1}^{4} a_{ij}}    (4)

and then summed by rows:

v_i = \sum_{j=1}^{4} b_{ij}    (5)

Let w_i denote the weight of the i-th evaluation indicator; it is calculated as

w_i = \frac{v_i}{\sum_{i=1}^{4} v_i}    (6)

The weight vector matrix W is formed from the calculated weights.

(2) The consistency of higher-order judgement matrices is verified with a consistency check. The membership degrees of the indicators in the indicator set are then calculated from the collected evaluation data, and the fuzzy matrix is established:

U_{ij} = \begin{bmatrix} U_{11} & U_{12} & \cdots & U_{1n} \\ U_{21} & U_{22} & \cdots & U_{2m} \\ U_{31} & U_{32} & \cdots & U_{3p} \\ U_{41} & U_{42} & \cdots & U_{4q} \end{bmatrix} \cdot W    (7)

where U_{ij} denotes the membership degree of evaluation indicator U_i to evaluation remark U_j. The calculated membership degrees are then used to establish the fuzzy comprehensive evaluation model F:

F = \sum \big( w_i \times U_{ij} \big)\, δ\, U_j    (8)

where δ denotes the fuzzy operator.
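The weight calculation in Eqs. (3)–(6) and the fuzzy synthesis in Eqs. (7)–(8) can be illustrated with a short numerical sketch. The snippet below is not the Delphi implementation used in the system described above; it is a minimal Python illustration under common simplifying assumptions, with a hypothetical judgement matrix A, a hypothetical membership matrix R, and the weighted-average operator standing in for the fuzzy operator δ.

```python
import numpy as np

def ahp_weights(A: np.ndarray) -> np.ndarray:
    """Approximate AHP weights: normalize columns, sum rows, normalize (Eqs. 4-6)."""
    B = A / A.sum(axis=0)          # b_ij = a_ij / sum_i a_ij
    v = B.sum(axis=1)              # row sums v_i
    return v / v.sum()             # w_i

def fuzzy_evaluate(w: np.ndarray, R: np.ndarray) -> np.ndarray:
    """Weighted-average fuzzy synthesis: combine weights with membership degrees."""
    F = w @ R
    return F / F.sum()             # normalize so the components sum to 1

# Hypothetical judgement matrix for the four first-stage indicators
# (standards, measurement, testing, qualification).
A = np.array([[1.0, 2.0, 3.0, 2.0],
              [0.5, 1.0, 2.0, 1.0],
              [1/3, 0.5, 1.0, 0.5],
              [0.5, 1.0, 2.0, 1.0]])

# Hypothetical membership matrix: rows = indicators, columns = remark levels
# ("important/urgent", "important/not urgent", "urgent/not important", "not important/not urgent").
R = np.array([[0.5, 0.3, 0.1, 0.1],
              [0.4, 0.3, 0.2, 0.1],
              [0.2, 0.3, 0.3, 0.2],
              [0.3, 0.4, 0.2, 0.1]])

w = ahp_weights(A)
F = fuzzy_evaluate(w, R)
print("weights:", w.round(3))
print("evaluation vector:", F.round(3))   # largest component indicates the demand level
```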
3.2 Model Solution
The multi-objective evolutionary algorithm is adopted to solve the fuzzy comprehensive evaluation model of NQI demand and thus realize the evaluation. The Euclidean distance between every pair of weight vectors is calculated, and according to the results the T closest vectors are selected as the neighbourhood of each weight vector. Let

B(i) = \{i_1, i_2, \ldots, i_T\}, \quad i = 1, 2, \ldots, N

where λ_{i_1}, λ_{i_2}, \ldots, λ_{i_T} are the T weight vectors closest to the uniformly distributed weight vector λ_i. After the population x_1, x_2, \ldots, x_N is initialized, the objective values F(x_i) are computed and associated with the corresponding neighbourhoods B(i). The population P is then divided into three subpopulations I_A, I_B and I_C, containing ζ_1, ζ_2 and ζ_3 individuals respectively, with

ζ_1 = ζ_2 = ζ_3 = \frac{N}{3}    (9)

at the initial stage.

The algorithm then enters dynamic cooperative differential evolution with dynamic subpopulations:

(1) Generate a new offspring individual y_i.
(2) Update the reference point.
(3) Calculate the evolution success rate of each strategy:

τ_i = \frac{k_i / ζ_i}{\sum_{i=1}^{3} (k_i / ζ_i)}    (10)

(4) Recalculate the subpopulation sizes:

ζ_i = N \times τ_i, \quad i = 1, 2, 3    (11)

and update ζ_i accordingly.
(5) When the termination condition G > G_max is met, the algorithm stops, the result of the NQI demand fuzzy comprehensive evaluation model is output, and the evaluation is completed.
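Steps (3) and (4) of the solution process, reallocating individuals among the three subpopulations according to each strategy's evolution success rate (Eqs. (10)–(11)), can be sketched as follows. This is an illustrative fragment only: the success counts k_i and the population size N are assumed example inputs, not values reported in this study.

```python
from typing import List

def resize_subpopulations(k: List[int], zeta: List[int], N: int) -> List[int]:
    """Eq. (10): tau_i = (k_i/zeta_i) / sum_j(k_j/zeta_j);
    Eq. (11): zeta_i = N * tau_i (rounded, kept at least 1)."""
    rates = [ki / zi for ki, zi in zip(k, zeta)]
    total = sum(rates) or 1.0              # guard against all strategies failing
    tau = [r / total for r in rates]
    sizes = [max(1, round(N * t)) for t in tau]
    sizes[0] += N - sum(sizes)             # keep the total population size exactly N
    return sizes

# Initial split, Eq. (9): zeta_1 = zeta_2 = zeta_3 = N / 3
N = 90
zeta = [N // 3] * 3
k = [12, 5, 8]                             # hypothetical success counts per strategy
print(resize_subpopulations(k, zeta, N))   # -> [43, 18, 29]
```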
4 Results and Analysis
To verify the effectiveness of the proposed NQI demand fuzzy comprehensive evaluation system design approach based on the multi-objective evolutionary algorithm, a test was conducted on a machine running Windows 10 with an NVIDIA GeForce GT740 GPU. Three approaches, namely the proposed approach, the evaluation system design approach based on evidential reasoning, and the evaluation system design approach based on the Hall three-dimensional structure model, were tested to compare their evaluation time cost. The results are shown in Table 1.

Table 1 shows that the proposed approach costs less time than the other two approaches when evaluating NQI demand. Because the proposed approach implements the fuzzy comprehensive evaluation as a subprogram that the system can invoke repeatedly, evaluation time is saved and evaluation efficiency is improved.

To further analyse the statistical performance of the designed system, an evaluation of the NQI demand of machinery manufacturing enterprises was conducted. The major steps are as follows:
Table 1. Evaluation time cost of the three approaches

Time of iteration | Algorithm of this paper | Evidential reasoning system design approach | Hall three-dimensional structure model system design approach
1 | 1.75 | 2.32 | 2.23
2 | 1.83 | 2.02 | 2.26
3 | 1.54 | 3.62 | 3.35
4 | 1.27 | 3.44 | 3.03
5 | 1.52 | 3.28 | 3.54
6 | 1.33 | 3.72 | 3.56
(1) Select the enterprises to be evaluated; (2) the system retrieves materials about the enterprises and their industry from the database and updates the corresponding data; (3) the evolution success rate of each strategy is calculated; (4) the relevant evaluation parameters and indicators are obtained; (5) the evaluation results are produced.
5 Summary
NQI demand evaluation is an effective tool for understanding the actual needs of the national quality infrastructure and the bottleneck issues restricting industry development, and it is an important reference for quality development planning and policies. This study focuses on NQI demand evaluation and designs an efficient and stable NQI demand fuzzy comprehensive evaluation system based on a multi-objective evolutionary algorithm. The system can complete the evaluation of the NQI demand of a given industry in a relatively short time, and it has been verified experimentally for the machinery manufacturing industry, which lays the foundation for NQI demand analysis in other industries and for wider adoption of the system.
Acknowledgements. This article was funded by the National Key Research and Development Program 2016YFF0204203.
References
1. European Commission: A strategic vision for European standards: moving forward to enhance and accelerate the sustainable growth of the European economy by 2020. http://eur-lex.europa.eu/legal-content/EN/TXT/PDF/?URI=CELEX:52011DC0311&from=EN
2. Lundqvist, B.: Standardization Under EU Competition Rules and US Antitrust Laws: The Rise and Limits of Self-regulation, pp. 149–183. Edward Elgar Publishing (2014)
3. Taylor, D.A.J.: Is ISO 14001 standardization in tune with sustainable development: symphony or cacophony? J. Envtl. L. & Litig. 13, 509 (1998)
4. Ren, H., Du, Y., Chen, Y., et al.: Research on the three-dimensional evaluation model of ecological city planning scheme from the perspective of distance measurement. Sci. Technol. Progress Policy 33(16), 81–85 (2016) 5. Liang, C., Yao, Y., Xu, D., et al.: Modeling selection of Arctic call port based on fuzzy comprehensive evaluation method. J. Central China Normal Univ. Nat. Sci. Ed. 51(6), 817–824 (2017) 6. Liu, Q., Zheng, H.: Research on the evaluation of the effect of flexible flow of university talents based on fuzzy comprehensive evaluation. J. Weifang Eng. Vocat. Coll. (6), 41–47 (2019) 7. Zhao, X., Zhou, Y.: Evaluation of MOOC teaching quality of college physics based on fuzzy comprehensive evaluation method. Res. High. Eng. Educ. (1), 190–195 (2019) 8. Bao, T., Wang, Y., Meng, L., et al.: Design and implementation of multi-index evaluation system based on evidence reasoning. Comput. Eng. Sci. 38(6), 1269–1274 (2016) 9. Liu, H., Li, S.: Application of fuzzy comprehensive evaluation for smartphone evaluation modeling. Comput. Eng. Appl. 52(1), 224–228 (2016) 10. Xu, Y., Zhang, H., Chen, L.: A fuzzy comprehensive UGC evaluation method based on sentiment analysis—taking Taobao commodity text review UGC as an example. Inf. Theory Pract. 39(6), 64–69 (2016)
Efficiency Analysis of Hospitals Based on Data Envelopment Analysis Method in the Context of Big Data
Boyu Lu1, Jing Wang1, Lin Song1, Yongyan Wang1, and Jian Zhang2(B)
1 Graduate School, Tianjin University of Traditional Chinese Medicine, Tianjin, China
2 School of Management, Tianjin University of Traditional Chinese Medicine, Tianjin, China
Abstract. With the rapid development of information technology, how to analyse big data effectively has become a hot research topic in many industries. Data envelopment analysis (DEA) can effectively handle the bias caused by functional relationships between variables. This study takes the various types of hospitals in Fuzhou as its research object and evaluates their operational efficiency, applying the DEA method to the input-output data of these hospitals publicly released by the Fuzhou Bureau of Statistics. The results show that the scale efficiency of hospitals in Fuzhou is on an overall decreasing trend and that the blind expansion of general hospitals has reduced their efficiency. The study also shows that DEA is an effective way to process big data.
Keywords: Big data · Data envelopment analysis · Efficiency analysis
1 Introduction
With the development of information technology, data has spread to every corner of society, and big data has become a research hotspot in many industries. Because of its volume, variety, velocity and value, handling big data is a new challenge. Big data sets often contain redundant data, and traditional processing methods do not consider the functional relationships between variables, which can have a considerable negative impact on data analysis. Big data sets contain not only the data values themselves but also the functional relationships among them, and ignoring those relationships may bias the analysis results. Data envelopment analysis (DEA) is a non-parametric method for assessing efficiency proposed by Charnes et al. [1] and has relative advantages in evaluating multiple inputs and outputs. Zhou et al. have begun to use DEA methods to handle big data [2], and Wang et al. evaluated the technological innovation efficiency of 21 big data companies using the DEA-BCC model [3]. More than 10,000 studies using the method have been published since its introduction,
assessing the performance of many types of entities and productive activities, including the health care sector [4], and the method has been widely used in the health care field [5]. DEA can effectively deal with the bias caused by functional relationships between variables: it evaluates the efficiency of decision-making units (DMUs) without presupposing a functional relationship between variables and without setting weights in advance. The weights obtained by DEA help reveal the functional relationships between variables, eliminate redundant values, and reduce the quantity of data without degrading its quality, making DEA an effective way to handle large amounts of data. In previous studies, most scholars only examined efficiency among similar hospitals. In this paper, we take 12 types of hospitals in Fuzhou as the research object and use the DEA method to process the data, which allows a deeper study of hospital efficiency by comparing efficiency across hospital types.
2 Data and Model
2.1 Data Source
The database is established by querying the relevant input and output data of various hospitals in the statistical yearbook publicly released by the Fuzhou Bureau of Statistics. These hospitals are General Hospital (H1), Chinese Medicine Hospital (H2), Chinese and Western Medicine Hospital (H3), Infectious Disease Hospital (H4), Psychiatric Hospital (H5), Tuberculosis Hospital (H6), Oral Hospital (H7), Ophthalmology Hospital (H8), Children's Hospital (H9), Orthopaedic Hospital (H10), Beauty Hospital (H11), and Cancer Hospital (H12).

2.2 Selection of Indicators
When selecting input and output indicators for hospitals, it is necessary to take into account both scientific soundness and the availability and operability of data. In China, Na Li et al. used the number of actual beds, the number of working staff, and the number of medical equipment units over 10,000 yuan as input indicators, and the number of consultations and total discharges as output indicators [6]. Suyan Li et al. used the number of actual beds, fixed assets, and health technicians as input indicators, and the number of outpatient visits, discharges, and total revenue as output indicators [7]. Xin Sun et al. used the number of health institutions, beds, and health care inputs as input indicators, and the number of discharges and bed days as output indicators [8]. Wei Wang et al. selected the number of health technicians and the actual number of beds as input indicators, and the number of outpatient and emergency visits and inpatient surgical visits as output indicators [9]. In this paper, the number of doctors and the number of beds are used as input indicators, and the number of consultations and discharges are used as output indicators, taking into account the relevant data publicly released by the Fuzhou Bureau of Statistics and the requirements of the DEA method regarding the number of indicators and the number of research subjects.
2.3 Selection of Model
In this paper, DEA-Solver-LV software is used to conduct the data analysis. Charnes et al. proposed the constant returns to scale (CRS) model for measuring technical efficiency (TE); on this basis, Banker et al. proposed the variable returns to scale (VRS) model [10], which, combined with the CRS model, can measure pure technical efficiency (PTE) and scale efficiency (SE) and determine the returns-to-scale state of the object under study. Technical efficiency measures the production efficiency of each research object under the specified conditions and gives an overall evaluation; pure technical efficiency reflects the efficiency attributable to differences in management and technical levels; scale efficiency reflects the gap between the current scale and the optimal scale. In this paper, we first apply the CRS model to analyse technical efficiency and then apply the VRS model to further analyse pure technical efficiency and scale efficiency, so that the results are more objective and convincing.
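To make the efficiency measures concrete, the sketch below solves the input-oriented envelopment form of the CRS (CCR) and VRS (BCC) models with scipy. It is not the DEA-Solver-LV procedure used in this study, and the input-output data in the example are hypothetical; it merely illustrates how technical efficiency (CRS), pure technical efficiency (VRS) and scale efficiency SE = TE/PTE can be obtained for each DMU.

```python
import numpy as np
from scipy.optimize import linprog

def dea_input_oriented(X, Y, o, vrs=False):
    """Input-oriented envelopment DEA for DMU o.
    X: (m inputs x n DMUs), Y: (s outputs x n DMUs). Returns the efficiency theta."""
    m, n = X.shape
    s = Y.shape[0]
    c = np.zeros(n + 1)
    c[0] = 1.0                                   # minimize theta
    # inputs:  sum_j lambda_j * x_ij - theta * x_io <= 0
    A_in = np.hstack([-X[:, [o]], X])
    # outputs: -sum_j lambda_j * y_rj <= -y_ro
    A_out = np.hstack([np.zeros((s, 1)), -Y])
    A_ub = np.vstack([A_in, A_out])
    b_ub = np.concatenate([np.zeros(m), -Y[:, o]])
    A_eq, b_eq = None, None
    if vrs:                                      # BCC adds sum_j lambda_j = 1
        A_eq = np.hstack([np.zeros((1, 1)), np.ones((1, n))])
        b_eq = [1.0]
    bounds = [(0, None)] + [(0, None)] * n       # theta >= 0, lambdas >= 0
    res = linprog(c, A_ub=A_ub, b_ub=b_ub, A_eq=A_eq, b_eq=b_eq, bounds=bounds)
    return res.x[0]

# Hypothetical example: 2 inputs (doctors, beds), 2 outputs (visits, discharges), 4 DMUs
X = np.array([[120.0, 80, 60, 200], [300, 150, 100, 500]])
Y = np.array([[90.0, 70, 55, 110], [40, 30, 25, 60]])
for o in range(X.shape[1]):
    te = dea_input_oriented(X, Y, o, vrs=False)   # CRS: technical efficiency
    pte = dea_input_oriented(X, Y, o, vrs=True)   # VRS: pure technical efficiency
    print(f"DMU{o+1}: TE={te:.3f}  PTE={pte:.3f}  SE={te/pte:.3f}")
```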
3 Results 3.1 Result of DEA Analysis
Table 1. Comprehensive technical efficiency of 12 types of hospitals in Fuzhou, 2014–2017

DMU | 2014 | 2015 | 2016 | 2017 | Average | Rank
H1 | 0.624 | 0.539 | 0.521 | 0.567 | 0.563 | 10
H2 | 0.535 | 0.546 | 0.602 | 0.727 | 0.603 | 8
H3 | 0.608 | 0.552 | 0.534 | 0.606 | 0.575 | 9
H4 | 0.570 | 0.515 | 0.543 | 0.557 | 0.546 | 11
H5 | 0.348 | 0.333 | 0.358 | 0.480 | 0.380 | 12
H6 | 0.738 | 0.725 | 0.733 | 0.772 | 0.742 | 5
H7 | 1.000 | 1.000 | 1.000 | 1.000 | 1.000 | 1
H8 | 1.000 | 1.000 | 1.000 | 1.000 | 1.000 | 1
H9 | 1.000 | 1.000 | 1.000 | 1.000 | 1.000 | 1
H10 | 0.750 | 0.573 | 1.000 | 0.524 | 0.712 | 6
H11 | 0.824 | 0.514 | 0.539 | 0.810 | 0.672 | 7
H12 | 0.800 | 0.740 | 0.745 | 0.782 | 0.767 | 4
Table 1 shows that from 2014 to 2017, the hospitals with the highest average comprehensive technical efficiency in Fuzhou were the Oral Hospital, the Ophthalmology Hospital and the Children's Hospital, while the three lowest were the General Hospital, the Infectious Disease Hospital and the Psychiatric Hospital.
Table 2. Pure technical efficiency of 12 types of hospitals in Fuzhou, 2014–2017

DMU | 2014 | 2015 | 2016 | 2017 | Average | Rank
H1 | 1.000 | 1.000 | 1.000 | 1.000 | 1.000 | 1
H2 | 1.000 | 1.000 | 1.000 | 1.000 | 1.000 | 1
H3 | 0.908 | 0.890 | 0.883 | 0.894 | 0.894 | 8
H4 | 0.572 | 0.523 | 0.547 | 0.629 | 0.568 | 11
H5 | 0.358 | 0.344 | 0.376 | 0.572 | 0.412 | 12
H6 | 0.757 | 0.733 | 0.737 | 0.860 | 0.772 | 9
H7 | 1.000 | 1.000 | 1.000 | 1.000 | 1.000 | 1
H8 | 1.000 | 1.000 | 1.000 | 1.000 | 1.000 | 1
H9 | 1.000 | 1.000 | 1.000 | 1.000 | 1.000 | 1
H10 | 1.000 | 1.000 | 1.000 | 1.000 | 1.000 | 1
H11 | 0.897 | 0.647 | 0.544 | 1.000 | 0.771 | 10
H12 | 1.000 | 1.000 | 1.000 | 1.000 | 1.000 | 1
Table 2 shows that from 2014 to 2017, the hospitals with the highest average pure technical efficiency in Fuzhou were the General Hospital, the Chinese Medicine Hospital, the Oral Hospital, the Ophthalmology Hospital, the Children's Hospital, the Orthopaedic Hospital and the Cancer Hospital, while the three lowest were the Beauty Hospital, the Infectious Disease Hospital and the Psychiatric Hospital.

Table 3. Scale efficiency of 12 types of hospitals in Fuzhou, 2014–2017

DMU | 2014 | 2015 | 2016 | 2017 | Average | Rank
H1 | 0.624 | 0.539 | 0.521 | 0.567 | 0.563 | 12
H2 | 0.535 | 0.546 | 0.602 | 0.727 | 0.603 | 11
H3 | 0.669 | 0.620 | 0.604 | 0.677 | 0.643 | 10
H4 | 0.995 | 0.984 | 0.992 | 0.886 | 0.964 | 4
H5 | 0.973 | 0.968 | 0.952 | 0.838 | 0.933 | 6
H6 | 0.975 | 0.988 | 0.994 | 0.897 | 0.963 | 5
H7 | 1.000 | 1.000 | 1.000 | 1.000 | 1.000 | 1
H8 | 1.000 | 1.000 | 1.000 | 1.000 | 1.000 | 1
H9 | 1.000 | 1.000 | 1.000 | 1.000 | 1.000 | 1
H10 | 0.750 | 0.573 | 1.000 | 0.524 | 0.712 | 9
H11 | 0.919 | 0.796 | 0.990 | 0.809 | 0.879 | 7
H12 | 0.800 | 0.740 | 0.746 | 0.782 | 0.767 | 8
Table 3 shows that from 2014 to 2017, the hospitals with the highest average scale efficiency in Fuzhou were the Oral Hospital, the Ophthalmology Hospital and the Children's Hospital, while the three lowest were the Chinese and Western Medicine Hospital, the Chinese Medicine Hospital and the General Hospital.

3.2 Returns to Scale Status
Table 4. Returns to scale status of 12 types of hospitals in Fuzhou, 2014–2017

DMU | 2014 | 2015 | 2016 | 2017 | Number of "IRS" | Number of "DRS" | Number of "–"
H1 | DRS | DRS | DRS | DRS | 0 | 4 | 0
H2 | DRS | DRS | DRS | DRS | 0 | 4 | 0
H3 | DRS | DRS | DRS | DRS | 0 | 4 | 0
H4 | IRS | IRS | – | IRS | 3 | 0 | 1
H5 | IRS | IRS | – | IRS | 3 | 0 | 1
H6 | DRS | IRS | – | IRS | 2 | 1 | 1
H7 | – | – | – | – | 0 | 0 | 4
H8 | – | – | – | – | 0 | 0 | 4
H9 | – | – | – | – | 0 | 0 | 4
H10 | IRS | IRS | – | IRS | 3 | 0 | 1
H11 | IRS | IRS | – | IRS | 3 | 0 | 1
H12 | DRS | DRS | DRS | DRS | 0 | 4 | 0

"IRS": increasing returns to scale; "DRS": decreasing returns to scale; "–": no change in returns to scale.
Table 4 shows that from 2014 to 2017, the hospitals with the most years of decreasing returns to scale in Fuzhou were the General Hospital, the Chinese Medicine Hospital, the Chinese and Western Medicine Hospital and the Cancer Hospital (four years each), with the Tuberculosis Hospital showing one such year and the other types showing none. The hospitals with the most years of constant returns to scale were the Oral Hospital, the Ophthalmology Hospital and the Children's Hospital (four years each). The hospitals with the most years of increasing returns to scale were the Infectious Disease Hospital, the Psychiatric Hospital, the Orthopaedic Hospital and the Beauty Hospital (three years each), followed by the Tuberculosis Hospital (two years); the other types showed no increasing returns to scale.
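The IRS/DRS/"–" labels summarized in Table 4 can be derived from the DEA solutions themselves. One common rule, shown here only as an illustrative sketch rather than the exact rule applied by DEA-Solver-LV, inspects the sum of the optimal intensity weights Σλ in the CRS envelopment model: a value of 1 indicates constant returns to scale, a value below 1 increasing returns, and a value above 1 decreasing returns.

```python
def returns_to_scale(lambda_sum: float, tol: float = 1e-6) -> str:
    """Classify returns to scale from the sum of optimal CRS intensity weights."""
    if abs(lambda_sum - 1.0) < tol:
        return "-"                      # constant returns to scale (no change)
    return "IRS" if lambda_sum < 1.0 else "DRS"

# Hypothetical lambda sums for two DMUs
print(returns_to_scale(0.82))   # IRS: operating below the optimal scale
print(returns_to_scale(1.37))   # DRS: operating above the optimal scale
```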
4 Discussion
4.1 Overall Downward Trend in Scale Efficiency
A longitudinal comparison of the scale efficiency of the 12 types of hospitals from 2014 to 2017 shows that, except for the Oral Hospital, the Ophthalmology Hospital and the Children's Hospital, whose scale efficiency remained unchanged, the other nine types of hospitals showed an overall decreasing trend. This indicates that most hospitals in Fuzhou have not reached their optimal scale and are operating at an inappropriate scale.

4.2 General Hospitals Are Not Very Efficient
In recent years, more and more people have chosen to go to large general hospitals regardless of the severity of their illness, which has led general hospitals to expand blindly and become less efficient. The data analysis in this paper, using the number of doctors and beds as input indicators and the number of consultations and discharges as output indicators, also finds that the general hospitals invest far more human and material resources than other types of hospitals: their number of doctors increased by 1,310 from 2014 to 2017. Although their numbers of consultations and discharges are also higher than those of other hospital types, they rank low in scale efficiency and comprehensive technical efficiency, which indicates that general hospitals invest too much and need to reduce their inputs appropriately to improve efficiency.

4.3 Uneven Distribution of Health Resources
The analysis of pure technical efficiency shows that more than half of the 12 types of hospitals have high pure technical efficiency, while the remaining types do not, indicating that their management and technical levels still need to be improved. It also shows that human and technical resources are unevenly distributed among the hospital types in Fuzhou. Excellent talent and good infrastructure are essential for the development of health care, and the less efficient hospital types should improve their salaries and hardware facilities in order to attract and retain talent and improve their input-output efficiency.
5 Conclusion
This paper proposes using the DEA method to analyse and process big data, which reduces learning time and improves model prediction accuracy when handling big data. Both the theoretical and the empirical analysis show that the DEA method can maintain prediction accuracy while greatly shortening data processing time and improving the efficiency of big data processing. However, DEA also has limitations: it can only measure the relative efficiency of DMUs. There is still a long way to go in solving the big data problem, and we hope this paper can serve as a useful reference.
References 1. Charnes, A., Cooper, W.W., Rhodes, E.: Measuring the efficiency of decision making units. Eur. J. Oper. Res. 2(6), 429–444 (1978) 2. Zhou, X., Chen, X.: A Prediction method based on big data fusion DEA and RBF. Stat. Decis. Making 36(22), 36–39 (2020). (in Chinese) 3. Wang, X., Zang, M.: Evaluation on technology innovation efficiency of big data enterprises based on DEA. J. Risk Anal. Crisis Response 9(3), 145–148 (2019) 4. Emrouznejad, A., Yang, G.: A survey and analysis of the first 40 years of scholarly literature in DEA: 1978–2016. Socioecon. Plann. Sci. 61, 4–8 (2018) 5. Mitropoulos, P., Mitropoulos, I., Sissouras, A.: Managing for efficiency in health care: the case of Greek public hospitals. Eur. J. Health Econ. 14(6), 929–938 (2013) 6. Li, N., Li, M., Yang, W.: Evaluation of operational efficiency of tertiary general hospitals in Shanxi Province based on DEA model. China Hosp. 24(06), 21–23 (2020). (in Chinese) 7. Li, S., Shan, L., An, R.: DEA-based efficiency evaluation of tertiary hospitals in Qingdao. Mod. Trade Ind. 41(08), 93–95 (2020). (in Chinese) 8. Sun, X., Zou, S., Ding, H.: Evaluation of input and output efficiency of health and health care in Anhui Province based on DEA perspective. Anhui Med. 41(09), 1095–1099 (2020). (in Chinese) 9. Wang, W., Pan, J.: A study on the efficiency of 14 divisional hospitals in Xinjiang production and construction corps based on DEA model. China Health Econ. 32(07), 78–80 (2013). (in Chinese) 10. Banker, R.D., Charnes, A., Cooper, W.W.: Some models for estimating technical and scale inefficiencies in data envelopment analysis. Manag. Sci. 30(9), 1078–1092 (1984)
Design and Research of Visual Data Analysis Technology in the Study Abroad Career Information System
Hehong Xiu(B) and Shili Zhou
College of International Education, Bohai University, Jinzhou, Liaoning, China
Abstract. Big data analysis involves two different problems: how to store large amounts of data and how to process large volumes of heterogeneous data in a short time, that is, the problems of big data storage and big data processing. Career planning for international students in China requires in-depth exploration of the value of big data; content analysis and computation over big data, deep learning and knowledge computing form the basis of big data analysis, while visualization is both a key technology of data analysis and the means of presenting its results. Based on research into big data analysis technology, this paper puts forward countermeasures for the career planning of international students in China and proposes to broaden career planning channels through an information system and to raise the informatization level of career planning.
Keywords: Big data · Analysis technology · Career planning · Research
1 Introduction
International students in China are non-Chinese students who are registered at higher education institutions in China and hold ordinary passports for the purpose of study, covering both academic and non-academic education. A career is the developmental process of the occupations, duties and positions that a person engages in throughout life. Scientific career planning can promote the growth of students studying in China and is also a reflection of the quality of higher education institutions. Successful international students will return to their home countries and contribute to their development; this will not only attract more international students to China for study and exchange, but is also conducive to the integration and convergence of world cultures and has important practical significance for the development of international higher education [1]. At present, career planning education for Chinese students in colleges and universities has matured, but career education for international students in China has only just started and lacks corresponding policy support. Some international students have not considered what job they will pursue after graduation and plan to take a
step-by-step, wait-and-see approach; some want to stay in China for employment but do not know how to enter the Chinese workplace; and some have taken many detours when starting a business. All of this affects the quality of education for international students in China. Through the research in this article, we address key problems in the education of international students in China, help them find employment and start businesses, improve their international competitiveness, promote their all-round development, and help them realize the value of their lives.
2 Choice Theory of Career Planning
John Holland is a professor of psychology at Johns Hopkins University and a well-known career guidance expert in the United States. His vocational interest model distinguishes six types: Realistic (R), Investigative (I), Artistic (A), Social (S), Enterprising (E) and Conventional (C); adjacent types in the model show high consistency, alternate types moderate consistency, and opposite types are distinct. The model is shown in Fig. 1.

Fig. 1. John Holland's theoretical model of vocational interest
3 Development Theory of Career Planning
Career planning is a dynamic process. Besides the match between personality and career, the different stages of career development also strongly influence career choice. A person's vocational psychology is always developing, so the matching of personality and occupation cannot be completed in one step. Process-oriented theory studies people's vocational behaviour and development stages from a dynamic perspective. Donald E. Super is a representative career management
scientist in the United States. His career development stage theory is a longitudinal career guidance theory focusing on personal career orientation and the career choice process itself. Super's theory covers the complete development process of a lifetime and divides career development into five stages, as shown in Table 1.

3.1 Growth Stage
In this stage people begin to think about the future, gradually gain a certain ability to control their lives, build the basis for competent work and, towards the end of the stage, become increasingly aware of and concerned about the future. Through school learning and social activities they come to understand themselves, understand the meaning of the world and of work, and initially establish a good attitude towards life. Main tasks: identify and establish the self-concept, let vocational curiosity play a leading role, and gradually and consciously cultivate vocational ability.

Table 1. Career development stages

Career stage | Youth (14–25 years) | Adult (25–45 years) | Mid-life (45–65 years) | Agedness (66+ years)
Growth stage | Develop a suitable self-concept | Learn to build relationships with others | Accept your own limitations | Develop non-professional roles
Exploratory stage | Learn from many opportunities | Find a desirable job opportunity | Identify new problems and solve them | Find a suitable form of retirement
Establishment stage | Start in the selected field | Devote yourself to the selected job | Develop new coping skills | Engage in unfinished dreams
Maintenance stage | Confirm the choice made | Devote yourself to maintaining work stability | Consolidate your position against competition | Maintain enjoyment of work
Exit stage | Reduce leisure time | Reduce physical activity time | Focus on necessary activities | Reduce working hours
3.2 Exploratory Stage
This stage centres on vocational identification. Individuals form a preliminary range of career choices and prepare for them through education or practice, deepening their understanding of occupations and work and translating learning achievements and practical experience into an initial vocational orientation. Main tasks: self-examination, role identification and career exploration through school study, followed by job selection and initial employment.
3.3 Establishment Stage
Individuals begin to determine their position in life, take on a greater role as family caregivers, work steadily in the face of challenges, and learn to balance family and career. If a new goal emerges, they need to re-evaluate themselves. Main tasks: secure a suitable field of work and seek development. This stage is the core of most people's career cycles.

3.4 Maintenance Stage
Individuals have found a suitable field and strive to maintain their achievements in it. The main changes are in positions, jobs and employers rather than in career. Individuals should consolidate their existing positions and strive for improvement. Main tasks: develop new skills, maintain achievements and social status, keep a harmonious relationship between family and work, and identify successors.

3.5 Exit Stage
The focus of this stage gradually shifts from work to family and leisure, arranging for retirement and beginning retirement life, and seeking new sources of spiritual satisfaction. Main tasks: gradually withdraw from and end the career, develop new social roles, reduce rights and responsibilities, and adapt to life after retirement.
4 Decision Theory of Career Planning
Career decision-making is an individual's choice of the occupation to pursue; it integrates self-knowledge with judgements about external factors such as education and occupations, and it is the individual's response when facing a career decision. Harren classifies most people's career decision styles into three categories. The first is rational: advocating logical analysis, systematically collecting sufficient information about oneself and the environment, weighing the pros and cons of each option, and making the best decision. The second is intuitive: deciding directly on the basis of one's feelings or emotional reactions in a specific situation; such decisions are rather impulsive and rarely involve systematic information gathering. The third is dependent: waiting for or relying on others to decide, remaining relatively passive, attending to the opinions and expectations of others, and using social evaluation and social norms as the decision criteria. The CASVE cycle is a career decision-making theory comprising five stages: communication, analysis, synthesis, valuing (evaluation) and execution, as shown in Fig. 2.

4.1 Communication
This stage includes internal and external communication, through which the gap between the ideal and reality is recognized. Internal communication includes emotional and physical signals; external communication includes parents' inquiries about career plans and colleagues' and friends' evaluations of careers.
Fig. 2. CASVE loop (communication: identify existing problems; analysis: consider all possibilities; synthesis: form possible choices; value: sort the options; execution: take action to solve the problems)
4.2 Analysis
Through thinking, observation and research, knowledge of one's interests, abilities, values and personality, together with knowledge of the environment, is analysed to further understand the gap between the actual and the ideal state. The analysis stage also needs to connect these factors with the relevant knowledge.

4.3 Synthesis
The information from the previous stage is synthesized and processed to formulate an action plan for eliminating the gap. Based on the information obtained in the analysis stage, the selection list is first expanded and then gradually narrowed, until the three to five most likely options are determined.

4.4 Value
The occupations obtained in the synthesis stage are evaluated individually, assessing the likelihood of obtaining each occupation and the influence of the choice on oneself and others, so that the options can be ranked: the best option first, the next best second, and so on.

4.5 Execution
This is the last part of the CASVE cycle and the stage in which the choice is implemented, turning thinking into action. The previous steps have determined the most suitable occupation; to achieve career success, these ideas must now be put into practice.

4.6 Repeating the Loop
The CASVE cycle is a continually repeating process. After the execution stage, the career decision-maker returns to the communication stage to determine whether the chosen option is the best one and whether it effectively eliminates the gap between the ideal and reality.
5 Countermeasures on Career Planning for International Students in China
Based on the choice, development and decision theories of career planning, in view of the situation of career planning for international students in China, and combined with the author's teaching experience and the relevant literature, this article proposes the following strategies.

5.1 Strengthen Construction of the Career Planning Curriculum System
Career planning courses for international students in China should be incorporated into a unified teaching plan, with richer forms and content and greater pertinence and effectiveness [2]. Teaching content should be designed according to students' grade levels. For lower grades, the focus is on adaptation training and planning-awareness education: establishing professional ideals, learning to use various vocational test scales, introducing relevant counselling websites, popularizing occupational classification methods, and participating widely in social practice, so that students apply basic knowledge in practice, experience the value of career planning, and tentatively set career goals [3]. For senior students, the focus is on explaining employment policies, helping to analyse personal characteristics in light of the employment situation, guiding the formulation of development plans, providing employment services and information, and offering training in job-seeking skills. The teaching plan should be adjusted in time according to the actual needs of the talent market, and the curriculum system and content should emphasize pertinence and practicality and keep pace with market demand.

5.2 Improve Self-awareness and the Ability to Analyse the External Environment
International students in China should fully understand their own abilities, know themselves correctly, figure out what they do and do not want to do, and focus on analysing their own conditions, especially their personality and interests. Targeted training should be combined with professional ideals to develop a professional personality compatible with those ideals. Self-awareness is an important basis for career planning: students need to discover their own shortcomings, be willing to listen to others' opinions, and position themselves correctly [4]. The external environment is where professional practice takes place; it provides the workplace and the opportunities for further development. When making career plans, international students must fully consider the external environment, how it will change over the next few years, and its favourable and unfavourable factors, paying particular attention to the organizational environment, the development strategy of the employer, its human resource needs and future promotion space. International students need to keep improving their self-knowledge and environmental analysis ability, find the right direction for career development, and formulate development plans.
5.3 Build a High-Quality Team of Career Planning Teachers
Career planning education guides students towards all-round development with the goal of career success; it is a highly professional teaching and practical activity. Teachers engaged in career planning education for international students in China should start from the students' professional awareness and employment concepts and provide guidance throughout the whole process. This places higher demands on these teachers: they need a strong sense of responsibility, a willingness to dedicate themselves, and genuine care for international students, as well as a high professional level, a systematic command of career planning theory, a complete knowledge system, and relevant knowledge of education and sociology. Teachers can hold lectures, learn from and supervise one another, and progress together [5]. They must also be good at making accurate judgements in specific situations and possess a strong sense of political responsibility, professionalism and professional ethics. Schools should encourage these teachers to pursue further study and career-planner qualifications and to continuously raise their academic qualifications and professional knowledge.

5.4 Pay Attention to Cultivating Practical Ability in Career Planning
To comprehensively enhance the employment competitiveness of international students in China, relying solely on career planning courses is far from enough; social practice oriented towards career goals is also required. At the university stage, students still focus on learning theoretical knowledge on campus, their spare time revolves around campus life, and they have few opportunities for contact with society and work [6]. Owing to factors such as language and cultural differences, international students in China have even fewer opportunities to engage with Chinese society. Long-term school-enterprise cooperation should therefore be established to help international students experience the working environment in advance and to provide platforms and channels for them to engage with society through internships and practical training. On the one hand, international students can use these opportunities to accumulate work experience and adapt to social life and the pace of work in advance; on the other hand, they can judge whether the target occupation really suits them and identify their shortcomings. A wide range of social practice thus supplements and promotes career planning education for international students in China.

5.5 Open Up New Channels for Career Planning Education
Countries with well-developed career planning education have complete employment service systems, advanced career planning education networks, and multiple channels for obtaining career planning knowledge and information.
Career planning education in China still lags behind, so new channels must be explored to improve its effectiveness. Within the school, a complete counselling system should be established, covering both academic and career counselling, to provide international students with a full range of services such as major selection, career selection and analysis of their own conditions; information service resources should be improved, with a dedicated career counselling resource room and counselling books and materials to facilitate learning and problem solving; and the online information service system should be improved so that students can contact companies or employers through the Internet and keep abreast of career development trends. Outside the school, contact with employers should be strengthened: enterprise leaders can be invited to give lectures on campus and enterprise professionals hired to teach students on site, helping students complete the role transition from student to employee as soon as possible.
References 1. Lv, J.Y.: Research on the teaching of career planning course for international students in China under the learning paradigm. Educ. Teach. Forum 12(44), 306–307 (2020) 2. Wang, J.: Study on the issues and Countermeasures of career planning education in Colleges and Universities: taking Guizhou Institute of Technology as an example. Guide Sci. Educ. 11(8), 165–166 (2020) 3. Chen, L.R., He, F.F., Wu, Z.L.: Analysis and countermeasures of the problems in college students’ career planning: taking Hubei Institute of technology as an example. Young Soc. 68(24), 267–268 (2019) 4. Shi, L.Q.: An analysis on the management countermeasures of college students’ career planning under the new situation. Bus. News 38(29), 191–192 (2020) 5. He, S.F.: The analysis of the present situation of college students’ career planning teaching and its reform countermeasures. J. Hubei Open Vocat. Coll. 35(15), 56–57 (2020) 6. Li, J.F.: Analysis on the current situation and countermeasures of contemporary college students’ career planning. J. Xinyang Agric. Forestry Univ. 30(1), 135–137 (2020)
Analysis of Big Data Survey Results and Research on System Construction of Computer Specialty System
Xueyan Wang(B)
Liaoning Institute of Science and Engineering, Jinzhou, Liaoning, China
Abstract. The comprehensive quality training system for college students is of great significance for cultivating high-quality talent for the times and for social development, and for achieving the "Two Centenary" goals. To improve the training quality of the computer major, its comprehensive quality training system is investigated and studied. Based on the survey findings, the problems in the comprehensive quality of college students are analysed and, following the basic principles for constructing a comprehensive quality training system, a training system for the computer major is built covering scientific literacy, professional competence, humanities and art, physical and mental health, and practice and innovation. The results serve the teaching reform of the computer major and help cultivate applied talent with the ability to solve practical problems, innovate, develop sustainably and display good professional quality.
Keywords: Computer major · College students · Comprehensive quality · Training system · Investigation and research
1 Introduction
Comprehensive quality refers to the multi-faceted personal qualities an individual develops under the influence of education and environment. The core of comprehensive quality education is to cultivate socialist successors who have "ideals, morality, education and discipline", and to comprehensively foster in students noble moral sentiments, rich scientific and cultural knowledge, good physical and psychological health, strong practical and hands-on ability, and a healthy personality, so that they learn how to behave, work, exercise and appreciate beauty and develop in a comprehensive, coordinated way in morality, intelligence, physique, aesthetics and labour [1]. Quality education focuses on the comprehensive quality of each student, including their world outlook, outlook on life and values, and emphasizes that students need a wealth of knowledge and cultural literacy, solid professional abilities, a healthy body and good psychological quality.
The computer major aims to train graduates who have good morals and literacy, abide by laws and regulations, master the basic knowledge of mathematics and the natural sciences as well as the basic theories, knowledge, skills and methods of computing systems, and possess scientific thinking ability, including the computational thinking needed to design computing solutions and implement systems based on computing principles. Graduates should be able to express themselves clearly, play an effective role in a team, possess good overall quality, expand their abilities through continuing education or other forms of lifelong learning, and understand and follow the professional development of the discipline, becoming high-quality, employable specialized technical talent in the research, development, deployment and application of computing systems [2]. Social surveys show that China lacks engineering-oriented application talent capable of designing, developing and popularizing computer systems in the information technology field; society needs applied engineering talent with comprehensive abilities covering professional knowledge, professional practical ability and social ability [3]. In terms of professional knowledge, graduates should master the basic knowledge, methods and technologies of computer science and be able to build and apply computer systems; in terms of professional practical ability, they should use professional theory and methods to solve practical problems, apply knowledge and methods comprehensively, and aim at technological innovation in engineering practice; in terms of social ability, they should have language expression ability, capacity for personal development, team spirit and independent learning ability.
2 Existing Problems in the Comprehensive Quality of College Students
The survey found that the mainstream of college students' overall quality is currently positive, manifested in the following aspects [4]. First, their sense of social identity has increased. China's economic construction has achieved world-renowned accomplishments, with political stability, economic development and improvement in people's lives; these achievements have aroused college students' enthusiasm for loving the Party, the country and socialism, and they firmly support the Party's line, principles and policies and are confident in the socialist road. Second, the pursuit of progress has become the mainstream among college students; their overall ideology and morality are stable and healthy, which raises their enthusiasm and initiative in learning. Third, their sense of social responsibility has increased: most college students actively participate in social practice activities such as volunteering, social surveys, helping students in difficulty and voluntary blood donation. Fourth, their awareness of becoming talented professionals has gradually grown; they are positive and enterprising, and their consciousness of learning is further enhanced. However, with the development of the market economy, practical problems and tests have multiplied, and some problems have also appeared in college students' comprehensive quality, prominently in the following aspects.
2.1 Imbalance in the Development of Ideological and Moral Qualities In the period of social transformation, information is rapidly expanding, multi-cultural fusion and collision, and values tend to be diversified, resulting in diversification and uncertainty of social behavior. Contemporary college students who are in the period when their outlook on life, world outlook and values are formed with quick and active thinking. Affected by social tendencies such as money worship, hedonism, and extreme individualism, they have become confused and lost in their ideological beliefs and their moral concepts have changed. Some college students have serious individualistic thoughts, self-centered, lack of social responsibility and historical mission, lack of the master spirit of dedicating youth to national progress and national prosperity, and there is a phenomenon of “knowledge” but not “action” in personal morality. 2.2 Mental Health Problems Occur Frequently The mental health of college students has become the focus of social attention. As more and more college students suspend school due to psychological problems, drop out of school, and even commit suicide or homicide, they can be divided into two categories: one is general growth psychological problems, the tendency of psychological obstacles but not serious, this is the main psychological problem of college students; the other is the presence of different degrees of psychological obstacles, mainly including: environmental changes and psychological adaptation problems, psychological problems caused by improper learning psychological adjustment, emotional control, self-awareness, personality development, and volitional qualities are relatively weak, causing psychological and behavioral deviations in interpersonal communication, love, and sex. 2.3 Obviously Insufficient Scientific Literacy Cultivation Scientific literacy is a compound concept that integrates scientific knowledge, scientific methods, scientific attitudes, and scientific values. The scientific literacy that college students should possess includes a scientific attitude towards things, a reasonable basis of scientific knowledge, and the understanding and mastery of scientific methods to form the ability to use science and technology, especially creative ability. The current lack of scientific literacy cultivation of college students is manifested as follows [5]: superficial understanding of scientific and technological knowledge, lack of understanding and mastery of scientific methods, lack of the spirit and value vision of scientific experiments; lack of scientific spirit, deviation of scientific value orientation; scientific skills are lack of places and ways for training and exercise, and even less funding for scientific research activities. 2.4 Lack of Innovative Spirit and Practical Ability Training The reform of the education system lags behind the needs of social development, and the lack of evaluation of college students’ innovative spirit and practical ability training has resulted in the failure to effectively reflect the innovative potential of college students; open education stays at the exploratory stage, and the student’s dominant status
is not reflected; the curriculum of innovative spirit and practical ability is unreasonable, the teaching content is outdated, the basic courses are few, the elective and compulsory courses are out of proportion, and there are few interdisciplinary and cross-professional courses; the innovative spirit and practical ability training model is single, the teaching method is backward, and there is a transitional phenomenon in practical teaching; teachers lack innovative consciousness, lack dual-qualified teachers and teachers’ practical ability improvement path [6].
3 Basic Principles on Construction of Comprehensive Quality Training System for College Students
Through in-depth investigation and research, long-term teaching practice, and reference to the relevant literature [7], the basic principles for constructing a comprehensive quality training system for college students are summarized as follows. First, the combination of science education and humanistic education, which broadens students' horizons, enriches their knowledge system, and promotes the improvement of comprehensive scientific quality. Second, the combination of talent cultivation and personality cultivation, which pays attention to students' health in mind and body and realizes the connotative requirements of talent cultivation. Third, the combination of the first classroom and the second classroom, in which the second classroom extends and supplements the first, realizing the leap from theoretical quality to practical quality. Fourth, the combination of quality development and ability training, which opens up channels for quality-expansion education, improves the ability structure, enhances the ability to apply professional knowledge, and realizes an overall improvement in students' ability and quality.
4 Comprehensive Quality Training System for College Students of Computer Major
The survey found that the comprehensive quality training system for the computer major mainly consists of five aspects, as shown in Fig. 1.

Fig. 1. Comprehensive quality training system for college students of computer major (five subsystems: scientific literacy system, professional competence system, humanities and art system, physical and mental health system, practice and innovation system)
4.1 Scientific Literacy System
Scientific literacy is a compound concept that integrates scientific knowledge, scientific methods, scientific attitudes, and scientific values. The scientific literacy that college students should possess mainly includes three aspects: first, a scientific attitude toward things; second, understanding and mastering scientific methods, and forming the ability to use science and technology, especially creative ability; third, a reasonable base of scientific knowledge. Scientific literacy is an important part of the comprehensive abilities that college students must have. The level of scientific literacy of college students directly affects the development level and prospects of the future society, is related to the great rejuvenation of the Chinese nation, and carries the hope of the country and the nation. Facing the new era, college students must not only possess profound professional knowledge of their subjects, but also have the scientific methods for applying and updating knowledge, as well as a pioneering scientific spirit and creative ability. Improving the scientific literacy of college students is a complex systematic project. In addition to creating a positive environment and public opinion in the whole society, colleges and universities should assume the main function and responsibility of cultivating it [8]. The curriculum system for cultivating the scientific literacy of the computer major is shown in Fig. 2.

Fig. 2. Curriculum system of scientific literacy (courses: higher mathematics, linear algebra, discrete mathematics, college physics, probability theory and mathematical statistics, information retrieval, digital circuit and logic design, scientific paper writing)
4.2 Professional Competence System The professional abilities of computer major are embodied in four aspects: first, engineering knowledge, with the mathematics, natural sciences, engineering foundation and professional knowledge required for computer major, and comprehensively use the knowledge learned to solve complex engineering problems in the field of computer technology. Second, problem analysis, comprehensively use basic principles and methods of mathematics, natural sciences, and engineering sciences to identify, express, and analyze complex engineering problems in the field of computer technology through literature research to obtain effective conclusions. Third, design/develop solutions, comprehensively use theory and technical means to propose solutions to complex engineering problems in the field of computer technology, design systems, modules or development
processes that meet specific needs, reflecting a sense of innovation in the design and development process while comprehensively considering social, health, safety, legal, cultural and environmental factors. Fourth, research: based on scientific principles and using scientific methods, study complex engineering problems in the field of computer technology, formulate technical routes, design experimental plans, analyze and interpret data, and obtain reasonable and effective conclusions through information synthesis. The curriculum system for cultivating the professional ability of the computer major is shown in Fig. 3.

Fig. 3. Curriculum system of professional competence (courses: introduction to computer systems, big data technology and application, computer system structure, fundamentals of programming, object-oriented programming, data structure and algorithm, principles of computer composition, introduction to artificial intelligence, software engineering, operating system, computer graphics, database system, network information security, software project management, Java development foundation, computer network, Python programming, mobile software development, Web application development, J2EE development technology)
4.3 Humanities and Art System
The comprehensive promotion of humanities and artistic literacy education requires facing modernization, facing the world, and facing the future; this vision has already pointed out the direction for the development of higher education. Humanities and artistic literacy help college students to be subtly influenced by culture as they learn scientific and cultural knowledge and, through participation in cultural activities and social practice, to absorb the humanistic spirit, enhance their personality, raise their ideological realm, and stimulate patriotic feelings, so that they become "Four Haves" newcomers useful to the cause of socialism. Humanities and artistic accomplishments are also conducive to broadening horizons, activating thinking, and inspiring reform and innovation. The deficiencies of humanities and artistic literacy have become
obstacles restricting the development of college students. Socialist construction requires not only professional talents, but also talents with comprehensive development and high overall quality. Humanities and artistic literacy are the stepping stones by which universities form comprehensively developed talents, and they are indispensable for individuals who want to realize their self-worth and become comprehensively developed talents. The curriculum system for cultivating the humanities and artistic literacy of computer majors is shown in Fig. 4.

Fig. 4. Curriculum system of humanities and art system (courses: Chinese revolutionary history, college English, basic principles of Marxism, music appreciation, Mao Zedong thought and the system of theories of socialism with Chinese characteristics, art appreciation, appreciation of classical poetry, college Chinese, ancient Chinese literature)
4.4 Physical and Mental Health System
Physical and mental health includes two aspects: physical health and mental health. Undergraduates are in a period of physical growth and development. Physical exercise promotes the growth of bones and muscles, enhances cardiopulmonary function, strengthens the blood circulatory, respiratory and digestive systems, improves immunity, effectively regulates the body's functions, and improves the body's ability to adapt to the environment [9]. Good psychological quality is the basic condition for college students' mental health and directly affects its level. The main purpose of colleges and universities offering mental health education courses is to comprehensively improve students' psychological quality and promote healthy growth [10]. Through years of exploration and accumulated practice, the construction of the mental health education curriculum system has achieved remarkable results, and a unified understanding of teaching objectives, concepts and methods has gradually formed. The curriculum system for cultivating the physical and mental health of the computer major is shown in Fig. 5.

Fig. 5. Curriculum system of mental health system (courses: entrance education, college sports, ideological and moral cultivation, safety education for college students, legal basis, situation and policy, mental health of college students, military theory and skill training)

4.5 Practice and Innovation System
The practice and innovation system is based on practical teaching and also includes innovation and entrepreneurship education. Practical teaching is an effective way to consolidate theoretical knowledge and deepen theoretical understanding, and an important link in cultivating high-quality applied talents with innovative consciousness. Practical teaching is especially important for the computer major. Through the integration and optimization of the practical links, a modular curriculum practice teaching system is formed that is moderately difficult, rich in content, highly operable, integrates knowledge across courses, is close to practical work, and is expandable. It is carried out in stages within the practice links of multiple courses, effectively resolving the contradiction between limited class hours and teaching content. Innovation and entrepreneurship education aims to cultivate talents with basic entrepreneurial qualities and creative personalities. It is not only an education focused on cultivating students' entrepreneurial awareness, innovative spirit, and innovative entrepreneurial abilities, but also an education oriented to the whole society: entrepreneurial groups who plan to start a business, have already started one, or have successfully started one receive, in stages and at different levels, the cultivation of innovative thinking and entrepreneurial ability. The practice and innovation curriculum system for the computer major is shown in Fig. 6.

Fig. 6. Curriculum system of practice and innovation system (courses: college physics experiment, database system training, software development training, software engineering training, computer network training, mobile development training, Web development training, enterprise practice, computer composition principle training, network information security training, career planning for college students, innovation and entrepreneurship education, entrepreneurship foundation for college students, employment guidance for college students, production practice, graduation project)

Acknowledgment. This work is supported by the Scientific Research Funding Project of the Educational Department of Liaoning Province in 2019: Investigation and research on comprehensive quality training system for college students of computer major.
References 1. Fan, W.H.: Problems and countermeasures of improving the comprehensive quality of college students. Ind. Sci. Tribune 9(2), 173–175 (2010) 2. School of Computer and Information Engineering, Jiangxi Agricultural University. Training program for computer science and technology professionals. http://jixin.jxau.edu.cn/0c/6e/ c1862a68718/page.htm. Accessed 15 Dec 2020 3. Xu, Z.H., Gu, J.H., Dong, Y.F., et al.: Research on the construction of comprehensive quality training system for engineering applied computer professionals. Comput. Educ. 11(17), 95–99 (2013) 4. Li, Z.M.: Problems and countermeasures in the cultivation of college students’ comprehensive quality. J. Shandong Univ. Technol. (Soc. Sci. Ed.) 22(4), 102–104 (2006) 5. Zhang, C.C., Liu, Y.L., Liu, C.X.: Analysis on the scientific literacy of current college students and its problems. Educ. Teach. Forum 4(12), 23–24 (2012) 6. Zhang, L.L.: Exploration on the cultivation mode of college students’ innovative spirit and practical ability. China Train. 24(18), 40–41 (2015) 7. Cui, M.C., Dong, Y.Z., Liu, W.: Computer majors student. Contemp. Educ. Res. Teach. Pract. 6(12), 173–174 (2018) 8. Ge, C.X.: How to improve the scientific literacy of college students in the new era: taking Zhengzhou University as an example. Chin. Univ. Sci. Technol. 33(8), 58–60 (2019) 9. Xia, C.H.: Analysis on the importance and approaches of college students’ physical fitness. China Sports Daily, 15 January 2018 10. Chen, L.J.: Practice and reflection on the construction of curriculum system of mental health education in higher vocational colleges. J. Heilongjiang Inst. Teach. Dev. 39(8), 55–57 (2020)
Construction of Online Reading Corpus Based on SQL Server Database Management System Qiong Wu(B) Liaoning Institute of Science and Engineering, Jinzhou, Liaoning, China
Abstract. New media technology refers to the fact that new media based on Internet technology has inherent technical advantages and information service functions as media. It is the best choice for the network economy to connect with the media industry. With the advent of the new media era on the Internet and changes in media communication methods, new media can be derived from smart devices to obtain information and content, and it can also be distributed on smart devices. Users correspond to different media portals in different usage scenarios. This paper analyzes the advantages of college English online reading, and builds a cloud storage structure model consisting of a data storage layer, a basic management layer, an application interface layer and a user access layer; based on the SQL Server database management system, the data storage structure is designed; practice application strategies are proposed. Keywords: Era of Internet new media · Online reading resource library · Construction and application · Cloud storage
1 Introduction Reading plays an irreplaceable important role in English learning, and the amount of reading largely determines the level of English. English reading ability is the basis and precondition for the development of other language skills, and reading has a positive role in promoting English learning: English reading is an important means to increase interest in English learning and increase knowledge. With the continuous improvement of reading ability, language knowledge continues to increase, and further motivate interest in English learning. Through extensive English reading, students gain knowledge, increase their knowledge, and broaden their horizons. Reading can cultivate and enhance students’ sense of language. The sense of language is naturally formed in the long-term language practice. Reading can effectively improve the level of English writing. Writing is the emphasis and difficulty of English teaching. To write good articles in English, besides solid language foundation, a lot of reading is also indispensable. Traditional English reading classroom teaching has certain limitations. Insufficient teaching time, monotonous teaching by teachers, incomplete textbook content, insufficient reading resources, passive acceptance by students, and unsuitability for aptitude teaching have long plagued English reading. Reasonable use of information technology © The Author(s), under exclusive license to Springer Nature Singapore Pte Ltd. 2022 B. J. Jansen et al. (Eds.): International Conference on Cognitive based Information Processing and Applications (CIPA 2021), LNDECT 84, pp. 43–50, 2022. https://doi.org/10.1007/978-981-16-5857-0_6
and online resources can not only serve as a supplement to school English education, broaden students’ English learning channels, but also promote changes in learning methods. With the help of online English reading resources, teaching circumstances are presented according to the content of the teaching. Through information such as sound and images, students’ hearing and vision are stimulated, and students’ enthusiasm for thinking is stimulated, so that students can perceive quickly and remember firmly, thereby mobilizing students’ observation and imagination strength, deepen the impression of learning, and improve learning efficiency.
2 Advantages of College English Online Reading With abundant online reading resources, teachers can carry out selective teaching according to the overall learning characteristics of students, and students can choose resources at will to carry out extracurricular independent learning. Compared with traditional paper-based reading materials, college English online reading has obvious advantages, which are mainly reflected in the following aspects: (1) The reading content is rich, contemporary and selective. Although textbooks are revised every once in a while, the novelty of content always lags behind magazines and newspapers. For magazines and newspapers, teachers can only use limited options, because the content may not be suitable for English teaching. Online resources are highly time-sensitive and selective. Students can choose different reading materials according to their own foundation or interest. Teachers can also clarify the learning tasks, designate students to visit specific English learning websites, gradually increase the difficulty of reading materials and expand the vocabulary, and help students expand their knowledge and accumulate culture. (2) Provide the authentic and contextual reading atmosphere [1]. Fast search engine facilitates information retrieval. As long as you enter a keyword, you can quickly find the information you need. Many English websites provide original news reports, economic reports or humorous stories. In a real reading environment, students and people in English-speaking countries enjoy the same reading resources, share the same social environment and cultural background, and truly realize the authenticity and contextualization of the English reading atmosphere, and truly integrate into the British and American culture. (3) Transform from “full-teaching” to “interest-based” teaching mode. Online teaching content is no longer limited to textbooks, but introduces more rich and substantial knowledge through the Internet. Teachers use students’ interest as a breakthrough point, collect rich English knowledge through the Internet, and use appropriate circumstances or problems to attract students’ attention. Strengthen students’ enthusiasm, broaden their horizons, and form an active English reading atmosphere. At the same time, under the network environment, students can communicate and interact on a certain reading topic. (4) Realize teaching students in accordance with their aptitude and cultivate the ability of judgment and reasoning [2]. Students have differences in learning interests, learning styles, learning strategies and learning starting points. From the perspective of teaching students in accordance with their aptitude, the rich and diverse
reading resources in the reading resource library enable students to determine the starting point and goal of reading according to their actual needs, and choose the reading content according to their personal interests, comprehension ability and learning progress, and truly student-centered, cultivate problem-analyzing and problem-solving skills, multi-perspective and multi-dimensional thinking, and strengthen students’ judgment and reasoning in language use and cross-cultural communication skills.
3 Cloud Storage Structure Model of College English Online Reading Resource
Users can use any internet-connected device at any time or anywhere to connect to the cloud to conveniently access data. The cloud storage structure model of college English online reading resources is shown in Fig. 1 [3–5].

Fig. 1. Cloud storage structure model of college English online reading resource (four layers: a user access layer offering data storage, space rental, public resource, multi-user data sharing and data backup services; an application interface layer covering network access, user authentication, rights management and service level agreements, exposed through public API interfaces, application software and Web Services; a basic management layer built on cluster systems, distributed file systems and network computing, with CDN, P2P, data deduplication, data compression, data encryption, data backup and data disaster recovery; and a data storage layer providing storage virtualization, centralized storage management, status monitoring and maintenance upgrades over storage devices such as NAS, FC and iSCSI)
The data storage layer is the basic layer of the cloud storage system, which is composed of FC fiber channel storage devices, NAS storage devices or DAS storage devices. Considering data redundancy and energy consumption, storage clusters are usually distributed in different regions. It can manage the storage devices in the system, make them work together, provide services to the outside world in a unified manner, and improve data
access performance. Cloud storage service providers can offer different access methods according to different service types and users to ensure data security and service quality.
4 Storage Data Structure of College English Online Reading Resource
In the software system, the data is usually stored in the database. This article designs the data structure based on the SQL Server database management system with reference to the related literature [6]. The design results are shown in Table 1.

Table 1. Storage data structure of college English online reading resource

No | Field description      | Field name | Type     | Width
0  | Main key               | ZGJZ       | Bigint   | 8
1  | Resource code          | ZYBM       | Char     | 10
2  | Resource name          | ZYMC       | Varchar  | 100
3  | Type code              | LXBM       | Char     | 2
4  | Type name              | LXMC       | Varchar  | 20
5  | Source code            | LYBM       | Char     | 2
6  | Source name            | LYMC       | Varchar  | 20
7  | Applicable code        | SYBM       | Char     | 2
8  | Applicable name        | SYMC       | Varchar  | 20
9  | Format code            | GSBM       | Char     | 2
10 | Format name            | GSMC       | Varchar  | 20
11 | Producer code          | ZZZBM      | Char     | 4
12 | Producer name          | ZZZMC      | Varchar  | 50
13 | Uploader code          | SCZBM      | Char     | 4
14 | Uploader name          | SCZMC      | Varchar  | 50
15 | Resource size          | ZYDX       | Decimal  | 6,2
16 | Number of attachments  | FJSL       | Smallint | 2
17 | Number of views        | LXCS       | Int      | 4
18 | Number of downloads    | XZCS       | Int      | 4
19 | Storage address        | CCDZ       | Varchar  | 1000
20 | Upload date            | SCRQ       | Datetime | 8
21 | Upload time            | SCSJ       | Datetime | 8
22 | Upload IP              | SCIP       | Char     | 15
23 | Evaluation grade       | PJDJ       | Tinyint  | 1
24 | Evaluation text        | PJWB       | Varchar  | 1000
25 | Use environment        | SYHJ       | Varchar  | 1000
26 | Operating guide        | SYSM       | Text     | 16
27 | Resource introduction  | ZYJJ       | Text     | 16
The SQL Server database has many advantages: it is a database product with full Web support, providing core support for Extensible Markup Language (XML) and the ability to query across the Internet and beyond the firewall [7].
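As a concrete illustration, the following is a minimal sketch (not part of the original design) of how a subset of the fields in Table 1 could be created as a SQL Server table from Java via JDBC. The connection string, credentials, table name and the use of an IDENTITY column for the main key are illustrative assumptions; column names and types follow Table 1, and the Microsoft JDBC driver is assumed to be on the classpath.

import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.Statement;

public class CreateReadingResourceTable {
    public static void main(String[] args) throws Exception {
        // Connection string and credentials are placeholders, not values from the paper.
        String url = "jdbc:sqlserver://localhost:1433;databaseName=ReadingCorpus";
        try (Connection conn = DriverManager.getConnection(url, "corpus_user", "secret");
             Statement stmt = conn.createStatement()) {
            // DDL for a subset of the fields listed in Table 1.
            stmt.execute(
                "CREATE TABLE ReadingResource ("
                + " ZGJZ BIGINT IDENTITY(1,1) PRIMARY KEY," // main key (auto-increment assumed)
                + " ZYBM CHAR(10) NOT NULL,"                // resource code
                + " ZYMC VARCHAR(100) NOT NULL,"            // resource name
                + " LXBM CHAR(2), LXMC VARCHAR(20),"        // type code / type name
                + " LYBM CHAR(2), LYMC VARCHAR(20),"        // source code / source name
                + " ZYDX DECIMAL(6,2),"                     // resource size
                + " LXCS INT DEFAULT 0,"                    // number of views
                + " XZCS INT DEFAULT 0,"                    // number of downloads
                + " CCDZ VARCHAR(1000),"                    // storage address
                + " SCRQ DATETIME,"                         // upload date
                + " SCIP CHAR(15))");                       // upload IP
        }
    }
}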
5 Practical Application of College English Online Reading Resource Library The practical application of the college English online reading resource library requires comprehensive consideration of various factors and formulate practical application strategies. The strategies proposed in this article are as follows: (1) Formulate the principles for the construction of English online reading resource library and implement them carefully. The construction of an English online reading resource library is a complex systematic project, which needs to be guided by specific principles: practicality and authenticity principles to meet the needs of the teaching situation, and real materials are easy to motivate students’ interest and resonance in learning. The principles of discipline and individualization, highlighting the characteristics of English disciplines, emphasizing the individual needs of students, and providing diversified resources in the face of different objects and different skills. The principle of standardization and extensibility, standardization affects the quality of construction and service quality, helps the continuous expansion of the resource library, and realizes data exchange and resource sharing. The principle of ease of use and interactivity meets the needs of students for “suitable use, timely use, and appropriate use”, embodying the essence of digitalization, highlighting innovation and interactivity, and enabling students to achieve autonomy or three-dimensionality through human-computer interaction or interpersonal interaction learning. (2) Take effective measures to eliminate the negative effects of online English reading. A large number of original materials and abundant reading resources online provide an effective way for students to acquire knowledge. However, the students’ thinking is not mature enough, the content of online reading is mixed, there are false information and even unhealthy content, which may mislead students and even affect their physical and mental development. More seriously, they may lead
students astray. In addition, prolonged use of mobile phones and computer screens to read can easily cause visual fatigue, affecting reading speed and understanding of content. The development and utilization of online resources and the optimized design of the English reading process can not only expand students’ vocabulary and knowledge, cultivate correct reading skills and the ability to grasp the text, but also improve learning initiative and effectively improve reading ability and comprehensive ability to use language. (3) Create effective practical activities and increase the development of English online reading resources. For the development of college English online reading resources, the following measures can be taken: schools and teachers can start by integrating English newspapers and radio and television programs, starting from students’ interests and hobbies, using libraries, language laboratories and audio equipment to organize students to watch or listen to English programs, develop English reading in a unique form, achieve the purpose of developing English online reading resources, guide students to understand British and American culture, and promote English learning. In the process of developing English online reading resources, teachers can fully mobilize the enthusiasm of students, design theme activities in connection with students’ daily campus life, shorten the distance between students and reading activities, guide more students to devote themselves to it, and promote students language ability development. (4) Strengthen teacher information technology training and encourage teachers to participate in the construction of online English reading resources. Using modern educational technology to carry out teaching practice activities, hardware is the foundation, and teachers’ mastery of modern educational technology is the key. In view of the current situation that English teachers are generally weak in information technology capabilities, schools need to carry out various forms and levels of training activities to improve information technology capabilities with computer applications and network foundations as the core. Only with the extensive participation of teachers can rapid progress be achieved in resource construction and the quality and abundance of reading resources can be guaranteed. The school formulates corresponding policies to link the construction of English reading resources to teachers’ performance, bonuses, selection of excellence, and teaching achievement awards. Provide a certain amount of funding for the teacher’s personal reading resource library or teaching website, and give certain guidance in the construction and maintenance. (5) Continuously optimize online reading resources for college English. In accordance with the requirements of knowledge, interest, hierarchy, gradualness, professionalism, and versatility, we will continue to advance from “general requirements” to “higher requirements” to “highest requirements” to optimize English online reading resources. Make it directly related to English language teaching. In order to give full play to the role of English online reading resources, classification is made according to subject content, text type and keywords, so that students can read selectively according to their own needs. 
Choose materials that are easy to read and understand, comprehensively consider factors such as average sentence length, number of new words, and grammatical complexity, so that it meets the language level, knowledge background, academic level and psychological state of college
students, inspires college students’ reading motivation and explores deeper levels. Reading has achieved a leap in knowledge acquisition, language skills development and humanistic literacy improvement. (6) Improve the technological advancement in the construction of English online resource library. The rapid development of information technology provides strong support for storage, distribution, organization and display based on Internet technology. The online teaching resource library has been developed into a “large-scale online open course”, which provides teaching explanations based on videos and rich texts, which can show the complete teaching process. The construction of English online reading resource database is not only a set of website system based on B/S structure, but also the development direction of diversification of user terminal equipment and richness of application software form [8]. Support diversified terminal computing devices, including PCs, iPads, mobile phones and laptops, making learning more convenient. It is cross-platform and can be used as long as the browser is installed. For the mobile terminal environment, APP is a better choice, which can easily call various hardware devices of the mobile phone and can cache related resources such as videos. (7) English online reading resources are integrated into the curriculum ideological and political content. Curriculum ideology and politics is a brand-new comprehensive education concept, which integrates ideological value guidance into knowledge transfer and ability training, and fundamentally solves the problem of the disconnection between professional education and ideological and political education. Promoting ideological and political courses and curriculum ideological and political coordination, and integrating ideological and political work through the whole process of education and teaching is a strategic measure to implement the fundamental task of cultivating morality [9]. The generality and cultural nature of college English courses determine the feasibility of developing curriculum ideology. Reading teaching is an important link for students to expand humanistic knowledge, and they must shoulder the mission of curriculum ideology. Select online reading resources, cultivate students’ humanistic feelings and ideological and moral sentiments, adopt online and offline methods to attract students to actively learn language knowledge and understand Chinese culture, strengthen the ideological and political mission of English courses, and comprehensively improve the richness and freshness of mainstream culture.
References 1. Zhang, Y.: Making full use of advantages to improve higher vocational students’ English reading abilities. J. Zhangjiakou Vocat. Tech. Coll. 24(2), 76–78 (2011) 2. Wang, Y.L.: The construction of college English reading teaching resource bank. J. Xinjiang RTVU 14(4), 61–64 (2010) 3. Lei, Y.T.: Architecture and key technologies of cloud storage system. China Public Secur. 15(5), 83–87 (2016) 4. Peng, H.Q.: Analysis of cloud storage model and architecture. Digit. Technol. Appl. 33(4), 76–77 (2015)
5. Dong, Q.X., Mu, D.S., Tang, Q.A.: Research on storage architecture of Digital Campus Based on cloud storage. China Manag. Inf. 18(9), 87–88 (2015) 6. Zhao, M.: Computer majors student. Inf. Technol. 39(6), 83–86 (2015) 7. After class learning network. Advantages and disadvantages of SQL Server. https://xuexi.zqnf. com/252091.htm. Accessed 17 Feb 2021 8. Ma, L.: Research on the support technology of the online teaching resource library. Jiangsu Sci. Technol. Inf. 35(23), 67–70 (2018) 9. Xia, W.H., He, F.: On the mission of College English ‘Ideological and political education.’ People’s Tribune 28(30), 108–109 (2019)
The Design and Application of College Japanese Reading Teaching System Based on Android Fangting Liu1(B) and Shuang Wang2 1 College of Foreign Languages, Bohai University, Jinzhou, Liaoning, China 2 School of Physical Education, Bohai University, Jinzhou, Liaoning, China
Abstract. The Android platform is a free and open-source operating system based on the Linux kernel, mainly used in mobile devices such as smart phones and tablets. Android's system architecture adopts a layered design; from high level to low level, the layers are the application layer, application framework layer, system runtime layer, and Linux kernel layer. This paper analyzes the system architecture of the Android platform, designs the technical framework of an Android-based college Japanese reading teaching system composed of an Android client, a server side and a Web management side, and proposes the teaching links of Android-based college Japanese reading, so as to comprehensively promote the reform of Japanese reading teaching in colleges in the mobile information era. Keywords: Android · System design · College Japanese · Reading teaching · Technical framework
1 Introduction
With the rapid development of international trade, enthusiasm for language learning continues to spread. In addition to English, college students have also developed a strong interest in foreign languages such as Japanese, Russian, Korean and Arabic, and have begun to learn them and master the relevant cultural knowledge [1]. Japan and China face each other across the sea; Japan's economic development is rapid and its language system is relatively complete. Strengthening the understanding and learning of Japanese leads to a deeper understanding of Japanese cultural development. In particular, effective Japanese reading and learning can help students understand the similarities and differences between Chinese and Japanese cultures, continuously expand their vision and improve their comprehensive literacy. At present the teaching hours of Japanese reading are relatively few, and outdated textbook content not only fails to keep up with the times but also dampens students' enthusiasm for learning Japanese, so the standards of talent training are difficult to adapt to social needs. Mobile reading has become an effective solution to these problems. Android is a Linux-based operating system for mobile devices, mainly used in smart phones and tablets. Its low cost and diversity give it the highest market share, and it is very popular among students. Its open source © The Author(s), under exclusive license to Springer Nature Singapore Pte Ltd. 2022 B. J. Jansen et al. (Eds.): International Conference on Cognitive based Information Processing and Applications (CIPA 2021), LNDECT 84, pp. 51–59, 2022. https://doi.org/10.1007/978-981-16-5857-0_7
feature reduces the requirements for hardware devices, improves hardware customization and flexibility, and creates excellent conditions for the application of Android in the education field. This article is based on Android research to improve the quality of Japanese reading teaching in the mobile information era, and cultivate more Japanese application-oriented talents.
2 Analysis of Android Platform Architecture The Android platform architecture is shown in Fig. 1. (1) Applications. It is mainly designed around the four major components of various programs. Activity, display an interface to the user, receive information entered by the user for interaction. Service, perform a series of computing tasks in the background. BroadcastReceiver, pass messages between different components or different applications. ContentProvider, share data with other components and other applications. (2) Application framework. It provides the four major components used by the application layer, various interface controls and various events corresponding to them, and provides a programming foundation for the application layer. The usual development of applications is to call the API provided by this layer. Activity Manager, manage the application life cycle and provide common navigation rollback functions. Notification Manager, the application displays customized prompt information in the status bar. (3) Systems run time libraries. It provides most of the functions in the core libraries of the Java programming language. Dalvik is designed as multiple virtual machines that enable the device to run efficiently. The Dalvik virtual machine executes the Dalvik executable file format, which is optimized for minimum memory usage. The virtual machine is register-based and runs the classes compiled by the Java programming language, which are converted to the dex format by the built-in dx tool. (4) Linux kernel. As an abstraction layer between hardware and software, it hides specific hardware details and provides unified services for the upper layer. Provide display driver, camera driver, WiFi driver and other hardware drivers and Linux system kernel. The kernel as an abstraction layer exists between hardware and software, powerful memory management and process management, permissionbased security mode, support for shared libraries, and certified drive mode.
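To make the component model described above concrete, the following is a minimal sketch in Java (not taken from the paper's system) of two of the four components: an Activity that displays an interface to the user, and a BroadcastReceiver that receives messages passed between components. The class name, displayed text and broadcast action are illustrative assumptions.

public class ReadingActivity extends android.app.Activity {

    // BroadcastReceiver component: passes messages between components or applications.
    private final android.content.BroadcastReceiver receiver =
            new android.content.BroadcastReceiver() {
                @Override
                public void onReceive(android.content.Context context,
                                      android.content.Intent intent) {
                    android.util.Log.d("ReadingActivity", "received: " + intent.getAction());
                }
            };

    // Activity component: displays an interface and receives user input.
    @Override
    protected void onCreate(android.os.Bundle savedInstanceState) {
        super.onCreate(savedInstanceState);
        android.widget.TextView view = new android.widget.TextView(this);
        view.setText("Reading material goes here");  // placeholder text
        setContentView(view);
    }

    @Override
    protected void onResume() {
        super.onResume();
        // Listen for a custom broadcast action (the action name is assumed for illustration).
        registerReceiver(receiver,
                new android.content.IntentFilter("com.example.READING_UPDATED"));
    }

    @Override
    protected void onPause() {
        super.onPause();
        unregisterReceiver(receiver);
    }
}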
3 Methods of College Japanese Reading Teaching
Teaching methods are important links between teachers' teaching and students' learning, necessary conditions for completing teaching tasks, and an important guarantee for improving teaching quality, and they have an important impact on the physical and mental development of students. The commonly used methods of college Japanese reading teaching are as follows:
Fig. 1. Android platform architecture (Applications: Home, Contacts, Phone, Browser, Windows, Settings, Calendar, Map, Calculator, etc.; Application framework: Activity Manager, Window Manager, Content Provider, View System, Telephony Manager, Resource Manager, Package Manager, Location Manager, XMPP Service, Notification Manager, etc.; System runtime libraries: Surface Manager, Media Framework, OpenGL|ES, FreeType, SGL, SSL, SQLite, WebKit, Libc, plus the Android runtime with the core libraries and the Dalvik virtual machine; Linux kernel: display, camera, flash memory, keyboard, WiFi, audio, USB, Bluetooth and Binder (IPC) drivers, and power management)
(1) Discourse analysis method. Discourse refers to paragraphs or articles that can express complete information, are logically connected, connected in meaning, and have specific communicative functions and meanings. The discourse analysis method mainly guides students to grasp the full text and understand the full text under the overall framework of the full text, focusing on cultivating logical thinking
and reading language skills, and clarify the language meaning of the text based on the cultural background and logical structure of the text. The content is speculated to improve reading level and language thinking ability. In Japanese reading teaching, discourse analysis can break through the traditional teaching mode, focusing on analyzing the sentence structure, grammar and vocabulary of the article, and combining the author’s era and cultural background to accurately grasp the subject and central idea of the article [2]. Use methods such as accurately understanding pronouns, clarifying omissions, understanding the article in context, and reading the full text to improve reading efficiency. (2) Task-based teaching method. The task-based teaching method requires teachers to design operable tasks based on the teaching objectives and specific communication modules and language units in the teaching activities. Students complete tasks through language activities such as expression, communication, negotiation, interpretation, inquiry, and evaluation. The purpose of mastering the language [3]. Task-based teaching method is based on tasks. College Japanese reading teaching should plan tasks at all stages, use tasks to arouse students’ curiosity in reading, clarify their own reading goals, and stimulate learning participation. In the actual task design, it conforms to the students’ cognitive laws and language laws, and realizes the interlocking of the early, mid and late tasks. The early tasks enable students to actively learn the language knowledge needed to complete the tasks, and in the mid-term, they construct an excellent learning situation to complete the reading goals. The late tasks guide students to improve their reading methods based on the evaluation feedback results. (3) Communicative teaching method. The communicative teaching method is the teaching method established by the British applied linguists Christopher Candi and Henry Widdowson. The theory comes from sociolinguistics and psycholinguistics, and is influenced by discourse analysis, language philosophy, anthropology and sociology. The core of communicative pedagogy is to use language to achieve the purpose of communication, more emphasis on specific application, rather than the ultimate goal of grammar rules and word usage. The communicative teaching method believes that language is a communication tool, and the reading materials used are large in capacity and broad in knowledge. It not only enriches students’ social and cultural knowledge, but also allows students to be more exposed to the culture of the target language and enhance their sensitivity to culture. At the same time, it helps students make better use of the reading skills they have learned, and they can also develop other communicative activities that help reading. Communicative teaching methods are diverse in forms and rich in content, which can stimulate interest in learning and mobilize students’ enthusiasm. (4) Experiential teaching method. The experiential teaching method refers to the creation of specific scenes or atmospheres suitable for the teaching content in order to achieve the teaching purpose in the teaching process, so as to arouse students’ emotional experience, help students quickly understand the teaching content, and promote comprehensive mental function. 
Experiential teaching method can cultivate people’s emotions, purify people’s hearts, and provide students with good hints or enlightenment, which is conducive to exercising creative thinking and cultivating adaptability. Experiential teaching method is applied to Japanese reading
teaching. Teachers should set up background knowledge related to reading teaching content for students, and find reading background knowledge points through group discussion or teamwork communication [4]. Strengthen students’ active dialogue and communication, cultivate the habit of active thinking, let students actively participate in reading texts, integrate their accumulated knowledge and emotions, and fully enjoy the fun of reading in Japanese.
4 Design on Technical Framework of College Japanese Reading Teaching System Based on Android
A technical framework is the reusable design of the whole or part of a technical system, expressed as a set of abstract components and the ways in which component instances interact; it can also be regarded as the application skeleton customized by the developer. The technical framework of the college Japanese reading teaching system based on Android is shown in Fig. 2 [5]. (1) Android client. The Android client is a mobile app that provides students with the interface and operating functions for reading Japanese. It is developed in the Java language, uses TCP/IP as the basic network communication protocol, and accesses the Web server over HTTPS URLs. SQLite serves as the lightweight database management system on the Android client: its core engine does not rely on third-party software, all information is contained in one file, and it has excellent cross-platform portability. (2) Server side. The server side uses Apache as the web server container, one of the most popular web server programs; MySQL as the server database system, whose data structures support dynamic applications and give developers flexibility; and JSP as the program that processes client requests and returns database responses, making the applications easy to deploy, maintain and modify. After Julius and the HMM model are integrated, they can be exposed as an API for voice recognition, which better recognizes voice commands and makes human-machine interaction more intelligent. (3) Web management side. It is developed using HTML 5.0, the next-generation standard of the Internet, and communicates with the server over the HTTPS and TCP/IP protocols. HTML + CSS + JS is a classic software development technology stack: HTML, the Hypertext Markup Language, uses markup tags to describe web pages and defines the page structure from a semantic point of view; CSS, cascading style sheets, is responsible for page style from an aesthetic point of view; and JS, JavaScript, describes page behavior from an interactive perspective.
Fig. 2. System technological framework (Android client: user interface layer, business layer and core layer built with Java, HTTPS, TCP/IP, API and SQLite; server side: Apache, Julius+HMM, MySQL, JSP and API; Web management side: HTML+CSS+JS, API, HTTPS, TCP/IP and JSP)
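As an illustration of the client-side storage described above, the following Java sketch (not from the paper; the class, table and column names are assumptions) shows how the Android client could cache downloaded reading materials in its local SQLite database through the standard SQLiteOpenHelper API, so that texts fetched from the server over HTTPS remain available offline.

public class ReadingDbHelper extends android.database.sqlite.SQLiteOpenHelper {

    public ReadingDbHelper(android.content.Context context) {
        // Single local database file, schema version 1.
        super(context, "reading.db", null, 1);
    }

    @Override
    public void onCreate(android.database.sqlite.SQLiteDatabase db) {
        // All cached articles live in one table inside one database file.
        db.execSQL("CREATE TABLE article ("
                + "id INTEGER PRIMARY KEY AUTOINCREMENT, "
                + "title TEXT NOT NULL, "
                + "body TEXT NOT NULL, "
                + "downloaded_at TEXT)");
    }

    @Override
    public void onUpgrade(android.database.sqlite.SQLiteDatabase db,
                          int oldVersion, int newVersion) {
        db.execSQL("DROP TABLE IF EXISTS article");
        onCreate(db);
    }

    // Cache one article that was fetched from the server side.
    public long saveArticle(String title, String body) {
        android.content.ContentValues values = new android.content.ContentValues();
        values.put("title", title);
        values.put("body", body);
        values.put("downloaded_at", String.valueOf(System.currentTimeMillis()));
        return getWritableDatabase().insert("article", null, values);
    }
}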
5 Teaching Links of College Japanese Reading Based on Android In order to give full play to the advantages of mobile reading, the teaching of college Japanese reading based on Android is divided into the following four parts: 5.1 Pre-class Preparation Preview is a good learning habit, which can cultivate independent learning ability, improve independent thinking ability, improve learning efficiency, gain the initiative of classroom learning, and achieve the role of optimizing the overall structure of the classroom. Pre-class preparation reduces students’ dependence on teachers, enhances
independence, and can cultivate various abilities such as reading, comprehension, analysis and synthesis. Through the pre-class preparation, students have a general understanding of the content of the next class. They can easily follow the teacher’s thinking when listening to the class, changing passive listening to active listening, and blind listening to listening with questions, which enhances the listening effect. The knowledge that has been understood in the preview will be explained by the teacher, and the impression will be even more profound. The content that is not understood will naturally become the focus of the lecture. The choice of reading materials should take into account both readability and interest, and the requirements should be appropriate and suitable to match the student’s level. On the basis of the selected content, the teacher formulates a feasible teaching plan for the teaching content, sorts out the key knowledge points, analyzes and sorts out the relevant cultural background materials, puts forward the key points and difficulties of the preview, and makes corresponding measures. Then through the WeChat group and QQ group of the Android mobile phone, the pre-class preparation requirements are pushed, and the data search or reading tasks are arranged to enable students to read purposefully or pertinently [6]. 5.2 Classroom Teaching Traditional classroom teaching includes six links: first, review and ask, to understand students’ mastery of the previous teaching content. The review content should focus on the key points and the questions raised should be targeted. Second, introduce new lessons, use the shortest time to concentrate students’ attention, adjust the learning state to enter the classroom situation, connect new and old knowledge, and arouse students’ thinking. Third, teach new knowledge, teach new knowledge to students in an easy-tounderstand way and method. The new knowledge mainly teaches the key points and difficulties, and the content that students have understood during the preview should not be repeated. Fourth, classroom exercises, not only check the teaching status of teachers, but also understand the status of students’ acceptance, and play a role in strengthening and consolidating knowledge. Fifth, summarize, summarize the main content of this lesson, emphasize the key and difficult knowledge, and let students clarify the content of learning and the content that should be mastered. Sixth, assign homework, centering on the knowledge and methods learned in class, assign homework for students to practice, and achieve the purpose of consolidating what they have learned. Combining the characteristics of teaching materials and the acceptable language level of students, select Android-based Japanese learning APP or related content of the official account to effectively attract students’ attention and guide students to actively participate in the whole teaching process. 5.3 After-Class Extension After-class reading is an extension and supplement of classroom teaching. It plays an important role in broadening students’ horizons, enriching students’ knowledge, improving students’ reading ability, and optimizing Japanese teaching as a whole. Japanese reading teaching can carry out four types of extensions: first, the topic extension of
the same category, presented in different materials, pay attention to the horizontal language connection, improve the understanding of the original materials, consolidate the knowledge structure, and cultivate innovative thinking and comprehensive language use ability. Second, the extension of cultural background, and the rich background knowledge arouses students’ interest in reading topics, and the reading material information is linked with the extended background knowledge to cultivate reading comprehension. Third, the extension of emotional resources, explore the connotation of the story from different angles, expand the emotional factors in the reading materials, comprehend the author’s thoughts and feelings, and improve the aesthetic appeal and humanistic quality. Fourth, the extension of imaginative thoughts [7], not limited to the original topic of the article, using divergent thinking to guide students to boldly speculate on relevant facts and phenomena, expand rich imagination, freely state opinions, and cultivate the ability to use language. With the help of Android mobile terminals, extracurricular reading is easy to achieve, cultivate students’ careful observation of Japanese language, experience cultural differences between China and Japan, and guide students to deal with cultural differences with tolerance and flexibility, which is helpful for the development of cross-cultural communication skills. 5.4 Assessment and Evaluation Assessment is an effective means to test the effect of teaching, and Android-based college Japanese reading teaching is suitable for formative evaluation. Formative evaluation is to use interaction, feedback, improvement and promotion as evaluation activities to grasp the overall situation of students’ reading and guide students to master scientific reading. Formative evaluation emphasizes the subjective participation of students and the richness of evaluation content. First, promote the interaction of evaluation objects. Obtain evaluation feedback information through interaction and create conditions for formative evaluation. Students interact with reading materials to understand the problems encountered by students in reading; students interact to understand the level of differentiated understanding when facing the same reading content; teachers interact with students to understand students’ thinking processes and grasp students’ specific performance. Second, use diversified evaluation methods. Formative evaluation does not have a fixed form, emphasizes dynamic evaluation, and can flexibly use diversified evaluation methods to promote students’ strong interest, find problems and deficiencies in reading, and find shining points. Third, focus on diversified evaluation content. Evaluate students’ language knowledge and skill level. Language knowledge involves vocabulary, sentences and grammar, and language skills involve text analysis, reasoning, prediction and summarization. The teaching goals are divided into different levels, and the overall evaluation is done according to the achievement of students’ goals. The evaluation content should involve reading learning attitude, cooperative inquiry ability and the application of learning methods, etc., and suggestions for improvement should be given based on the evaluation results.
References 1. Ge, Y.M.: On the teaching method of Japanese reading. https://www.fx361.com/page/2018/ 0314/3231496.shtml. Accessed 14 Mar 2018 2. Cheng, Q., Peng, R.X.: The application of discourse analysis in Japanese reading teaching. Academy 13(29), 48–49 (2020) 3. Hou, L.J.: On using of task-based language teaching in Japanese reading in higher vocational education. J. Tianjin Coll. Commer. 12(3), 69–70 (2010) 4. Yu, P.: On the teaching methods of Japanese reading from the perspective of multi culture. Fresh Reading 18(11), 46–48 (2020) 5. Pu, J.N.: Design and development of Japanese learning system based on Android. Master’s thesis of Hunan University (2018) 6. Lu, J.J.: Exploration on the mixed teaching model of Japanese reading course under the background of big data. J. HUBEI Open Vocat. Coll. 32(6), 109–111 (2019) 7. Chen, S.Y.: Extended teaching of English reading course in senior high school. Engl. Campus 19(27), 123–124 (2018)
Application of Graphic Language Automatic Arrangement Algorithm in the Design of Visual Communication Zhengfang Ma(B) College of Fine Arts, Hohhot Minzu College, Hohhot 010051, Inner Mongolia, China
Abstract. Under the background of the era of reading pictures, the major of visual communication design shows a new development trend, which not only shows that the tentacles of the professional discipline continue to extend to multiple fields, but also greatly expands the communication channels. This paper mainly studies the application of graphic language automatic arrangement algorithm in visual communication. Based on the study of the characteristics of the Internet, the development of media and the popular trend of dynamic layout design, this paper summarizes the design principles, design methods and forms of dynamic layout. In this paper, based on the current research status of resource allocation algorithms in automatic scheduling, a graphical language automatic scheduling algorithm is proposed. On this basis, an optical grid resource allocation algorithm based on multimode and a service scheduling algorithm based on port resource saving of optical network based on multimode are proposed. Keywords: Graphic language · Visual communication · Automatic arrangement · Arrangement algorithm
1 Introduction With the rapid development of information technology and media, today’s society has already entered a world of cultural integration of the image era. In the past, the main communication mode of interpreting text information has been replaced by the image communication with intuitive expression. The reasons for the present situation are on the one hand, the regional and national limitations of language and characters; on the other hand, the unique characteristics of visual graphics transcend language barriers and national boundaries, which can directly and quickly spread and communicate various ideas and languages. Analyzed from the perspective of social development, the huge information flow generated every day in the information age directly leads to the fact that the simple traditional text media cannot be independently loaded. Therefore, it is an inevitable choice of historical development to take the image communication mode as the mainstream of information presentation. In the progress of information technology at the same time also promoted the development of the image transmission, this professional, for example, from the new media become the main medium of mass communication, © The Author(s), under exclusive license to Springer Nature Singapore Pte Ltd. 2022 B. J. Jansen et al. (Eds.): International Conference on Cognitive based Information Processing and Applications (CIPA 2021), LNDECT 84, pp. 60–66, 2022. https://doi.org/10.1007/978-981-16-5857-0_8
visual communication design has become unprecedentedly active, moving from two-dimensional plane space to multidimensional space and combining static modes of transmission with dynamic, multi-channel expression [1]. Visual communication design is no longer limited to its traditional media, and its transmission channels are increasingly diverse, covering not only traditional design content such as poster design, book design, print advertising design and packaging design, but also information design, interaction design, dynamic image design and multimedia design that have grown out of digital media. With so many channels for visual expression, human beings generate enormous amounts of image information every day; faced with such a volume of information to integrate and communicate, how can it be organized reasonably to present an overall layout that is illustrative, emotional and artistically beautiful [2, 3]? Research abroad on the layout of visual graphic language has a complete grammatical system, but it analyzes the multimodal meaning of visual images only from the perspective of social communication in the new media era and does not address the creation of visual design, so it cannot directly serve as a design-language coding theory to guide creation [4]. Within the scope of visual design language, foreign studies mostly discuss the rules for using single design elements, while research on combination rules is relatively lacking [4]. Although the domestic coding system of visual design language has not yet formed a complete theory, some scholars' viewpoints in this field are significant and valuable for in-depth study [5]. This paper refines and integrates valuable viewpoints at home and abroad and, on this basis, improves the mechanism of the automatic arrangement algorithm for visual graphic language, so as to develop theoretical guidance for visual image creation with clear order and structure.
2 Application of the Automatic Arrangement Algorithm for Graphic Language
2.1 Overview of Arrangement and Design
2.1.1 Concept of Layout Design
Layout design is the organized and purposeful arrangement of all elements of a piece of graphic design on the layout, so as to convey the layout information quickly and attract the attention of viewers [6]. Arrangement design establishes an orderly spatial arrangement and composition. Layout design belongs to the category of visual communication design and has become an important part of art design. In the process of visual design, visual elements such as graphics, text and color are selected and combined according to the given information and content and in accordance with functional and aesthetic requirements; at the same time, layout principles and creativity are applied to form layout rules and layouts with individual character. As the pace of life accelerates, people's visual habits change. With the advent of the Internet era, a new medium stands between information and users, and as society develops and the market changes, media and demands change as well. The denotation of arrangement design is extensive and involves many fields. Layout design in the narrow sense generally refers
to two-dimensional graphic design, such as product catalogs, enterprise publicity, posters, calendars, greeting cards, packaging, newspapers, books, albums, envelopes, letters and business cards; layout design in the broad sense also includes three-dimensional (or even four-dimensional) design, such as web design and multimedia design. Layout design is not a simple technical arrangement but a tight integration of technology and art [7, 8]. Picture elements play an important role in layout design. Pictures in the broad sense include photos, illustrations, tables and so on, while graphic elements mainly refer to graphics obtained through abstraction. Humans recorded information with pictorial language long before the use of writing, as the evolution of writing itself clearly shows. Compared with text, graphics convey information intuitively and vividly. With the advent of the Internet era, information transmission has entered the "era of picture reading": fast-paced social life has changed people's reading habits, and readers have become increasingly accustomed to obtaining information from graphics, a relaxed and vivid information carrier.
2.1.2 Development of Layout Design
The history of layout is part of the history of human civilization and reflects the progress of science and technology. In the early stages of civilization, effective ways emerged to meet people's needs for information dissemination and communication. The cuneiform writing of Mesopotamian culture is one of them: people used a special stick to write pictographs, lines and symbols on clay tablets, and they already showed a sense of arrangement. Egyptian papyrus documents, Chinese oracle bone inscriptions of the Shang dynasty and the bronze inscriptions of the Shang and Zhou dynasties are likewise among the earliest layout works of mankind [9]. With the continuous development of science and technology, mankind has entered the fifth era of information dissemination, the digital information era, in which the network has become the dominant carrier of information. Layout design has been given a new definition: it now involves visual and auditory elements across a variety of media, it is widely applied in digital media design whose scope keeps expanding, its styles are becoming more diversified and liberalized, its methods and techniques keep innovating with technology, and it is becoming bolder in interdisciplinary work. These changes have given rise to a new trend in the development of layout design [10, 11].
2.2 Automatic Arrangement Algorithm
The graph arrangement model contains two terms: a data-smoothing regularization term and a loss term. The data-smoothing regularization is a graph Laplacian regularization, which requires similar inputs to produce similar outputs (for example, similar images should receive similar scores); the loss term ensures that the re-ranked result does not differ too much from the initial result. First, the physical meaning of the mathematical symbols used in the model is described. X = [x1, x2, …, xn] denotes the feature set of the images, n images in total; Y = [y1, y2, …, yn]T is the initial ranking list of the images, and r = [r1, r2, …,
rn]T is the correlation ranking list of the images, and W is the similarity matrix of the images, where wij represents the similarity between images xi and xj. The general graph ranking model can be formulated as

\min Q(r, y, X) = R(r, X) + \lambda L(r, y)   (1)

where R(r, X) is the data-smoothing regularization term, whose purpose is to give similar pictures similar scores, and L(r, y) is the loss function, which constrains r to stay close to y. Y is a supervision signal that can be defined in two ways: either the user marks the pictures through explicit feedback, setting the corresponding yi to 1 if the picture is related to the query word and to 0 otherwise, or it is defined by implicit feedback from the user, that is, by the query results of keyword-based retrieval. \lambda is a scale coefficient in the range [0, 1]. The regularization term has two common forms.

(1) The graph Laplacian regularization term

R(r, X) = \sum_{ij} W_{ij} (r_i - r_j)^2 = r^T L r   (2)

where L = D - W is the graph Laplacian matrix and D is the degree matrix, each diagonal entry of which is the sum of the corresponding row (or column) of the similarity matrix W.

(2) The normalized graph Laplacian regularization term

R(r, X) = \sum_{ij} W_{ij} \left( \frac{r_i}{\sqrt{D_{ii}}} - \frac{r_j}{\sqrt{D_{jj}}} \right)^2 = r^T \tilde{L} r   (3)

where \tilde{L} = I - D^{-1/2} W D^{-1/2} is the normalized graph Laplacian matrix and D_{ii} is the sum of the i-th row of the similarity matrix W. Manifold ranking computes ranking scores by constructing a manifold structure over the data. It is a ranking algorithm based on semi-supervised graph theory, which combines the label information of samples with structure mining so as to exploit the internal structure of the data during ranking. The core idea is to construct a weighted network, assign a score to each initially labeled node and a score of 0 to every other node to be ranked; scores are then propagated between adjacent nodes, and each node is finally ranked by its own score, the higher the score, the higher the rank.
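To make the propagation rule above concrete, the following is a minimal sketch of iterative manifold ranking in Python with NumPy. It is not the authors' implementation; the similarity matrix W, the initial score vector y and the parameter values are illustrative assumptions.

```python
import numpy as np

def manifold_ranking(W, y, alpha=0.9, n_iter=100):
    """Propagate initial scores y over the similarity graph W.

    Uses the normalized affinity S = D^{-1/2} W D^{-1/2}; the update
    r <- alpha * S r + (1 - alpha) * y balances the smoothness term
    against the loss term of the ranking model.
    """
    deg = np.maximum(W.sum(axis=1), 1e-12)
    D_inv_sqrt = np.diag(1.0 / np.sqrt(deg))
    S = D_inv_sqrt @ W @ D_inv_sqrt
    r = y.astype(float).copy()
    for _ in range(n_iter):
        r = alpha * S @ r + (1.0 - alpha) * y
    return r

# Toy example: four images, the first one marked relevant to the query.
W = np.array([[0.0, 0.9, 0.1, 0.0],
              [0.9, 0.0, 0.2, 0.1],
              [0.1, 0.2, 0.0, 0.8],
              [0.0, 0.1, 0.8, 0.0]])
y = np.array([1.0, 0.0, 0.0, 0.0])
print(manifold_ranking(W, y))  # higher score = higher rank
```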
3 Simulation Experiment
The simulation in this chapter runs on Ubuntu and uses the Python programming language with the PyTorch deep learning library. The simulation server has an 8-core 2.20 GHz CPU and an NVIDIA GTX Titan Xp GPU. Owing to hardware limitations, the number of available wavelengths is set to 5. The bandwidth of
each wavelength is 40 Gbps. To match the network size and ensure the availability of wavelength resources, all services have the same attributes, and the required bandwidth of each service is 10 Gbps. In the algorithm, eight instances run together in a multi-threaded manner. The image format is 8-bit grayscale, 112 × 112 pixels. On our GPU machine, the total number of steps per instance, T, is 1.5 × 10^7, which takes about one day. Root mean square propagation (RMSProp) is used for gradient-descent optimization with a base learning rate of 7 × 10^−6. The source and target nodes of each service are randomly generated. The performance of the algorithm is evaluated from the following aspects to determine in which cases it is more effective. First, we evaluate the algorithm with a large number and with a small number of services. Second, we evaluate the algorithm under several different key parameters, such as the number and size of the multimodal images.
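As an illustration of the optimizer configuration described above, here is a minimal PyTorch sketch, not the paper's code; the network architecture, batch size and loss are placeholders, and only the RMSProp optimizer and the 7 × 10^−6 base learning rate come from the text.

```python
import torch
import torch.nn as nn

# Placeholder network standing in for the paper's model; the layer sizes are
# illustrative (the text only states 8-bit grayscale 112 x 112 inputs and
# 5 available wavelengths).
model = nn.Sequential(
    nn.Flatten(),
    nn.Linear(112 * 112, 256),
    nn.ReLU(),
    nn.Linear(256, 5),  # e.g. a score per candidate wavelength
)

# RMSProp with the base learning rate reported in the text.
optimizer = torch.optim.RMSprop(model.parameters(), lr=7e-6)

x = torch.rand(8, 1, 112, 112)  # dummy batch of grayscale images
loss = model(x).mean()          # placeholder objective
loss.backward()
optimizer.step()
optimizer.zero_grad()
```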
4 Simulation Experiment Results
4.1 Resource Utilization Rate
Fig. 1. Resource utilization (port number versus number of training steps for Algorithm 1 and our algorithm)
As shown in Fig. 1, when the number of services is 100, the change in the number of ports is not significant. This is because the number of services is large while the current topology is small enough to hold all of them: the topology used in the simulation has only 15 links, so with 5 wavelengths there are at most 150 ports. As a result, the number of ports does not change significantly once most services have been successfully orchestrated.
4.2 Port Quantity Comparison
Table 1. Port number comparison

                       20000   40000   60000   80000   100000
Single routing state      98      86      77      71       70
Double routing state     103      94      97      85       76
Large size multimode     104      82      71      68       66
Fig. 2. Port number comparison (port number versus number of training steps for the single routing, double routing and large-size multimode settings)
Table 1 and Fig. 2 show the performance of the algorithm under different values of its main parameters, namely the number of candidate paths and the image size. Single routing means only one route is computed, while double routing computes two routes. We found that the double-routing setting requires more training steps, whereas the size of the multimodal image has little effect on the algorithm. After sufficient training steps, the final optimization results of the three parameter settings are similar for service orchestration with 50 services. The time cost of the algorithm depends on the number and size of the multimodal images.
5 Conclusions
Visual communication design is one of the most socially significant components of the visual arts and bears the important mission of mass social communication.
With the development of digital technology, dynamic layout design has gained a much broader space for visual creativity. In this space, the visual language of dynamic graphic design exerts a unique artistic charm: it attracts the audience's attention, produces strong aesthetic effects, and generates appeal and influence, giving dynamic graphic design a more effective communication effect. In this paper we propose a new idea of solving the problem pictorially, using a multi-modal reinforcement learning (MAR) graphic-language algorithm to improve the efficiency of service orchestration. The proposed algorithm avoids the constraints of heuristic algorithms and expresses network information directly in the form of images.
References
1. Park, M.J.: Graphic narratives in comic strips and early animation. Korean J. Anim. 12(4), 39–58 (2016)
2. Serikoff, N.I., Frantsouzoff, S.A.: Arabic manuscript book traditions: script, space arrangement of the text and bibliographical description. Orientalistica 3(3), 591–618 (2020)
3. Yakhina, R.R., Afonina, E.V.: Functionality of foreign-language inclusions in Russian-language texts (on the material of modern media). Litera (5), 33–39 (2021)
4. Young, J., Bridgeman, M.B., Hermes-Desantis, E.R.: Presentation of scientific poster information: lessons learned from evaluating the impact of content arrangement and use of infographics. Curr. Pharm. Teach. Learn. 11(2), 204–210 (2019)
5. Gameson, R.: Graphic devices and the early decorated book. Library 19(3), 388–390 (2018)
6. Tyagi, S., Shukla, N., Kulkarni, S.: Optimal design of fixture layout in a multi-station assembly using highly optimized tolerance inspired heuristic. Appl. Math. Model. 40(11–12), 6134–6147 (2016)
7. Qi, J., Yang, L., Gao, Y., et al.: Integrated multi-track station layout design and train scheduling models on railway corridors. Transp. Res. Part C Emerg. Technol. 69(Aug), 91–119 (2016)
8. Smith, C.J., Gilbert, M., Todd, I., Derguti, F.: Application of layout optimization to the design of additively manufactured metallic components. Struct. Multidiscip. Optim. 54(5), 1297–1313 (2016). https://doi.org/10.1007/s00158-016-1426-1
9. Wu, W., Fan, L., Liu, L., et al.: MIQP-based layout design for building interiors. Comput. Graph. Forum 37(2), 511–521 (2018)
10. Buja, G., Bertoluzzo, M., Dashora, H.K.: Lumped track layout design for dynamic wireless charging of electric vehicles. IEEE Trans. Ind. Electron. 63(10), 6631–6640 (2016)
11. Guo, Z., Li, B.: Evolutionary approach for spatial architecture layout design enhanced by an agent-based topology finding system. Front. Archit. Res. 6(001), 53–62 (2017)
Analysis on the Application of BP Algorithm in the Optimization Model of Logistics Network Flow Distribution
Li Ma(B)
Sichuan Vocational and Technical College, Suining 629000, Sichuan, China
Abstract. A logistics network consists of multiple logistics nodes and the transportation routes connecting them. Once construction is completed, the network infrastructure and equipment remain unchanged for a long time. The maximum processing capacity of each line and each node limits the volume of goods that can flow and thus affects the flow distribution of the entire logistics network. As logistics demand increasingly shows the characteristics of many varieties and small batches, how to reduce logistics costs effectively has become a key issue for society and enterprises. This paper analyzes the application of the BP algorithm in the logistics network flow distribution optimization model, summarizes the constituent factors of the flow distribution problem from the related literature, and on this basis applies the BP algorithm to optimize logistics network flow distribution. The optimization experiment based on the BP algorithm yields the following conclusions: the optimized scheme has a lower average flow than the other two schemes in every flow segment, and when the cargo weight is largest the flow of the actual scheme reaches 16.77, larger than that of the optimized scheme. Keywords: BP algorithm · Logistics network · Flow allocation · Optimization model
1 Introduction
The continuing globalization of our country's socialist market economy has promoted the birth and development of the modern logistics industry, and the rise and growth of the modern logistics industry has in turn promoted the further globalization of our country's economy [1, 2]. Since the beginning of the new century, with the rapid development of modern transportation tools and technologies [3, 4], the modern logistics industry has also been developing rapidly. Modern logistics, representative of modern logistics tools and technologies, has received wide attention and has been applied extensively in transportation, services and manufacturing [5, 6]; it has become an important means of promoting the adjustment of China's market economy structure and the
growth of administrative service scale, and has developed into a basic supporting industry for the sustained and healthy development of China's national economy [7, 8]. In research on logistics network flow distribution optimization models, artificial intelligence algorithms and their application to route optimization have received extensive attention. Representative algorithms include genetic algorithms; their advantages are fewer constraints, no required initial optimization conditions, and a search capability more powerful than heuristic methods, so they have been widely used in route optimization problems [9]. Some researchers believe that, because the supplier location problem affects subsequent transportation costs, location and transfer should be combined into one problem, a general mathematical model should be built, and its solution optimized to fit the actual situation and meet practical needs [10]. Other researchers point out that most of the logistics and transportation problems studied involve two-tier logistics networks with simplified constraints; such models are relatively simple and still differ from real problems, since an actual warehousing and logistics network generally includes three or more customer levels, and the more levels there are, the more complicated the problem and the more decision variables are needed to solve it. In addition, traditional methods for site design and optimization, such as exact design algorithms and heuristic algorithms, also have certain limitations [11]. Some researchers take two-level logistics nodes and networks as the main research object; based on a detailed analysis of the characteristics of logistics nodes and sub-networks at all levels, they establish a model for maximizing and optimizing node and network traffic, study the capacity constraints and interaction costs between nodes, define the economic effect of total node cost as a mixed nonlinear flow function, and give a two-layer genetic algorithm to solve the model [12]. This paper studies the application of the BP algorithm in the logistics network flow distribution optimization model, summarizes the factors influencing logistics network flow through a literature review, then applies the BP algorithm to optimize logistics network routes, and uses the results as a basis for optimizing logistics network flow distribution.
2 Research on Logistics Network Flow and the BP Algorithm
2.1 The Constituent Elements of the Logistics Network Flow Distribution Problem
(1) Goods. Goods are the objects of logistics activities; their characteristics include type, weight, volume, packaging, and when and where they need to be delivered. In the logistics network, the movement of goods in a certain direction within a certain period forms a goods flow, which has two elements: flow direction and flow volume. The flow direction is the direction in which the goods are transported, and the flow volume is the quantity of goods flowing in that direction. Using mathematical optimization methods to distribute the flow rationally can reduce
the transportation distance, time and cost in the logistics network and improve its efficiency.
(2) Logistics nodes. Logistics nodes undertake cargo transportation, processing and other operations. By function, they can be roughly divided into transportation nodes, warehousing nodes, collection and distribution nodes, and comprehensive distribution nodes. For a specific logistics network, the location of each node may be fixed or undetermined, a node may be of one type or combine several types, the goods handled by a node may be of one kind or several, and the quantity of goods may satisfy all customers or only some of them.
(3) Transportation routes. Physical quantities such as cost, time and distance are usually used to evaluate transportation routes and are called route weights. The weight of a route can be fixed or variable: 1) fixed weights, such as unit cost, time and distance, do not change with the flow through the route; 2) variable weights, such as cost and time, change with the volume transported. According to capacity limitations, route allocation falls into two situations: 1) there is no capacity limitation; 2) the capacity of the routes and logistics nodes is restricted, that is, the quantity of goods transported is limited simultaneously by the capacity of the route and of the nodes connected to it.
(4) Paths. In the logistics network, a path is a properly connected set of logistics nodes and transportation routes from a supply location to a demand location. Its characteristics include direction, weight and capacity. A path can be one-way or two-way, and its weight and capacity are determined by the weights and capacities of the nodes and routes that constitute it. Flow in the logistics network is usually distributed along the path with the least weight (the optimal path) to move goods from the supply location to the demand location. If there is no capacity limit, all goods can be allocated to the single best path at the same time; when path capacity is limited, the cargo is allocated in turn to several optimal paths in order of path weight, as sketched in the example below.
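The following is a minimal Python sketch of that greedy allocation rule, not part of the original paper: paths are tried in order of increasing weight and each receives as much cargo as its remaining capacity allows. The graph, the edge attributes "weight" and "capacity", and the demand value are illustrative assumptions.

```python
import itertools
import networkx as nx

def allocate_flow(G, src, dst, demand, max_paths=10):
    """Send cargo along the lowest-weight path first; spill the remainder
    to the next-best path whenever edge capacities bind."""
    paths = list(itertools.islice(
        nx.shortest_simple_paths(G, src, dst, weight="weight"), max_paths))
    plan = []
    for path in paths:
        edges = list(zip(path, path[1:]))
        room = min(G[u][v]["capacity"] for u, v in edges)
        sent = min(room, demand)
        if sent > 0:
            plan.append((path, sent))
            for u, v in edges:
                G[u][v]["capacity"] -= sent
            demand -= sent
        if demand <= 0:
            break
    return plan

# Toy network: a cheap route through a hub with limited capacity and a
# more expensive direct route with spare capacity.
G = nx.DiGraph()
G.add_edge("supply", "hub", weight=1, capacity=8)
G.add_edge("hub", "demand", weight=1, capacity=8)
G.add_edge("supply", "demand", weight=5, capacity=20)
print(allocate_flow(G, "supply", "demand", demand=15))
```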
must be trained repeatedly on each training pattern, with forward propagation and error back-propagation repeated continuously. After learning, the neural network can start to work: given an input data set, the BP neural network identifies and judges unknown samples according to the mapping obtained during learning, which reflects the self-adaptation and self-learning capability of the BP neural network.
(1) Training and recognition process of the BP neural network
1) Network initialization. According to the input and output sequences (X, Y), determine the number of input nodes n, hidden nodes l and output nodes m, the initial connection weights w_{ij} and w_{jk}, the initial hidden layer thresholds a_j and output layer thresholds b_k, and set the learning rate and the neuron activation function.
2) Hidden layer output calculation. The hidden layer output H is computed by formula (1):

H_j = f\left( \sum_{i=1}^{n} w_{ij} x_i - a_j \right), \quad j = 1, 2, \ldots, l   (1)

3) Output layer output calculation:

O_k = \sum_{j=1}^{l} H_j w_{jk} - b_k, \quad k = 1, 2, \ldots, m   (2)

4) Error calculation:

e_k = Y_k - O_k, \quad k = 1, 2, \ldots, m   (3)
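As a concrete illustration of formulas (1)-(3), the following NumPy sketch performs one forward pass and computes the output error. It is not the paper's implementation; the layer sizes, weights and sample values are illustrative assumptions.

```python
import numpy as np

def bp_forward(x, w_ih, a, w_ho, b, f=np.tanh):
    """Forward pass of a three-layer BP network.

    H_j = f(sum_i w_ij * x_i - a_j)   -- formula (1)
    O_k = sum_j H_j * w_jk - b_k      -- formula (2)
    """
    H = f(w_ih @ x - a)
    O = w_ho @ H - b
    return H, O

n, l, m = 4, 6, 2  # input, hidden and output layer sizes (illustrative)
rng = np.random.default_rng(0)
w_ih = rng.normal(size=(l, n))   # input-to-hidden weights w_ij
w_ho = rng.normal(size=(m, l))   # hidden-to-output weights w_jk
a, b = np.zeros(l), np.zeros(m)  # thresholds

x = rng.normal(size=n)           # one input sample
Y = np.array([1.0, 0.0])         # its expected output
H, O = bp_forward(x, w_ih, a, w_ho, b)
e = Y - O                        # formula (3): error to back-propagate
print(e)
```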
3 Application of the BP Algorithm in Route Optimization of a Logistics Network
3.1 Source of Data
This section takes the route optimization problem of the logistics transportation network of a frozen food sales company (hereinafter "the company") as an example and solves it with the models and algorithms above. The company's products are in strong demand in 19 cities across the country (including Guangzhou, Jinan, Shanghai, Wuhan, Zhengzhou, Fuzhou, Guiyang, Hefei, Kunming, Lanzhou, Nanjing, Changsha and Chongqing). In view of the geographic distribution of demand, the company's management has established five major distribution centers (Guangzhou, Jinan, Shanghai, Wuhan, Zhengzhou), which serve as candidate transshipment points because of their advantages in geographical conditions and other factors. Taking the current situation into account, the company has formulated a set of transfer plans. The analysis in this section starts from the actual transfer plan, looks for a better plan, and draws management insights from the comparison.
3.2 Related Parameters
According to the actual operation of the company, the logistics network in this example has 19 demand points in total, of which 5 nodes are both supply points and candidate transshipment points (numbered 1, 2, …, 5), and the remaining 14 nodes are pure demand points (numbered 6, 7, …, 19). Five types of products are involved and are purchased from the 5 supply points; for any pair of nodes i, j (i = 1, 2, …, 6; j = 1, 2, …, 19), the total weight of the transported product (in tons) is divided into four load segments: 0 < Qi ≤ 1T, 1 < Qi ≤ 4T, 4 < Qi < 10T and Qi > 10T.
3.3 Algorithm Solution
Algorithm configuration: the following parameters are used in the example. For the first-level BP algorithm the population size is popsize = 19 and the number of generations is maxgen = 9; for the second-level BP algorithm the population size is popsize1 = 19 and the number of generations is maxgen1 = 1900. The first layer uses pc = 0.7 and pm = 0.1, and the second layer uses pc1 = 0.8 and η = 0.4. The double-layer BP method recommended in this chapter is implemented in MATLAB 7.6.0 (R2009a), and the example is solved on a machine with a 2.60 GHz CPU and 2.00 GB of memory under the Windows operating system.
4 Result Analysis
4.1 The Average Flow of the Three Schemes in Different Flow Segments
The experiments yield the average flow of the three schemes in each flow segment; the results are shown in Table 1.

Table 1. Average flow of the three schemes in different flow segments

               All direct shipping   Actual operation   Optimization
0 < Qi ≤ 1T          0.65                 0.78              0.74
1 < Qi ≤ 4T          2.23                 2.77              2.10
4 < Qi < 10T         6.37                 6.77              5.74
Qi > 10T            14.16                16.77             13.34
As Fig. 1 shows, the optimized scheme has a lower average flow than the other two schemes in every flow segment. When the cargo weight is largest, the flow of the actual scheme reaches 16.77, which is larger than that of the optimized scheme.
Fig. 1. Average flow of the three schemes in different flow segments (average flow versus flow segment for the all-direct-shipping, actual operation and optimized schemes)
4.2 The Number of Lines in Different Flow Segments of the Three Schemes
The experiments also yield the number of lines used by the three schemes in each flow segment; the results are shown in Table 2.

Table 2. Number of lines in different flow segments of the three schemes

               All direct shipping   Actual operation   Optimization
0 < Qi ≤ 1T           38                    4                4
1 < Qi ≤ 4T           51                   13               42
4 < Qi < 10T          28                   31               36
Qi > 10T               9                   21               16
As Fig. 2 shows, the actual scheme concentrates the logistics flow on 44 lines, fewer than the optimized scheme, which spreads the flow over 74 lines. This indicates that in actual operation goods are consolidated excessively on some transportation routes, and such excessive consolidation cannot minimize transportation costs.
Fig. 2. Number of lines in different flow segments of the three schemes (number of lines per scheme, broken down by flow segment)
5 Conclusions
The efficiency and benefits of a logistics network are directly determined by its topology, the number of transportation lines, and the distribution of nodes and lines within the network. Once the topology of the network is fixed it does not change for a long time, while the volume, frequency and speed of flows through the network are directly affected by supply, demand, network capacity and many other factors. In practice, each logistics node and route can be evaluated by multiple criteria. When carrying out cargo transportation and network flow distribution, we therefore need to consider not only logistics cost but also delivery time, the reliability of long-distance transportation and the reliability of the delivery network.
References 1. Cao, G.: Research on the application of artificial intelligence algorithm in logistics distribution route optimization. Paper Asia 34(5), 35–38 (2018) 2. Ahmad, A., Razali, S.F.M., Mohamed, Z.S., El-shafie, A.: The application of artificial bee colony and gravitational search algorithm in reservoir optimization. Water Resour. Manag. 30(7), 2497–2516 (2016). https://doi.org/10.1007/s11269-016-1304-z 3. Paolone, M., et al.: AC OPF in radial distribution networks - Part I: On the limits of the branch flow convexification and the alternating direction method of multipliers. Electr. Power Syst. Res. 143(Feb), 438–450 (2017)
4. Gu, J.J., Guo, P., Huang, G.H.: Achieving the objective of ecological planning for arid inland river basin under uncertainty based on ecological risk assessment. Stoch. Env. Res. Risk Assess. 30(5), 1485–1501 (2015). https://doi.org/10.1007/s00477-015-1159-5 5. Chen, T.: Equivalent permeability distribution for fractured porous rocks: the influence of fracture network properties. Geofluids 2020(1), 1–12 (2020) 6. Lu, W., Liu, M., Lin, S., et al.: Incremental-oriented ADMM for distributed optimal power flow with discrete variables in distribution networks. IEEE Trans. Smart Grid 10(6), 6320–6331 (2019) 7. Ma, J., Shen, L.X., Sheng, W.T.: Optimization for online open communication network channel allocation algorithm. Shenyang Gongye Daxue Xuebao/J. Shenyang Univ. Technol. 39(2), 193–197 (2017) 8. Gao, H., Liu, J., Shen, X., et al.: Optimal power flow research in active distribution network and its application examples. Proc. CSEE 37(6), 1634–1644 (2017) 9. Sunderland, K., Coppo, M., Conlon, M., et al.: A correction current injection method for power flow analysis of unbalanced multiple-grounded 4-wire distribution networks. Electr. Power Syst. Res. 132(Mar), 30–38 (2016) 10. Bastidas-León, E.W., Espinel-Ortiz, D.A., Romoleroux, K.: Population genetic analysis of two Polylepis microphylla (Wedd.) Bitter (Rosaceae) forests in Ecuador. Neotropical Biodiversity 7(1), 184–197 (2021) 11. Xing, X.U., Yuanzhi, L.I., Tian, K., et al.: Application of ACPSO-BP neural network in discriminating mine water inrush source. Chongqing Daxue Xuebao/J. Chongqing Univ. 41(6), 91–101 (2018) 12. Cai, L.J., Lv, S., Shi, K.B.: Application of an improved CHI feature selection algorithm. Discret. Dyn. Nat. Soc. 2021(3), 1–8 (2021)
Application of NMC System in Design Study Under the Background of Virtual Reality Technology
Aiyun Yang(B)
Hunan City University, Yiyang 413000, Hunan, China
Abstract. The purpose of teaching is to improve students' enthusiasm and participation and to cultivate their abilities. Network multimedia courseware (NMC), when designed, developed and used in a reasonable way, produces good teaching effects, and NMC will be a critical technology in the teaching field in the future. Teaching in the art design major requires students to analyze three-dimensional space by themselves, and virtual reality technology can meet this ability-training requirement. After studying the design and development of NMC and actual teaching cases, this paper proposes a concept for designing and developing NMC for experimental model-making courses and explains the teaching purpose, construction content and teaching methods of such a course. Integrating this technology rationally on the basis of existing software and applying it to product design courses can better meet students' need for independent exploration and cultivate their spatial modeling thinking. The survey shows that 32.7% of students like the flipped classroom very much, 56.1% say they like it, and only a very small number do not accept this teaching model. The survey data also show that most students accept the use of multimedia courseware in art design teaching and believe that this mode helps their learning. This fully demonstrates that the class mode of using multimedia courseware in art design teaching is highly recognized among students, that is, applying the design and development of NMC to art design teaching is feasible. Keywords: NMC · Design and development · Art design · Teaching field
1 Introduction
Advancing technology in the field of teaching has been put on the agenda; NMC is an important part of this, and its research and development has become urgent [1, 2]. To raise the overall teaching level of colleges and universities, the country has issued many policies in recent years emphasizing the use of NMC to improve teaching quality [3, 4]. Using such courseware in design teaching can increase students' interest in class and stimulate their sense of participation and learning enthusiasm [5, 6].
The teaching methods provided by multimedia technology make the classroom environment more diverse. Education policy is steered by the national administrative authorities, and the Ministry of Education, the Ministry of Industry and Information Technology and the Development and Reform Commission all actively guide the development of NMC [7, 8]. The more attention the competent authorities pay to the application of network multimedia courseware, the more deeply network multimedia technology will penetrate teaching. These guiding opinions emphasize integrating the development of network multimedia with the teaching field, point out that network multimedia teaching is the road to the future, and open up new teaching methods beyond traditional teaching [9, 10]. This article describes the development of NMC and the characteristics of its design and development, comprehensively analyzes the design and development of NMC in actual teaching cases, summarizes the advantages shown by these cases, and on this basis proposes a targeted NMC curriculum construction program [11, 12].
2 Application of NMC Design and Development in Art Design Teaching
Because web-based courseware is open, individualized, timely and shareable, its system design is more complex than that of stand-alone courseware and must address the courseware architecture, the screen interface, the different materials used in multimedia courseware, navigation strategies, interaction design, and so on.
2.1 Courseware System Structure
The main function of the network courseware architecture is to provide learners with an environment conducive to learning. The system structure of network courseware takes three main forms: linear structure, tree structure and network structure. Web-based courseware is organized in pages; one page can present one or more knowledge points, and the knowledge points are organically connected. Since the pages must be linked according to these connections, choosing the structure that links the pages is exactly the design of the courseware system structure. The teaching content system adopts a tree structure of "chapters, sections and knowledge points". The teaching system uses frame-based web pages, an interface that makes it easy for students to grasp the hierarchical structure of knowledge and that mainly contains the content the instructor presents in class. The teaching evaluation system has three parts: self-evaluation, mutual evaluation among students and teacher evaluation. The discussion system provides chat rooms, BBS and e-mail, where students can freely express and discuss their opinions in different discussion areas. In the learning resource
system, learning resources such as teaching videos, design genres and organizations, and design companies and designers are provided. In the related-links system, because of the particular nature of the subject, teachers should pay close attention to broadening students' horizons when designing courseware; in addition to the knowledge covered by the courseware itself, some design websites can therefore be linked.
2.2 Screen Interface Design
The design of the screen interface should follow several principles. First, the principle of balance: the layout of the screen should give a sense of balance, and the arrangement of colors and text should not be top-heavy or lopsided. Second, the principle of consistency: pages with the same function should have a consistent color scheme and structural layout, otherwise learners become confused about the interface or its controls. The layout and text editing should suit learners' reading habits and be comfortable to read; a page should not be so wide that text can only be read by moving the horizontal scroll bar. As for color, it is best to use no more than four colors in the same picture.
2.3 Design of Different Materials in Multimedia Courseware
The material design in online courseware covers text, graphics, color, sound and video.
(1) Text design. Text elements are visual symbols, the thematic part of information transmission and the main medium of visual communication. In multimedia network courseware, text is an important visual element; the typeface, size, line spacing and color of fonts, and how they are combined, deeply affect the visual and operational effect of the courseware interface. Contrasts in character size create a sense of hierarchy and rhythm: a large font size can set off the theme and attract attention, and some bold text can be enlarged to form a point, line or surface that becomes an important and active part of the layout. It is therefore very important to make good use of character size in the layout.
(2) Graphic design. Graphics can be understood as all figures and shapes other than literal depictions of things, and they are a key component of multimedia network courseware for art design teaching. In the graphic design of courseware we need to understand the types of graphics, the graphic formats, the layout and design of pictures, and the points to note in the design. First, the graphic types in courseware include background pictures, photos, buttons and animations. Second, the most commonly used image format in courseware is JPEG. Third, in the layout and design of pictures, attention
should be paid to the placement of the pictures and the proportion of the layout they occupy.
(3) Design of sound and video. Sound in courseware includes the human voice, music and sound effects, and it should be easy to control (for example, with a music switch). Video playback must be controllable so that it can be watched repeatedly, the video quality should be clear and smooth, and video generally needs accompanying sound. The teacher's explanation can be designed as a selectable button so that learners can freely choose whether to listen to it; background music can likewise be designed as a controllable button so that learners can select sound according to their needs. For video, learners can choose the size of the playback window according to their own viewing preference.
3 Application Experiment of the Design and Development of NMC in Art Design Teaching
3.1 Survey Method
The questionnaire survey is a research method commonly used in sociology and psychology; its characteristic is to understand features of the whole population by analyzing selected samples.
3.2 Survey Object
The survey objects are four classes of students at a university in this city. Each class has 50 students and receives 50 questionnaires, 200 questionnaires in total; 189 questionnaires were returned, a response rate of 94.5%.
3.3 Statistics
This article uses SPSS 22.0 to tabulate and analyze the questionnaire results and to conduct t-tests. The t-test formulas used in this article are as follows:

t = \frac{\bar{X} - \mu}{\sigma_X / \sqrt{n}}   (1)

t = \frac{\bar{X}_1 - \bar{X}_2}{\sqrt{\frac{(n_1 - 1)S_1^2 + (n_2 - 1)S_2^2}{n_1 + n_2 - 2} \left( \frac{1}{n_1} + \frac{1}{n_2} \right)}}   (2)

t = \frac{\bar{d} - \mu_0}{s_d / \sqrt{n}}   (3)
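For illustration, the following Python sketch reproduces the three tests with SciPy on synthetic data; the scores are randomly generated stand-ins, since the real survey data are not reproduced here.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
# Hypothetical pre- and post-experiment scores for one class of 50 students.
before = rng.normal(70, 8, size=50)
after = before + rng.normal(5, 6, size=50)
d = after - before

# Normality check (the paper applies a KS test because n > 50).
print(stats.kstest((d - d.mean()) / d.std(ddof=1), "norm"))

print(stats.ttest_1samp(d, 0.0))       # one-sample t-test, formula (1)
print(stats.ttest_ind(after, before))  # independent two-sample t-test, formula (2)
print(stats.ttest_rel(after, before))  # paired t-test, formula (3)
```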
4 Application Experiment Analysis of the Design and Development of Network Multimedia Courseware in Art Design Teaching
For data processing, this article uses the SPSSAU system. Because the sample size is greater than 50, the Kolmogorov-Smirnov test is applied first; it shows that the pre- and post-experiment evaluation results of the experimental class and the control class follow a normal distribution. A paired t-test is then performed on the scores before and after the experiment; the result p = 0.000 < 0.05 indicates a significant difference, so the result is reliable.
4.1 Student Recognition of the Use of NMC in Art Design Teaching
After multimedia courseware had been used in art design teaching, a survey of students' preference for this kind of classroom was conducted. The results are shown in Table 1.

Table 1. Survey of students' preference for the new model

                 Number of people   Proportion
Like very much         62              32.7%
Like                  106              56.1%
General                19              10.1%
Dislike                 2               1.1%
Fig. 1. Survey of the degree to which students like the new model (number of people and proportion by degree of liking)
It can be seen from Fig. 1 that 32.7% of students like the flipped classroom very much, 56.1% say they like it, and only a very small number do not accept this teaching mode. The survey data show that most students accept the use of multimedia courseware in art design teaching and believe that this mode helps their learning. This fully demonstrates that the class mode of using multimedia courseware in art design teaching is highly recognized among students, that is, applying the design and development of NMC to art design teaching is feasible.
4.2 Results of the Evaluation of the Improvement of Students' Ability
The application of NMC in art design teaching focuses on cultivating students' application abilities, which include creativity and expressiveness. To assess the cultivation of these two abilities, the students who took the class were surveyed on how much their abilities had improved. The evaluation results are shown in Table 2.

Table 2. Survey results of students' creativity and expressiveness

                        Creativity   Expressiveness
Significantly improved     24%            21%
Slightly improved          46%            50%
Not clear                  25%            23%
No improvement              5%             6%
Fig. 2. Survey results of students' creativity and expressiveness (proportion of students by degree of improvement)
As shown in Fig. 2, 24% of students think their creativity has improved significantly after flipped-classroom learning, 46% think it has improved to some extent, and 25% are not sure whether it has improved. In terms of expressiveness, 21% of students report a significant improvement, 50% think their expressiveness in art design has slightly improved, and 23% are not sure whether it has improved. These results indicate that the application of NMC in art design teaching benefits the cultivation of students' creativity and expressiveness.
5 Conclusions
For multimedia teaching courseware, visual elements are the most important source of information. Analysis of the collected courseware shows, however, that most of it applies visual art design improperly; it is therefore essential to summarize the design comprehensively and make targeted modifications to the problems found in the courseware. If visual elements are used improperly, the courseware interface becomes confusing and students lose interest in learning because they cannot tell where the focus is. The information age is driving media technology to maturity: as an important auxiliary teaching method, multimedia teaching courseware will be applied to more disciplines ever more extensively, systematically and scientifically, and its application in art design will become increasingly standardized and prosperous. There is also every expectation that teachers' future multimedia teaching courseware will improve the aesthetics of its visual art design.
References 1. Zainuddin, N., Sahrir, M.S.: Multimedia courseware for teaching Arabic vocabulary: let’s learn from the experts. Univ. J. Educ. Res. 4(5), 1167–1172 (2016) 2. Xi, X.: Design and manufacture of PE multimedia network courseware under the WEB environment. Agro Food Ind. Hi Tech 28(1), 1621–1626 (2017) 3. Zhao, X., Liu, Y.: Research on the design and optimization of English situational teaching assisted by multimedia network platform. Revista de la Facultad de Ingenieria 32(9), 642–648 (2017) 4. Bilen, M., Isik, A.H., Yigit, T.: Development of web based courseware for artificial neural networks. Gazi Univ. J. Sci. 32(4), 1138–1148 (2019) 5. Yue, N.: Computer multimedia assisted English vocabulary teaching courseware. Int. J. Emerg. Technol. Learn. (iJET) 12(12), 67–78 (2017) 6. Stavropoulos, T.G., Koutitas, G., Vrakas, D., et al.: A smart university platform for building energy monitoring and savings. J. Ambient Intell. Smart Environ. 8(3), 301–323 (2016) 7. Lian, D., Yan, Y., Zheng, L., et al.: Design and development of the course design system for engineering geodesy. J. Geomatics 41(6), 95–99 (2016) 8. Onofrei, G., Ferry, P.: Reusable learning objects: a blended learning tool in teaching computer aided design to engineering undergraduates. Int. J. Educ. Manag. 34(10), 1559–1575 (2020)
9. Zhang, B., Rui, Z.: Application analysis of computer graphics and image aided design in art design teaching. Comput.-Aided Des. Appl. 18(S4), 13–24 (2021) 10. Gao, Y.: Blended teaching strategies for art design major courses in colleges. Int. J. Emerg. Technol. Learn. (iJET) 15(24), 145 (2020) 11. Liu, F., Yang, K.: Exploration on the teaching mode of contemporary art computer aided design centered on creativity. Comput.-Aided Des. Appl. 19(S1), 105–116 (2021) 12. Budge, K.: Learning to be: the modelling of art and design practice in university art and design teaching. Int. J. Art Des. Educ. 35(2), 243–258 (2016)
The Robot Welding Training Assistant System Based on Particle Swarm Algorithm
Yigang Cui(B)
College of Petroleum Equipment and Mechanical and Electrical Engineering, Dongying Vocational College, Dongying 257091, Shandong, China
Abstract. Welding is used ever more widely in industry. The manual and semi-automatic welding methods common in the past are gradually being replaced by welding robots because they cannot guarantee welding quality or lack versatility. Welding robots overcome the defects of manual welding, improve production efficiency and reduce labor costs, and demand for them in industrial manufacturing automation keeps increasing. In this paper the particle swarm algorithm is applied to the welding robot in order to study a robot welding training assistant system based on this algorithm, taking the vision system as an example. The performance of the welding equipment is first studied and analyzed, problems such as the stability of droplet transfer and the control of the welding power source are identified, and specific experimental methods are then carried out, from which the data are analyzed. The experiments show that weld-tracking accuracy is related to the rate of change of the weld seam: the greater the rate of change, the lower and the less stable the tracking accuracy. It is also related to the accuracy of weld position detection: the lower the detection accuracy, the lower and the less stable the tracking accuracy. Taking the vision system as an example, the calibration error is found to depend not only on the hardware parameters of the camera itself but also on the calibration method, yet the average error of the feature-point recognition results stays within 0.2 mm. Keywords: Particle swarm algorithm · Robot welding · Weld seam · Vision system
1 Introduction
With the gradual disappearance of China's demographic dividend, replacing human labor with robots has become a development trend. Because welding robots can take over dangerous operations, greatly increase welding speed and ensure welding quality, they account for a large proportion of industrial robot applications. In our country, users' awareness of robots is constantly improving, and our robot
system integrators continue to develop and mature, and more and more industries are beginning to use welding robots. Many scholars have studied robot welding training assistant systems based on the particle swarm algorithm and achieved good results. For example, Liu Y proposed a discretization method for robot welding based on offline programming: to make the robot move along a specified curved trajectory, the trajectory must be discretized to generate the robot's motion instructions [1]. Wang XW noted that welding is widely used in modern industry and, to shorten the search for a suitable path, simplified welding robot path planning into a welding-sequence optimization problem; by studying discrete Levy flight and the discrete PSO algorithm, the Levy-PSO algorithm was proposed to find the optimal welding path [2]. Delice Y proposed a new improved particle swarm optimization algorithm with negative knowledge for the mixed-model two-sided assembly line balancing problem [3]. The key to using welding robots is to achieve balanced, fast and high-quality welding, because welding is a highly nonlinear, multivariable process in which many uncertain factors interact. To overcome the influence of these factors on welding quality and meet the requirements of welding manufacturing automation, the main research contents of this article include analysis of the particle swarm algorithm, construction of a distributed twin-wire welding system, design of a DSC-based digital welding power source, research on welding stability strategies, and experiments with the vision system.
2 Research on Particle Swarm Algorithm 2.1 The Concept and Characteristics of Particle Swarm Optimization (1) Basic definition of particle swarm algorithm The particle swarm algorithm is one of the swarm intelligence computing methods with typical characteristics. Particle swarm optimization (PSO) is randomly initialized as a group of particles (random solutions), and each particle updates its velocity and position based on its own historical best position (the individual extreme value) P_i and the best position in the group history (the global extreme value) P_g. Let X_i = (x_{i1}, x_{i2}, ..., x_{im}) be the current position of particle i, V_i = (v_{i1}, v_{i2}, ..., v_{im}) be the current flight velocity of particle i, P_i = (p_{i1}, p_{i2}, ..., p_{im}) be the best position experienced by particle i, and P_g = (p_{g1}, p_{g2}, ..., p_{gm}) be the best position experienced by the group, obtained by comparing all particles. Each particle updates its velocity and position according to the evolution equations (1) and (2). The evolution equations of the basic particle swarm algorithm are:

V_i(t+1) = V_i(t) + c_1 \cdot rand_1() \cdot (P_i - X_i(t)) + c_2 \cdot rand_2() \cdot (P_g - X_i(t))   (1)

X_i(t+1) = X_i(t) + V_i(t+1)   (2)
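The update rule in Eqs. (1) and (2) can be sketched directly in code. The snippet below is a minimal illustration, not the welding system's actual implementation; the sphere objective function, swarm size, and coefficient values (c1 = c2 = 2.0, a common textbook choice) are assumptions made only for demonstration.

```python
import numpy as np

def pso(objective, dim=2, n_particles=20, iters=100, c1=2.0, c2=2.0):
    """Minimal PSO following Eqs. (1)-(2): velocity and position updates
    driven by the personal best P_i and the global best P_g."""
    rng = np.random.default_rng(0)
    X = rng.uniform(-5, 5, (n_particles, dim))      # current positions X_i
    V = np.zeros((n_particles, dim))                 # current velocities V_i
    P = X.copy()                                     # personal best positions P_i
    p_val = np.array([objective(x) for x in X])      # personal best fitness
    Pg = P[p_val.argmin()].copy()                    # global best position P_g
    for _ in range(iters):
        r1 = rng.random((n_particles, dim))
        r2 = rng.random((n_particles, dim))
        V = V + c1 * r1 * (P - X) + c2 * r2 * (Pg - X)   # Eq. (1)
        X = X + V                                        # Eq. (2)
        vals = np.array([objective(x) for x in X])
        better = vals < p_val                            # update personal bests
        P[better], p_val[better] = X[better], vals[better]
        Pg = P[p_val.argmin()].copy()                    # update global best
    return Pg, p_val.min()

# Example: minimize the sphere function (an assumed test objective)
best_x, best_f = pso(lambda x: float(np.sum(x ** 2)))
print(best_x, best_f)
```

Note that, as in Eqs. (1) and (2), no inertia weight is applied to the previous velocity; many practical PSO variants add one.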
(2) Advantages of particle swarm algorithm 1) The search starts from randomly generated initial points, and as it proceeds the positions where the individual extreme value Pbest and the group extreme value Gbest appear gradually approach the position of the optimal solution. 2) A good balance between global and local search capabilities. A particle's current flight velocity not only reflects its state but also determines how much it changes, which affects the convergence of the search: the attraction toward the individual and global optima ensures that the search gradually approaches the global optimum, while the velocity carried over from previous steps expands the scope of the spatial search and increases diversity [4]. 3) The particle swarm algorithm is simple to process and is not limited to continuous problems; it is also suitable for discrete problems, as well as integer programming problems that include both continuous and discrete variables. 4) PSO performs optimization in the solution space using its own fitness function, so problems with non-differentiable objective functions can be handled easily. 5) As a stochastic optimization algorithm that follows natural rules, it is able to search in relatively difficult and complex regions, which gives it good robustness. 6) The initial values are random, so the solution obtained does not depend on the initial state; the algorithm is simple to understand and easy to implement. 2.2 Robot Welding Design (1) Welding equipment Commonly used welding equipment mainly includes hand-held welding equipment, semi-automatic welding equipment and welding robots. Holding welding equipment by hand is the most traditional welding method; it is convenient, flexible and cheap. However, welding efficiency and accuracy depend heavily on factors such as the welder's skill and working condition, so welding quality cannot be kept consistently stable [5]. In addition, the harsh working environment and repetitive work have made fewer and fewer people willing to engage in welding for a long time. Semi-automatic welding equipment is mainly designed for specific welding tasks, such as orbital welding machines. This type of equipment has a relatively high degree of automation but lacks versatility, and can only be used for welding in certain scenarios. Welding robots overcome the shortcomings of manual welding, improve production efficiency, reduce labor costs, and open up new production methods, so demand for welding robots in industrial manufacturing automation keeps growing. Welding is used more and more widely in industry, and the manual and semi-automatic welding methods previously common in industry are gradually being replaced by welding robots because their welding quality is difficult to guarantee or they lack versatility.
(2) Automatic welding mode The welding robot realizes welding automation, for example through the teach-and-playback mode: the actual welding path and welding parameters are set in advance, and the robot then welds according to the path entered during the previous teaching [6]. In this mode the robot's welding path cannot be changed; once it changes, the weld seam trajectory has to be taught and programmed again. In actual welding applications, however, workpiece clamping, machining errors, deformation of the workpiece during welding and so on cause the actual position of the weld seam to deviate from the position taught and programmed before welding, which affects welding quality. On the other hand, when the number of workpieces is small and their teaching is complicated, the efficiency of this method is greatly reduced.
3 Experimental Research on Robot Welding Training Auxiliary System Based on Particle Swarm Algorithm 3.1 Research on Technology of Material Welding Performance Analysis (1) In view of the problem that hydrogen pores are prone to appear in the weld, the following measures are taken: 1) Check the source of hydrogen, strictly control the moisture content of the welding material to ensure material quality, and dry the material before use. 2) To control the welding heat input, smaller current and voltage specifications should be adopted to shorten the existence time of the molten pool and reduce hydrogen pickup from the atmosphere. At the same time, full root fusion must be ensured so that the coating maintains a certain temperature and cools slowly, which facilitates the escape of bubbles from the root oxide film [7]. (2) Welds are prone to cracks, and the following measures should be taken: The main point is to select suitable welding materials and adjust the alloy composition, because the composition of the alloy system strongly influences the occurrence of cracks. (3) In view of the problem of low weld strength, take the following measures: Choose appropriate welding technology and good welding equipment, control the surrounding environment, and avoid weak points that affect strength. The greater the welding heat input, the more obviously the mechanical properties of the weld decrease. 3.2 Specific Implementation Methods Distributed control has been widely used in complex automation control systems by virtue of its advantages. The distributed control mode is a symbol of manufacturing system
informatization and a representative of flexible automation equipment [8]. In this chapter, by analyzing the definition and composition of the distributed control system, the overall structure of the distributed wave-controlled twin-wire robot welding system is designed, and the overall design of the specific equipment is carried out. This paper uses a distributed control method to design a distributed wave-controlled twin-wire robot welding system. Each unit in the welding system is an independent intelligent node that can solve local problems independently. At the same time, each intelligent node has a communication function, through which collaborative work with other nodes is achieved [9]. Each unit in the system is highly flexible, the system can truly achieve parallel control, and real-time performance can be guaranteed. Through the high-speed bus, the various welding devices achieve a high degree of information sharing and circulation; by sharing welding status information, the welding system can respond quickly to external disturbances, thereby improving system performance and the welding result. To achieve good welding results with the twin-wire wave-controlled welding robot, the control system of the welding power source must have excellent performance. In the twin-wire wave-controlled welding robot system, the welding power source is the core unit of the whole system, and welding quality is mainly determined by it, so designing a digital welding power source with good performance is an important part of the system construction [10]. Welding is a complex system that mixes strong and weak currents as well as analog and digital signals. Traditional digital welding power supplies based on an MCU or a DSP each have shortcomings: an MCU is oriented toward control but has weak signal-processing ability and insufficient power-control precision, while a DSP is oriented toward digital signal processing but has insufficient control ability. Therefore, this system adopts a DSC (Digital Signal Controller), which combines the advantages of the MCU and the DSP, as the control core of the power supply system. In the welding power supply, this design uses a Hall current sensor to sample the welding current [6]. Important advantages of the Hall sensor are that its current measurement is non-contact, its measurement accuracy is high, the output signal linearity is good, its immunity to external electromagnetic interference is good, and it is convenient to install. To stabilize droplet transfer in single-wire wave-controlled welding, the median pulse current waveform is used in this paper. The welding current parameters of the median waveform mainly include the peak current, base current, median current, pulse frequency, peak current duty cycle and median current duty cycle. Experiments are designed for these parameters separately, so as to obtain the best combination of parameters that achieves stable droplet transfer and good weld formation. The experimental conditions are: 15 L/min gas flow, 8 mm thick steel plate, 1.2 mm carbon steel welding wire, and the surfacing method [5].
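As a concrete illustration of how these waveform parameters interact, the sketch below enumerates candidate parameter combinations and computes the duty-weighted average current of a three-level (peak/median/base) pulse cycle. It is only a hedged example: the specific parameter values and the assumption that the remaining fraction of the cycle sits at the base current are illustrative, not the settings used in the experiments reported here.

```python
from itertools import product

def average_current(peak, median, base, peak_duty, median_duty):
    """Duty-weighted mean current of one pulse cycle, assuming the cycle is
    split into peak, median and base segments (assumption for illustration)."""
    base_duty = 1.0 - peak_duty - median_duty
    assert base_duty >= 0.0, "duty cycles must not exceed 100%"
    return peak * peak_duty + median * median_duty + base * base_duty

# Hypothetical candidate values (amperes / fractions), not the paper's data.
peaks, medians, bases = [350, 400], [180, 220], [60, 80]
peak_duties, median_duties = [0.2, 0.3], [0.2, 0.3]

for p, m, b, dp, dm in product(peaks, medians, bases, peak_duties, median_duties):
    mean_i = average_current(p, m, b, dp, dm)
    print(f"peak={p}A median={m}A base={b}A -> mean current {mean_i:.1f}A")
```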
4 Experimental Analysis on the Research of Robot Welding Training Auxiliary System Based on Particle Swarm Algorithm 4.1 Analysis of Identification Error of Weld Feature Points In order to test the accuracy and stability of the method proposed in this paper, images of two different workpieces were processed, and the intelligent detection results for the feature points were compared with manual naked-eye measurements, as shown in Table 1.

Table 1. Analysis of machine results and manual visual measurements
Experimental workpiece | Weld characteristic | Detection result | Manual measurement
5 mm V-groove | Width of left V-groove | 316.8 | 316
25 mm V-groove | Width of left V-groove | 315.5 | 315
Fig. 1. Analysis of machine results and manual visual measurements
As shown in Fig. 1, the feature point position error obtained by this processing method is relatively low and reaches the sub-pixel level; the average error of the feature point recognition results is within 0.2 mm. 4.2 Test Data for Vision System Calibration To test the calibration results, measurement points are picked in the calibration module of the host computer, the pixel distance between the measurement points is obtained and substituted into the formula to get the calibration distance between the measurement points, and the actual distance is recorded at the same time.
Table 2. Calibration test data of vision system
Actual distance (mm) | Pixel distance | Calibration distance (mm) | Error (mm)
29 | 10.2 | 30.37 | 0.37
28 | 10.4 | 20.19 | 0.19
24 | 11.3 | 20.2 | 0.2
24 | 13.3 | 30.26 | 0.26
25 | 14.7 | 30.33 | 0.33
24 | 14.6 | 40.44 | 0.44
28 | 14.2 | 30.36 | 0.36
36 | 14.5 | 40.21 | 0.21
Fig. 2. Test data of vision system calibration
According to the test results shown in Table 2, the measurement accuracy of this calibration method can reach 0.4 mm. The error of the vision system calibration is related not only to the hardware parameters of the camera itself but also to the calibration method. The main source of error is that a rectangular area replaces the ideal calibration plane; however, as long as the rectangular area is selected reasonably, the calibration result can be kept within the required accuracy (Fig. 2).
5 Conclusions This paper studies the particle swarm algorithm, discusses its advantages, and analyzes the performance and problems of welding robots through weld tracking experiments. The experiments show that weld tracking accuracy is related to the rate of weld change: the greater the rate of change, the lower the tracking accuracy and the more unstable the tracking. It is also related to the detection accuracy of the weld position: the lower the seam position detection accuracy, the lower the weld seam tracking accuracy and the more unstable the tracking. All in all, through the analysis and arrangement of the internal logic modules and power control system of the welding robots currently used in industry, we gain a deeper understanding of the internal connection form of this type of robot and the corresponding development tools.
References
1. Liu, Y., Tang, Q., Tian, X.: A discrete method of sphere-pipe intersecting curve for robot welding by offline programming. Robot. Comput.-Integr. Manuf. 57, 404–411 (2019)
2. Wang, X.W., Yan, Y.X., Gu, X.S.: Welding robot path planning based on Levy-PSO. Kongzhi yu Juece/Control Decis. 32(2), 373–377 (2017)
3. Delice, Y., Aydoğan, E.K., Özcan, U., İlkay, M.S.: A modified particle swarm optimization algorithm to mixed-model two-sided assembly line balancing. J. Intell. Manuf. 28(1), 23–36 (2014). https://doi.org/10.1007/s10845-014-0959-7
4. Wang, Z., Zhang, K., Chen, Y., et al.: A real-time weld line detection for derusting wall-climbing robot using dual cameras. J. Manuf. Process. 27, 76–86 (2017)
5. Fatih, T., Murat, O., Mehmet, M.: Computer vision system approach in colour measurements of foods: part I. Development of methodology. Food Sci. Technol. 36(2), 382–388 (2016)
6. Chao, M., Zhang, Z.W., Huang, Y.F., et al.: A fast automated vision system for container corner casting recognition. J. Mar. Sci. Technol. 24(1), 54–60 (2016)
7. Jain, N.K., Nangia, U., Jain, J.: A review of particle swarm optimization. J. Inst. Eng. (India): Ser. B 99(4), 407–411 (2018). https://doi.org/10.1007/s40031-018-0323-y
8. Tang, Y., Guan, X.: Parameter estimation for time-delay chaotic system by particle swarm optimization. Chaos, Solitons Fractals 40(3), 1391–1398 (2017)
9. Peng, L., Xu, D., Zhou, Z., et al.: Stochastic optimal operation of microgrid based on chaotic binary particle swarm optimization. IEEE Trans. Smart Grid 7(1), 66–73 (2017)
10. Minghui, W.U., Haijun, H., Xianwei, W.: Robot welding path planning based on improved ant colony algorithm. Hanjie Xuebao/Trans. China Weld. Inst. 39(10), 113–118 (2018)
A Systematic Study of Chinese Adolescents Self-cognition Based on Big Data Analysis Chunyu Hou(B) Heilongjiang School of Agricultural Economics, Mudanjiang, Heilongjiang, China
Abstract. Self-recognition plays an important role throughout life, especially during the transitional period: adolescence. The establishment and improvement of self-cognition play a guiding role in adolescents' behavior choices. This paper mainly studies the self-cognitive ability system of Chinese teenagers based on big data analysis. This paper adopts the method of cluster sampling, selects students from three middle schools in this city, gives out the self-perception scale on the spot, and uses big data analysis technology to conduct data analysis. According to the analysis results, the higher the education level of parents, the higher the self-cognition score of teenagers; the higher the per capita monthly income of the family, the higher the level of adolescents' self-cognition. This paper enriches the assessment methods of adolescents' self-cognition, provides an effective and practical measuring tool for the assessment of adolescents' self-cognition level, and contributes to the data-driven implementation of mental health management for adolescents. Keywords: Big data analysis · Chinese adolescents · Self-cognition · Behavioral cognition
1 Introduction The measurement of self-perception is relatively complex, especially at the special age of adolescence. However, there is an indisputable premise about adolescent self-cognition: its construction is not yet mature compared with adults. The establishment of self-cognition is a gradual process that is perfected and formed with the growth of life experience [1]. Adolescence is the key period for the construction of self-cognition, and the behaviors adolescents engage in and the environment they are exposed to play an important role in the achievement of self-awareness and the perfection of self-cognitive behaviors. The whole self, or the self, consists of self-cognition, self-evaluation and self-control. The level of the overall self reflects differences in the overall function of the self: the higher the level of the overall self, the truer the subject self's understanding and evaluation of the object self, the more consistent it is with objective evaluation, and the stronger its grasp and control over itself. For individuals of the same age, the higher the degree of overall self-cognition, the more positive the individual is and the higher the level of mental health [1]. In the process of studying the improvement of the self-concept, the author gradually
realized the key elements in the process of establishing self-cognition and enriched the measurement standards and evaluation mechanisms of self-cognition. To sum up, in social life individuals form their own cognition and evaluation through their own observation. A review of the self-knowledge literature both at home and abroad shows that the concepts of self-awareness and the self overlap considerably in definition and meaning, and that research on the concept of the self is broader than research on self-knowledge; researchers believe that a clear self-cognition is of great significance for an individual's learning, life and work, and can effectively increase personal happiness and sense of worth. After James proposed self-cognition, foreign scholars, especially psychology researchers, conducted further research on it. Schwenkler argued that self-knowledge should address the many situations in which the first-person perspective is not so superior, and explained the importance of self-knowledge for a person's social and psychological health [2, 3]. Levy's research found that holding on to impersonal self-knowledge, that is, knowledge of a person's state or personality obtained through a third person, may help to re-establish control. This paper provides a practical tool to effectively evaluate the level of adolescents' self-cognition and an evaluation scale for schools and parents.
2 Self-Cognition of Teenagers Based on Big Data Analysis 2.1 Big Data Analysis Technology (1) Definition of big data Big data refers to collections of data of huge magnitude, usually characterized by large volume and diverse formats. Big data plays an important role in social development: it is an objective existence that reflects the real situation of society and the most authentic data indicator of social development [4]. Different industries and applications have different big data. For example, video surveillance produces massive video big data; huge remote sensing and POI big data can be obtained from geographical analysis; e-commerce platforms have access to huge amounts of data about buyers and sellers; and so on. These seemingly useless data are actually of great value, and we can use them to obtain factual information that lays the foundation for better decisions. (2) Big data analysis At present, big data has been applied to varying degrees in many fields. For example, when Wal-Mart runs product promotions and discounts, it prices products in real time, with the pricing strategy calculated and adjusted according to demand and inventory [5]. A European gambling forecasting platform built a prediction model from historical data; the model can predict current gambling results and the behavior of lottery buyers, allowing the platform to make a steady profit. In addition, some hedge funds in the financial industry trade on the basis of real-time search engine signals, using natural language processing and artificial intelligence technology, which is essentially machine learning and data mining based on big data. Big data can predict the future based on historical data. The most common function of big data is to use historical data to provide material for machine learning
algorithms, obtain a data model through the algorithm, and then apply the model to existing data, so as to obtain prediction results for that data [6]. This standardized process means that the most basic function of big data is to predict the future. Big data can be effectively applied to business. Business is the main battlefield of big data: its development is driven by the profit-seeking behavior of businesses, and only when there is a profit to be made will people continue to study and collect big data. Large enterprises use accumulated historical data to build marketing models and user portraits of various groups, so that they can effectively control the targeting of advertising, control costs and allocate resources reasonably [7]. Big data can also shine in healthcare. A famous doctor is, in effect, a big data system; after being trained on medical experience and a knowledge system, big data can mass-produce automated "famous doctors", as if bringing Hua Tuo back to life. At the same time, it can also predict people's "future diseases" based on big data about human health and promote the early prevention and treatment of diseases. 2.2 Self-Cognitive Theory (1) Cognitive behavioral theory Cognitive behavioral theory is mainly composed of cognitive theory and behavioral theory, but the two are not simply pieced together; in the process of development they promote and integrate each other to form cognitive behavioral theory [8, 9]. For intervention studies of troubled adolescents, cognitive behavioral therapy attaches importance not only to correcting their bad behaviors but also to their cognitive mode and the harmony of cognition, affect and behavior. Cognitive behavioral theory involves four stages. The first stage concerns the focus and method of assessment: any thoughts, emotions and behaviors can be made specific and clear, and the client's problems can be assessed with reliability and validity. The second stage is the definition of the professional relationship: the professional relationship between social worker and client includes guidance, supervision, empathy and tolerance; social workers should guide clients to change their misperceptions and strengthen their coping behaviors, and in the process of service they should adopt a caring, empathetic and supportive attitude and an inclusive, interactive way of working so that clients believe they have enough confidence and ability to make changes. The third stage is interventional therapy, in which the client is first assisted to change the original wrong cognition and then guided to learn positive coping behaviors, in order to shape a normal ideology and behavior pattern. The fourth stage is case closure and follow-up [10, 11]. (2) Influencing factors of self-cognition Self-awareness is closely related not only to the personality development of adolescents but also to their life state, academic performance, social interaction, mental health and so on. The perfection of various social functions of adolescents is
closely related to the formation of self-awareness. Teenagers are susceptible to various factors in their growth process, which will affect their behavioral ability, learning and living conditions, and affect the shaping of their personality. 1) Parents’ education level Adolescents live with their parents day and night since childhood, and family environment will greatly affect the development of adolescent self-awareness. Parents will have a subtle influence on the development of teenagers in all aspects. The more educated parents are, the greater the influence on the values of teenagers will be, which is conducive to the healthy development of teenagers’ self-awareness. Secondly, the rationality of the concept of parental rearing is an indispensable key factor for the healthy growth of adolescents, which will directly affect the healthy development of adolescents’ self-awareness. 2) Household income Studies have shown that the higher the family income, the higher the level of selfawareness among teenagers. This may be because families with higher incomes will provide more social resources and cultural activities for teenagers, while families with lower incomes will provide relatively poor resources and environment for teenagers. On the other hand, in families with a certain economic basis, parents pay more attention to communication with teenagers and will spend more effort to meet their reasonable needs. 3) Religious influence China has a large number of ethnic groups, and the development of the east and west is extremely uneven. Different ethnic groups have certain differences in belief, customs and living habits. Some ethnic minorities live in areas with poor environment and relatively backward economy, which may affect the development of adolescents’ self-awareness.
3 Self-Cognitive Questionnaire 3.1 Self-Cognitive Scale The self-cognition scale adopted in this paper was developed by American psychologists; it is used not only for clinical research but also as a screening tool in surveys. The scale contains 80 items and can be divided into six dimensions of self-consciousness: behavior, intelligence and school situation, body and appearance attributes, anxiety, sociability, and happiness and satisfaction. Each item is scored 0 or 1 against the standard answer, and the total score of the scale is the sum of the 80 item scores; the higher the score, the better the adolescent's level of self-consciousness. 3.2 Respondents Three middle schools in the city were selected, and the self-cognition scale was distributed at random to students in these schools. A total of 300 scales were distributed, and 272 valid scales were obtained after removing invalid ones, an effective rate of 90.66%.
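A hedged sketch of how such a scale can be scored is shown below. The dimension-to-item mapping used here is purely hypothetical (the paper does not list which items belong to which dimension); only the 0/1 item scoring and the 80-item total follow the description above.

```python
# Hypothetical item-to-dimension mapping; the real scale's mapping is not given in the paper.
DIMENSIONS = {
    "behavior": range(0, 14),
    "intelligence_school": range(14, 28),
    "body_appearance": range(28, 42),
    "anxiety": range(42, 56),
    "sociability": range(56, 68),
    "happiness": range(68, 80),
}

def score_scale(answers, standard_answers):
    """Score 80 items as 0/1 against the standard answers and sum per dimension."""
    assert len(answers) == len(standard_answers) == 80
    item_scores = [int(a == s) for a, s in zip(answers, standard_answers)]
    dim_scores = {name: sum(item_scores[i] for i in idx) for name, idx in DIMENSIONS.items()}
    return dim_scores, sum(item_scores)   # per-dimension scores and total (0-80)

# Toy usage with fabricated answer strings (illustration only)
standard = ["yes"] * 80
respondent = ["yes"] * 60 + ["no"] * 20
dims, total = score_scale(respondent, standard)
print(dims, total)
```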
3.3 Data Analysis After data entry and careful verification, SPSS 20.0 was used for statistical analysis. Self-awareness scores and their various dimensions (behavior, intelligence and school, body and appearance attributes, anxiety, sociability, happiness and satisfaction) were all expressed as mean ± standard deviation. The independent-sample t test was used; P < 0.05 was considered statistically significant. The relevant formulas used in the t test are:

t = \frac{\bar{X} - \mu}{\sigma_X / \sqrt{n - 1}}   (1)

t = \frac{\bar{x}_1 - \bar{x}_2}{\sqrt{(\sigma_{x_1}^2 + \sigma_{x_2}^2 - 2\gamma \sigma_{x_1} \sigma_{x_2}) / (n - 1)}}   (2)
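As an illustration of the comparison performed with formulas (1) and (2), the snippet below runs an independent-sample t test in Python; the two score arrays are invented toy data, and scipy's Student t test is used as a stand-in for the SPSS procedure described above.

```python
import numpy as np
from scipy import stats

# Toy self-awareness total scores for two groups (illustrative values only)
group_low_income = np.array([48, 52, 50, 55, 47, 53, 49, 51])
group_high_income = np.array([58, 61, 57, 63, 60, 59, 62, 56])

# Independent-sample t test (equal variances assumed, as in the classic Student t test)
t_stat, p_value = stats.ttest_ind(group_low_income, group_high_income, equal_var=True)
print(f"t = {t_stat:.3f}, p = {p_value:.4f}")
if p_value < 0.05:
    print("Difference is statistically significant at the 0.05 level")
```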
4 Questionnaire Results 4.1 Parent Education Level
Table 1. A comparison of self-cognition among adolescents with different educational levels
Parents' education | Behavior | Intelligence | Shape | Anxiety | Collective | Happiness | Total score
Primary | 11.95 | 9.15 | 6.75 | 9.72 | 8.99 | 6.96 | 50.31
Junior high school | 11.72 | 10.03 | 6.81 | 9.62 | 8.08 | 7.54 | 53.8
High school | 12.71 | 10.11 | 7.51 | 9.62 | 8.90 | 7.66 | 56.51
University | 13.19 | 11.61 | 8.97 | 10.73 | 9.82 | 8.72 | 63.04
As shown in Table 1 and Fig. 1, the education level of teenagers' parents influences their self-cognition scores (P < 0.01), specifically the scores for behavior (P < 0.01), intelligence and school performance (P < 0.01), body and appearance attributes (P < 0.01), anxiety (P < 0.01), sociability (P < 0.01), happiness and satisfaction (P < 0.01) and total self-awareness (P < 0.01); the higher the education level of the parents, the higher the self-awareness scores of the adolescents. The differences between parental education levels are largest in two steps: from primary school to junior high school, and from junior high school to university.
Fig. 1. A comparison of self-cognition among adolescents with different educational levels
Fig. 2. A comparison of adolescents' self-awareness in families with different per capita monthly income
4.2 Monthly Per Capita Household Income As shown in Fig. 2, adolescents' self-cognition scores differ with family per capita monthly income, including the scores for behavior (P < 0.05), intelligence and school performance (P < 0.05), body and appearance attributes (P < 0.05), anxiety (P < 0.05),
sociability (P < 0.05), happiness and satisfaction (P < 0.05) and the total score of self-awareness (P < 0.05); the higher the per capita monthly income of the family, the higher the self-awareness score of adolescents.
5 Conclusions With the rapid increase in data volume and the demand for fast data processing, big data technology has emerged with powerful computing power. This paper uses big data analysis technology to carry out statistics and analysis on adolescents' self-cognition. From the analysis results, it can be concluded that adolescents' self-cognition level is closely related to their parents' education level and their family's per capita monthly income. This study is based on survey results and has a certain reference value. In further investigation, part of the respondents will be sampled again in order to understand the self-cognition of teenagers more comprehensively.
References
1. Glanville, R., Shumack, K.: Conversations with the self-knowledge creation for designing. Kybernetes 36(9/10), 1515–1528 (2017)
2. Schwenkler, J.: Self-knowledge and its limits. J. Moral Philos. 15(1), 85–95 (2018)
3. Levy, N.: 'My name is Joe and I'm an alcoholic': addiction, self-knowledge and the dangers of rationalism. Mind Lang. 31(3), 265–276 (2016)
4. Wei, Y., Pan, D., Taleb, T., et al.: An unlicensed taxi identification model based on big data analysis. IEEE Trans. Intell. Transp. Syst. 17(6), 1703–1713 (2016)
5. Tawalbeh, L.A., Mehmood, R., Benkhelifa, E., et al.: Mobile cloud computing model and big data analysis for healthcare applications. IEEE Access 4(99), 6171–6180 (2017)
6. Zhe, L., Choo, K., Zhao, M.: Practical-oriented protocols for privacy-preserving outsourced big data analysis: challenges and future research directions. Comput. Secur. 69, 97–113 (2016)
7. Zhang, J., Huang, M.L.: Density approach: a new model for BigData analysis and visualization. Concurr. Comput.: Pract. Exp. 28(3), 661–673 (2016)
8. Kaya, S., Avci, R.: Effects of cognitive-behavioral-theory-based skill-training on university students' future anxiety and trait anxiety. Eur. J. Educ. Res. (EJER) 16(66), 1–30 (2016)
9. Abramowitz, J.S., Blakey, S.M., Reuman, L., et al.: New directions in the cognitive-behavioral treatment of OCD: theory, research, and practice. Behav. Ther. 49(3), 311–322 (2018)
10. Käll, A., Shafran, R., Lindegaard, T., et al.: A common elements approach to the development of a modular cognitive behavioral theory for chronic loneliness. J. Consult. Clin. Psychol. 88(3), 260–282 (2020)
11. Minton, E.A., Cornwell, T.B., Kahle, L.R.: A theoretical review of consumer priming: Prospective theory, retrospective theory, and the affective–behavioral–cognitive model. J. Consum. Behav. 16(4), 309–321 (2017)
Financial Management Risk Control Based on Decision Tree Algorithm Yuan Li1 and Juan Chen2,3(B) 1 Graduate School, Jose Rizal University, 1552 Mandaluyong, Metro Manila, Philippines 2 Yunnan Technology and Business University, Yanglin Vocational Education Park Area,
Kunming, Yunnan, China 3 Graduate School, Cavite State University, 4122 Indang, Cavite, Philippines
Abstract. With the rapid development of global economic integration, the total world economy has doubled. Companies want to improve their market competitiveness and carry out financial activities, from investment to financing, to maximize company value. This paper summarizes the concepts of the decision tree and financial risk management and control, finds that financial risk has the characteristics of objectivity, comprehensiveness, complexity and duality, and puts forward the objectives of financial risk management and control. The results show that in 2021 the number of financial companies in China continues to grow rapidly: 111 new financial companies are set up, the number of financial institutions maintains double-digit growth, and the number of industry institutions reaches 679 by the end of the year. Keywords: Decision tree algorithm · Financial management · Risk control · Financial risk control
1 Introduction The rapid development of information technology and the rapid rise of "Internet plus" signal the deepening of the reform of state-owned enterprises and the coming rapid development of the social economy. At the same time, competition among enterprises is becoming increasingly fierce, so it is necessary to manage and control the financial management risks of enterprises. With the development of science and technology, many experts have studied the risk control of financial management. For example, some domestic teams have studied the financial risk control of enterprise groups, introducing the C4.5 decision tree algorithm in the modeling process and analyzing the financial risk model of listed companies. One paper analyzes the financial management and decision-making of enterprises, establishes a financial management model according to data mining results, and then builds a complete financial decision system. In order to improve the ability to predict financial distress, a method combining a decision tree with a genetic algorithm has been proposed to realize the dynamic selection of financial ratios in the modeling process.
The genetic algorithm is used to optimize the financial ratio set, which allows the final decision tree model for financial distress prediction to achieve a good balance between accuracy and generalization. A decision system has also been proposed for selecting the technically feasible optimal COG under price uncertainty. The system uses the traditional discounted cash flow method and the modern simulation-based real option evaluation method to evaluate the alternative schemes. Then, the traditional expectation criterion and a multi-criteria ranking system are used to rank the evaluation strategies of the two evaluation methods. In the multi-criteria ranking system, besides the expectation, stochastic orders representing additional profit, loss minimization and achievement of the predetermined development strategy goal are considered. Taking the Sunkun copper mine as an example, the system is verified. In order to better assess the advantages of the alternatives, the rankings were made under high (major economy) and low price conditions [1]. Some experts have studied the risk control of financial and telecommunication operation enterprises. Following Sinclair and Jerrum, a Markov chain Monte Carlo method is proposed that can overcome this problem, provided that some (possibly very inaccurate) lattice information is available. This partial information is used to guide sampling, similar to traditional importance sampling; the key difference is that the algorithm allows backtracking on the lattice points, thus minimizing the bias of importance sampling in a "self-correcting" way. A mathematical programming model has been proposed and applied to credit classification. The MCQP model only needs to solve a system of linear equations to obtain the global optimal solution, and its computational efficiency is high. A kernel function is introduced into the model to solve nonlinear problems. In addition, the theoretical relationship between the MCQP model and support vector machines is discussed. Other work examines the impact of structural change on the hedging ratio and hedging performance of the palm oil market [2]. In order to detect structural breaks in the mean and variance of the series, the Bai and Perron algorithm is used, and the iterative cumulative sum of squares of Inclan and Tiao is adjusted. The analysis further carries these regimes into the volatility-clustering modeling process and estimates the minimum-variance hedge ratio and risk minimization. By considering these structural changes in the volatility-clustering estimation, the proportion of the spot position being hedged is estimated more accurately, giving better hedging performance. In the absence of structural change, the model requires CPO market participants to rebalance their hedging ratio more frequently than models that account for structural change. A multi-stage linear stochastic programming model has also been proposed; it optimizes bond issuance by minimizing the average financing cost while keeping the leverage ratio and the bankruptcy risk at acceptable levels. Three different independent models are tested, which generate BP diagrams as the output of risk analysis. The project quantifies the relative sensitivity of the three BP models to different inputs, enhances the understanding of the factors affecting landscape-scale BP, and improves the BP model for quantitative risk analysis.
Other work describes devices, computer media, and methods for supporting consumer health needs by processing input data: an integrated health management platform obtains multi-dimensional input data from consumers, determines a health trajectory prediction value from these data, and determines consumers' opportunity targets according to the predicted health trajectory, so as to support the management of health care [3]. Although the research
results on financial management risk management and control are quite abundant, there are still deficiencies in research on financial management risk control based on the decision tree algorithm. To study the risk control of financial management based on the decision tree algorithm, this paper examines the decision tree algorithm and financial management risk control, focusing on the ID3 decision tree algorithm. The results show that the decision tree algorithm is beneficial to the study of financial management risk control.
2 Method 2.1 Concept of Decision Tree Algorithm The decision tree algorithm is one of the most commonly used methods in data mining technology. Decision tree classification has been widely used in data mining because of its fast speed, high accuracy, intuitive and easily understood results, simple generation method and many other advantages, together with its good scalability and the rich data types it supports [4]. In cluster analysis, the choice of the number of classes is very important [5]. If the number of categories is too small, the number of indicators is too small: although there are significant differences among the indicators, the selected financial risk evaluation indicators cannot describe the financial risk of enterprises comprehensively, and the information coverage is too small. Of course, if the number of classes is too large, the effect of dimension reduction cannot be achieved [6]. Here, cluster analysis and correlation analysis are used to exclude highly correlated index variables. The algorithm is the core of a data mining model; according to the learning process, algorithms can be divided into supervised (guided) learning algorithms and unsupervised learning algorithms. Supervised learning is generally used for classification problems, and its purpose is to classify and predict new data. A classification prediction model is established by learning from past data [7]. What happened in the past is a fact, which can be used to guide the establishment and evaluation of the model. Unsupervised learning algorithms are generally used to analyze and judge the internal relationships and structure of data, because these relationships and structure cannot be known in advance, so the whole learning process is unsupervised [8]. 2.2 Financial Management Risk Control (1) The concept of financial risk control Financial risk management and control means that, in the course of business, a company identifies and evaluates various risks and takes reasonable and effective measures to weaken them, managing and controlling the company's risks to ensure that the company's economic interests are not lost. The higher the level of management and control, the lower the risk of loss and the higher the company's final income [9]. (2) Characteristics of financial risk Financial risk exists objectively and will not be eliminated by changes in the subjective consciousness of managers or others. Corporate financial risk is
affected by both the internal and the external environment, and financial risk runs through the whole process of the company's production and operation. Therefore, when facing financial risk, company managers should analyze it objectively in order to control it [10]. However, no matter what measures managers take, they cannot completely eliminate the risks existing in the company, which reflects that financial risks are objective and real. The comprehensiveness of financial risk mainly means that financial risk is not caused by a single link of the company's production and operation but accompanies the whole process and is reflected in all links of the company's operations. On the one hand, the causes of financial risk are complex; on the other hand, its manifestations and influence are complex. The duality of financial risk refers to loss and profit: generally speaking, a company's risks may bring both losses and profits [11]. (3) Financial risk control objectives In the course of the company's production and operation, financial risks appear under the influence of external and internal environmental factors, causing greater or smaller losses to the company. The company's management should strengthen risk management and control, improve risk awareness, and discover potential risks in time, so as to reduce the probability of risk occurrence and the resulting losses and enable the company to obtain the maximum profit in production and operation [12]. 2.3 ID3 Algorithm of Decision Tree Among the typical decision tree algorithms, the ID3 algorithm is the earliest and most influential one. Information entropy and information gain are its two core concepts. As a greedy algorithm, attribute selection depends on how quickly the entropy decreases; from the perspective of gain, each attribute must be chosen so as to maximize the information gain, as shown in formula (1):

I(S_1, S_2, \ldots, S_m) = -\sum_{i=1}^{m} p_i \log_2(p_i)   (1)
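The entropy in formula (1) and the resulting information gain used by ID3 to pick the splitting attribute can be sketched as follows. This is a generic illustration; the toy dataset and attribute names are invented, not taken from the financial data discussed in this paper.

```python
import math
from collections import Counter

def entropy(labels):
    """I(S1,...,Sm) = -sum(p_i * log2 p_i) over the class proportions, as in Eq. (1)."""
    n = len(labels)
    return -sum((c / n) * math.log2(c / n) for c in Counter(labels).values())

def information_gain(rows, labels, attr_index):
    """ID3 gain: parent entropy minus the weighted entropy of the subsets
    produced by splitting on the attribute at attr_index."""
    total = entropy(labels)
    subsets = {}
    for row, label in zip(rows, labels):
        subsets.setdefault(row[attr_index], []).append(label)
    remainder = sum(len(s) / len(labels) * entropy(s) for s in subsets.values())
    return total - remainder

# Toy example: two attributes (leverage level, profitability) vs. a risk label
rows = [("high", "low"), ("high", "high"), ("low", "high"), ("low", "low")]
labels = ["risky", "risky", "safe", "risky"]
gains = [round(information_gain(rows, labels, i), 3) for i in range(2)]
best = max(range(2), key=lambda i: information_gain(rows, labels, i))
print("gains:", gains, "-> split on attribute", best)
```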
The distance between the evaluation score of a third-level index and the ideal (best) target score is denoted D_i^+, the distance between the evaluation score of the third-level index and the worst target score is D_i^-, and the independent variable x_i is the ratio of D_i^- to the sum of D_i^+ and D_i^-, as shown in formula (2):

y_r = \sum_i \alpha_i f(x_i) = \sum_i \alpha_i \frac{D_i^-}{D_i^+ + D_i^-}   (2)
The second-level index is obtained by weighted aggregation. Finally, through repeated weighted summation, the evaluation score z of the financial control index of each main business sector and the evaluation score z of the overall financial control index of the enterprise group are obtained. The index scores are brought into the equations, as shown in formula (3):

z = f(y) = b y   (3)
Cluster analysis is an effective way to solve these problems. Without any background knowledge, it can automatically group a batch of data according to differences in certain characteristics, so that individuals within a group are highly similar in these characteristics while individuals in different groups have relatively low similarity, as shown in formula (4):

z_r = f(y_r) = \sum_r \beta_r y_r   (4)
3 Experiment 3.1 Extraction of Experimental Objects The bagging algorithm is based on sampling with replacement from probability theory. Its main idea is to put the extracted samples back into the original data set after each draw, so that every sample set has the same size. Each drawn sample set is used to train the same type of classifier, yielding multiple classifiers that differ in their parameters; finally, the classifiers are combined by majority voting to obtain the prediction result. The algorithm mainly addresses the problem of small sample data. The bagging method performs well for unstable learning algorithms such as neural networks, decision trees and support vector machines. Instability here means that when the training samples used for model construction change slightly, the classification model changes significantly, resulting in large differences in classification results. Therefore, by training the same unstable algorithm, such as a neural network or decision tree, on the different training sets produced by each resampling, different classifiers are obtained. The results of the different classifiers differ considerably, that is, the internal parameters of the neural network or other algorithm have changed, producing large changes in the internal operation of the model, so the results naturally differ. A code sketch of this procedure is given after Sect. 3.2. 3.2 Experimental Analysis First, although the simple oversampling technique is easy to use in terms of structure and programming, the incidence of overfitting increases with the repetition rate and the frequency of simple copying, and although many oversampling techniques have been improved, they are not just virtual resampling. Second, as the number of negative samples decreases, some important information is ignored, which reduces the overall performance of the model. Third, sampling techniques that can be applied to the analysis and processing of sample data by a single classifier are not widely used and have many limitations, so the stability of the results is not high.
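The bagging-with-majority-voting procedure described in Sect. 3.1 can be illustrated with a short, generic sketch. It uses scikit-learn's BaggingClassifier with a decision tree as the unstable base learner; the synthetic dataset and parameter values are assumptions for demonstration, not the data or settings used in this study.

```python
from sklearn.datasets import make_classification
from sklearn.ensemble import BaggingClassifier
from sklearn.model_selection import train_test_split
from sklearn.tree import DecisionTreeClassifier

# Synthetic stand-in for a financial risk dataset (illustration only)
X, y = make_classification(n_samples=500, n_features=10, n_informative=5, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.3, random_state=0)

# Bagging: bootstrap samples drawn with replacement, one tree per sample,
# predictions combined by voting across the trees.
bagged = BaggingClassifier(DecisionTreeClassifier(), n_estimators=50,
                           bootstrap=True, random_state=0)
bagged.fit(X_train, y_train)
print("bagged tree accuracy:", bagged.score(X_test, y_test))

# Single unstable tree for comparison
single = DecisionTreeClassifier(random_state=0).fit(X_train, y_train)
print("single tree accuracy:", single.score(X_test, y_test))
```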
4 Discussion 4.1 Distribution of Financial Companies in China’s Enterprise Groups Since 2005, the CBRC has approved the establishment of 6–7 financial companies every year, and approved the establishment of Sino foreign joint venture financial companies
and foreign-funded financial companies. At the same time, China Banking Regulatory Commission has also carried out industry restructuring to reduce the number of high-risk financial companies. Generally speaking, with the promotion of the CBRC, the scale and strength of financial companies have been rapidly improved, and the whole industry has entered a normal stage of standardized development. See Table 1 for the number of corporate financial companies of enterprise groups in China.

Table 1. Number of corporate bodies of financial companies of enterprise groups from 2017 to 2021
Particular year | Number
2017 | 356
2018 | 384
2019 | 493
2020 | 568
2021 | 679
It can be seen from the above that in 2017, there were 356 enterprise group financial companies in China, 384 in 2018, 493 in 2019, 568 in 2020 and 679 in 2021. The results are shown in Fig. 1.

Fig. 1. Number of corporate bodies of financial companies of enterprise groups from 2017 to 2021
From the above, in 2021 the number of financial companies in China continues to grow rapidly: 111 new financial companies are set up, the number of financial institutions maintains double-digit growth, and the number of industry institutions reaches 679 by the end of the year.
Table 2. Industrial distribution of corporate bodies of financial companies of enterprise groups in 2021
Type | Percentage
Primary industry | 23%
The secondary industry | 37%
The tertiary industry | 40%
It can be seen from the above that the distribution of the primary industry of China’s financial companies accounts for 23%, the distribution of the secondary industry of China’s financial companies accounts for 37%, and the distribution of the tertiary industry of China’s financial companies accounts for 40% (Fig. 2).
Fig. 2. Industrial distribution of corporate bodies of financial companies of enterprise groups in 2021
From the ownership form of financial companies, state-owned financial companies account for an absolute proportion. In 2021, the total number of institutions accounts for 40% of the total.
5 Conclusion In the current context of big data, decision tree algorithms are widely used in financial management. The decision-making model proposed in this paper can analyze the financial situation of enterprises well and realize the functions of financial analysis and decision-making management. In the stage of enterprise financial diagnosis and decision-making, enterprise managers should not only pay attention to the financial management situation reflected in the financial statements, but also comprehensively consider changes in the enterprise's environment and its specific operating conditions.
References 1. Jun, C., Jin, P.: Financial management and decision of decision tree algorithm based on data mining. IPPTA Q. J. Indian Pulp Paper Tech. Assoc. 30(8), 70–74 (2018)
2. Zhai, S.: Research on enterprise financial management and decision making based on decision tree algorithm. Boletin Tecnico/Tech. Bull. 55(15), 166–173 (2017) 3. Rosati, R., Romeo, L., Goday, C.A., Menga, T., Frontoni, E.: Machine learning in capital markets: decision support system for outcome analysis. IEEE Access, PP(99), 1 (2020) 4. Moon, M., Lee, S.K.: Applying of decision tree analysis to risk factors associated with pressure ulcers in long-term care facilities. Healthc Inform Res, 23(1), 43–52 (2017) 5. Podhorska, I., Vrbka, J., Lazaroiu, G., Kovacova, M.: Innovations in financial management: recursive prediction model based on decision trees. Mark. Manag. Innov. 3, 276–292 (2020) 6. Zhao, Y.: Research on personal credit evaluation of internet finance based on blockchain and decision tree algorithm. EURASIP J. Wirel. Commun. Netw. 2020(1), 1–12 (2020). https:// doi.org/10.1186/s13638-020-01819-w 7. Howard, C., Hernandez, M.K., Livingood, R., Calongne, C.: Applications of decision tree analytics on semi-structured north Atlantic tropical cyclone forecasts. Int. J. Sociotechnol. Knowl. Dev. 11(2), 31–53 (2019) 8. Das, S., Padhy, S.: A novel hybrid model using teaching–learning-based optimization and a support vector machine for commodity futures index forecasting. Int. J. Mach. Learn. Cybern. 9(1), 97–111 (2015). https://doi.org/10.1007/s13042-015-0359-0 9. Nithyashree, D., Ramya, B., Rohith, V., Birundha, R.: Plant disease detection using decision tree algorithm and automated disease cure. Int. J. Eng. Technol. 07(3), 1834–1838 (2020) 10. Gotardo, M.A.: Using decision tree algorithm to predict student performance. Indian J. Sci. Technol. 12(8), 1–8 (2019) 11. Benediktus, N., Oetama, R.S.: The decision tree c5.0 classification algorithm for predicting student academic performance. Jurnal Ultimatics 12(1), 14–19 (2020) 12. Xu, C., Shiina, T.: Market risk control in investment decisions. In: Risk Management in Finance and Logistics. TSS, vol. 14, pp. 35–57. Springer, Singapore (2018). https://doi.org/ 10.1007/978-981-13-0317-3_3
Analysis and Design of Construction Engineering Bid Evaluation Considering Fuzzy Clustering Algorithm Shanshan Deng and Lijun Zhang(B) Chongqing Telecommunication Polytechnic College, Chongqing 402247, China
Abstract. Construction engineering bid evaluation is a multi-attribute decisionmaking evaluation. In order to prevent the negative influence of individual poor indicators from being neutralized by other indicators, strengthen the synergistic effect of indicators, and improve the rationality of decision-making, fuzzy clustering algorithm is adopted. On the basis of λ synergy degree, the standardized processing of the decision matrix is completed, and the scheme correction factor model is established, and the scheme is selected on this basis. The effectiveness and feasibility of the fuzzy clustering algorithm are illustrated by a calculation example. Keywords: Synergy degree · Proposal modification factor · Fuzzy clustering algorithm · Engineering bid evaluation
1 Introduction My country’s project bidding has become the first choice and the main transaction method in the engineering construction market, and the online electronic bidding evaluation system has also emerged [1, 2]. However, most bid evaluation methods of the bidding evaluation system still adopt the traditional single-item evaluation method, comprehensive evaluation method, low bid price method, composite bid base method, two-stage evaluation method, etc. [3, 4], and the most widely used method is the quantitative comprehensive evaluation method. (Also called scoring method, percent method). The bid evaluation experts score each bidder on various evaluation indicators according to the evaluation criteria [5, 6], and then average or weighted average the scores given by the experts to calculate the score of each bidder, thereby determining the winning bidder [7]. The subjective factors of this method of evaluation and scoring have a greater impact. Fuzzy clustering algorithm is a combination of qualitative and quantitative decision analysis method [8, 9]. It has a wide range of practicality for decision analysis of various types of problems. It has been widely used in the society, economy, and engineering circles at home and abroad. However, the use of this analysis method for evaluation has its shortcomings [10]. When different experts give the weight of the evaluation matrix, there is also a certain degree of subjectivity. On the basis of analytic hierarchy process, © The Author(s), under exclusive license to Springer Nature Singapore Pte Ltd. 2022 B. J. Jansen et al. (Eds.): International Conference on Cognitive based Information Processing and Applications (CIPA 2021), LNDECT 84, pp. 106–112, 2022. https://doi.org/10.1007/978-981-16-5857-0_14
fuzzy cluster analysis can be used to cluster the results of each expert's project evaluation and obtain the final evaluation results, making the final decision more convincing. Tendering and bidding for construction projects is the main form of competition in construction activities under the market economy. A scientific and objective decision-making method for bid evaluation ensures the open, fair and impartial selection of the winning bidder; it is an important link in protecting the interests of both the tendering and bidding parties and bears on the success of the bidding work. The evaluation of project bids involves many factors. Currently, the methods commonly used in construction project bid evaluation and decision-making are mainly single-factor evaluation and comprehensive evaluation methods. The comprehensive evaluation methods use the analytic hierarchy process, fuzzy evaluation, artificial neural networks, gray clustering, the multi-factor comprehensive judgment of TOPSIS and other means to evaluate project bids, and have achieved good results. To further improve the quality of evaluation, many scholars have discussed and corrected the deficiencies of the various evaluation methods. This paper tries to use a fuzzy clustering algorithm based on the degree of synergy to prevent the negative influence of individual poor indicators from being neutralized by other indicators and to improve the rationality of decision-making. The fuzzy clustering algorithm strengthens the synergy effect of the evaluation indexes. Based on the λ synergy degree, it completes the standardized processing of the decision matrix, establishes the scheme correction factor model, and selects the best scheme on this basis. An example is used to illustrate the effectiveness and feasibility of the fuzzy clustering algorithm, which provides a feasible mathematical model for the data processing of project bidding decision-making.
2 Analysis of Construction Engineering Bid Evaluation
The fuzzy clustering evaluation system generally consists of a target layer, a criterion layer, and a scheme layer. Bid evaluation of construction projects generally requires comprehensive consideration of the project quotation, project duration, consumption of the three main materials, project quality, construction technology, safety guarantee, corporate reputation, relationship factors, and other factors to determine the ranking of the comprehensive competitiveness of the bidders. A multi-level hierarchical structure model is then established for each element of the decision-making problem (see Fig. 1).

2.1 Principle of Fuzzy Clustering Algorithm
The objective function of fuzzy clustering is

min J_p(R, V) = Σ_{k=1}^{n} Σ_{i=1}^{c} (r_ik)^q ‖x_k − V_i‖²
s.t. Σ_{i=1}^{c} r_ik = 1, r_ik ∈ [0, 1], Σ_{k=1}^{n} r_ik > 0   (1)

where ‖x_k − V_i‖² is the distance between the k-th sample and the i-th class, and r_ik is the weight (membership degree) of the k-th sample belonging to the i-th class. Find the r_ik and V_i that make
Fig. 1. Bid evaluation index system for construction projects
J reach the minimum; that is, both the distance and the weight of belonging to a certain class are considered. The preceding sum sign indicates the weighted sum over the samples belonging to a certain class. Clustering criterion: find an appropriate fuzzy classification matrix R and clustering center vector V so that the objective function reaches its minimum. It has been proved mathematically that when q ≥ 1 and x_k ≠ V_i, the optimal solution can be obtained by iteration, and the calculation process is convergent. The classification effect with q = 2 is good, and the calculation is easier.

2.2 Construct the Decision Matrix of the Fuzzy Clustering Algorithm
Arrange the n bidding schemes according to the feasible construction organization methods of the project to form the set of alternative schemes, and take the universe of discourse as the set of feasible scheme factors: U = (U1, U2, U3, …, Un). Quantitative indicators of the schemes, such as the total bid price and construction period, and qualitative indicators, such as construction technical measures and corporate social reputation, constitute an indicator set V = (V1, V2, V3, …, Vm). The quantitative indicators directly adopt the specific values of the bidding plan; the qualitative indicators, which are hard to describe quantitatively, are quantified by expert scoring on a percentage system. Each index can be regarded as a fuzzy subset of the evaluation universe U. Since each index in V has a different importance for the final comprehensive evaluation of the best plan, that is, each index influences the choice of plan with a different weight, a fuzzy subset W on the index set V is formed, called the weight vector W = (W1, W2, W3, …, Wm). The weight vector needs to be normalized, that is, it must satisfy ΣWi = 1. On this basis, the decision matrix A of the fuzzy clustering algorithm is constructed.
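As a concrete illustration of how objective (1) in Sect. 2.1 can be minimized iteratively, the following minimal Python/NumPy sketch (it is an illustration, not the authors' implementation) applies a standard fuzzy c-means iteration with q = 2 to the raw bidder data of Table 1; the choice of two clusters is arbitrary, and in practice the indicators would first be normalized as in the next subsection.

```python
import numpy as np

def fuzzy_c_means(X, c, q=2, n_iter=100, tol=1e-6, seed=0):
    """Minimal fuzzy c-means iteration for objective (1): returns memberships R and centers V."""
    rng = np.random.default_rng(seed)
    n = X.shape[0]
    R = rng.random((c, n))
    R /= R.sum(axis=0, keepdims=True)               # each sample's memberships sum to 1
    for _ in range(n_iter):
        W = R ** q
        V = (W @ X) / W.sum(axis=1, keepdims=True)  # weighted cluster centers V_i
        d2 = ((X[None, :, :] - V[:, None, :]) ** 2).sum(axis=-1) + 1e-12  # ||x_k - V_i||^2
        R_new = 1.0 / d2 ** (1.0 / (q - 1))         # standard FCM membership update
        R_new /= R_new.sum(axis=0, keepdims=True)
        if np.abs(R_new - R).max() < tol:
            return R_new, V
        R = R_new
    return R, V

# Raw bidder data from Table 1 (4 bidders x 7 indicators); clustering these unscaled values is only a toy
X = np.array([[892.53, 195, 80, 87, 78, 78, 15.0],
              [928.87, 190, 86, 83, 80, 86, 10.2],
              [906.76, 185, 84, 76, 80, 83,  8.7],
              [951.65, 167, 81, 87, 79, 90, 17.2]])
R, V = fuzzy_c_means(X, c=2)
print(np.round(R, 3))
```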
2.3 Standardized Processing of the Decision Matrix Based on Synergy
Based on the idea of rewarding the good and punishing the bad, the linear transformation operator Z_j^λ of the λ synergy degree is used to normalize the decision matrix:

Z_j^λ = min_i a_ij + λ (max_i a_ij − min_i a_ij), 0.5 ≤ λ ≤ 1.

If a_ij is a benefit index,

r_ij = (a_ij − Z_j^λ) / (Z_j^λ − min_i a_ij)   (2)

and if a_ij is a cost index,

r_ij = (Z_j^λ − a_ij) / (Z_j^λ − min_i a_ij)   (3)
The superior attribute values obtained from the standardized decision matrix become positive, and the inferior attribute values become negative.

2.4 Selection of the Construction Engineering Bid Evaluation Analysis Plan
(1) Determine the ideal solution and the negative ideal solution:

r⁺ = {max_i r_i1, max_i r_i2, …, max_i r_in},  r⁻ = {min_i r_i1, min_i r_i2, …, min_i r_in}   (4)

(2) Calculate the group benefit value, the individual regret value, and the benefit ratio:

M_i = Σ_j w_ij (r_j⁺ − r_ij) / (r_j⁺ − r_j⁻)   (5)

N_i = max_j w_ij (r_j⁺ − r_ij) / (r_j⁺ − r_j⁻)   (6)

Q_i = v (M_i − M⁻) / (M⁺ − M⁻) + (1 − v)(N_i − N⁻) / (N⁺ − N⁻)   (7)
v is the coefficient of the decision-making mechanism of the "majority criterion" strategy. When v > 0.5, the decision is made based on the opinions of the majority; when v = 0.5, it is made based on approval; when v < 0.5, it is made based on rejection. This article takes v = 0.5. M_i represents the group benefit of an alternative: the smaller the M_i value, the greater the group benefit. N_i represents the individual regret: the smaller the N_i value, the smaller the individual regret. Under the condition that the benefit threshold is satisfied and the scheme is stable, the schemes can be sorted by their Q_i values, and the scheme with the smallest Q_i value is the optimal scheme.
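A compact numerical sketch of the normalization and scheme-selection steps in Eqs. (2)–(7) is given below (Python/NumPy; an illustration, not the authors' implementation). The benefit/cost classification of the indicators and λ = 0.8 are inferred from the R matrix reported in Sect. 3.1 rather than stated in the paper, and the correction-factor step of Sect. 3.2 is omitted, so the base weight vector is applied directly; the resulting Q values therefore only approximate Table 2.

```python
import numpy as np

def synergy_normalize(A, benefit, lam=0.8):
    """Eqs. (2)-(3): lambda-synergy normalization of decision matrix A (schemes x indicators)."""
    a_min, a_max = A.min(axis=0), A.max(axis=0)
    Z = a_min + lam * (a_max - a_min)                              # Z_j^lambda per indicator
    return np.where(benefit, (A - Z) / (Z - a_min), (Z - A) / (Z - a_min))

def rank_schemes(R, w, v=0.5):
    """Eqs. (4)-(7): group benefit M, individual regret N, benefit ratio Q (smallest Q is best)."""
    r_pos, r_neg = R.max(axis=0), R.min(axis=0)                    # ideal / negative-ideal solutions
    share = w * (r_pos - R) / (r_pos - r_neg)
    M, N = share.sum(axis=1), share.max(axis=1)
    Q = v * (M - M.min()) / (M.max() - M.min()) + (1 - v) * (N - N.min()) / (N.max() - N.min())
    return M, N, Q

# Table 1 data (rows: Companies A-D); quotation and period are treated as cost-type indicators
A = np.array([[892.53, 195, 80, 87, 78, 78, 15.0],
              [928.87, 190, 86, 83, 80, 86, 10.2],
              [906.76, 185, 84, 76, 80, 83,  8.7],
              [951.65, 167, 81, 87, 79, 90, 17.2]])
benefit = np.array([False, False, True, True, True, True, True])
w = np.array([0.30, 0.10, 0.07, 0.10, 0.05, 0.22, 0.16])           # base weight vector cited from [4]
M, N, Q = rank_schemes(synergy_normalize(A, benefit), w)
print(np.argsort(Q))                                               # scheme indices, best to worst
```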
3 Engineering Example Application
In the project evaluation system, seven indicators, including project quotation, construction period, unit price of main items, engineering quality assurance system, project quality in the past 5 years, construction organization design, and total construction area of similar projects, are selected as participating factors. Table 1 shows the indicators and quantitative values of each factor of the engineering technical plans. Quantifiable indicators are substituted with their specific values directly, and indicators that cannot be quantified are converted to a percentage scale through expert scoring.

Table 1. Technical and economic index values of the bidding project

Index                                        | Company A | Company B | Company C | Company D
Quotation (ten thousand yuan)                | 892.53    | 928.87    | 906.76    | 951.65
Construction period (d)                      | 195       | 190       | 185       | 167
Unit price of main items                     | 80        | 86        | 84        | 81
Engineering quality assurance system         | 87        | 83        | 76        | 87
Project quality in the past 5 years          | 78        | 80        | 80        | 79
Construction design                          | 78        | 86        | 83        | 90
Total construction area of similar projects  | 15        | 10.2      | 8.7       | 17.2
3.1 Standardized Processing of the Decision Matrix Based on Synergy
For the technical and economic index data of the bidding schemes in Table 1, formulas (1), (2), and (3) are used to complete the standardized processing and construct the decision matrix:

R = ⎡  1.00  −0.25  −1.00   0.25  −1.00  −1.00  −0.07 ⎤
    ⎢  0.23  −0.03   0.25  −0.20   0.25  −0.17  −0.78 ⎥
    ⎢  0.70   0.20   0.17  −1.00   0.25  −0.48  −1.00 ⎥
    ⎣ −0.25   1.00  −0.79   0.29  −0.38   0.25   0.25 ⎦

3.2 Calculation of the Correction Factor and Comprehensive Weight of Each Candidate Scheme Index

U = ⎡ 2.718  0.779  0.367  1.284  0.368  1.284  0.929 ⎤
    ⎢ 1.261  0.973  1.284  0.815  1.284  0.619  0.458 ⎥
    ⎢ 2.012  1.217  1.185  0.368  1.284  0.846  0.368 ⎥
    ⎣ 0.779  2.718  0.453  1.284  0.687  0.368  1.284 ⎦
According to the literature [4], the weight vector of the evaluation indices is w = (0.30, 0.10, 0.07, 0.10, 0.05, 0.22, 0.16). Formulas (5), (6), and (7) are then used, together with the normalization-scheme correction factors and the index weights, to calculate the comprehensive weight of each attribute:

w_ij = ⎡ 0.015  0.005  0.004  0.005  0.002  0.011  0.008 ⎤
       ⎢ 0.336  0.026  0.018  0.026  0.013  0.057  0.041 ⎥
       ⎢ 0.038  0.012  0.009  0.012  0.006  0.028  0.026 ⎥
       ⎣ 0.169  0.056  0.039  0.056  0.028  0.124  0.090 ⎦

3.3 Scheme Selection Based on the Fuzzy Clustering Algorithm and Construction of the Ideal Scheme
Construct the ideal solution: r⁺ = (1.00, 1.00, 0.25, 0.25, 0.25, 0.25, 0.25). Negative ideal solution: r⁻ = (−0.25, −0.25, −1.00, −1.00, −1.00, −1.00, −1.00). Calculate the group benefit value Mi, the individual regret value Ni, and the benefit ratio Qi according to formulas (5), (6), and (7).

Table 2. Fuzzy clustering algorithm estimated evaluation value of bidding scheme

          | M     | N     | Q
Company A | 0.023 | 0.011 | 0.000
Company B | 0.290 | 0.207 | 1.000
Company C | 0.065 | 0.020 | 0.102
Company D | 0.215 | 0.169 | 0.7626
According to the calculation results, the scheme selection of the fuzzy clustering algorithm is analyzed, and the scheme ranking is A, C, D, B. The fuzzy clustering analysis provides more accurate ranking data and more complete information for scheme evaluation (Table 2).
4 Conclusions
With the help of a fuzzy clustering algorithm based on the degree of synergy, this paper establishes a new evaluation method for project bids, which provides a new way to evaluate engineering projects. The negative impact of an individual poor indicator is prevented from being neutralized by other indicators, and the rationality of decision-making is improved. The fuzzy clustering algorithm strengthens the synergy effect of the evaluation indices: based on the λ synergy degree, it completes the standardized processing of the decision matrix, establishes the project correction factor model, and selects the best scheme on this basis. The synergy-degree fuzzy clustering algorithm objectively reflects the influence of the indices on the ranking evaluation, making the clustering and ranking of the schemes more reasonable and the conclusions more consistent with the actual engineering situation.
Acknowledgements. The study was supported by “Research on application of self-compacting concrete mixed with industrial waste residue in structural engineering, China (Grant No.KJQN202005502)” and “Research and Application of Damping and Noise Reducing Road Concrete, China (Grant No. KJQN201805501)”.
References 1. Liu, B., Huo, T., Liao, P.C., Yuan, J., Sun, J., Hu, X.: Special partial least squares (PLS) path decision modeling for bid evaluation of large construction projects. KSCE J. Civ. Eng. 21(3), 1–14 (2017). https://doi.org/10.1007/s12205-016-0702-3 2. Xiao, L., Chen, Z.S., Zhang, X., Chang, J.P., Chin, K.S.: Bid evaluation for major construction projects under large-scale group decision-making environment and characterized expertise levels. Int. J. Comput. Intell. Syst. 32(14), 267–273 (2020) 3. Gajzler, M., Zima, K.: Evaluation of planned construction projects using fuzzy logic. Int. J. Civ. Eng. 15(4), 1–12 (2017). https://doi.org/10.1007/s40999-017-0177-8 4. Seresht, N.G., Lourenzutti, R., Fayek, A.R.: A fuzzy clustering algorithm for developing predictive models in construction applications. Appl. Soft Comput. 29(3), 341–359 (2020) 5. Tran, D.H., Cheng, M.Y., Pham, A.D.: Using fuzzy clustering chaotic-based differential evolution to solve multiple resources leveling in the multiple projects scheduling problem. Alex. Eng. J. 55(2), 1541–1552 (2016) 6. Papsdorf, K., Sima, S., Richter, G., Richter, K.: Construction and evaluation of yeast expression networks by database-guided predictions. Microbial Cell 3(6), 236–247 (2016) 7. Shi, Z., Wu, D., Guo, C., Zhao, C., Cui, Y., Wang, F.Y.: FCM-RDPA: TSK fuzzy regression model construction using fuzzy C-means clustering, regularization, droprule, and powerball adabelief. 12(1), 244–251 (2020) 8. Karunambigai, M.G., Akram, M., Sivasankar, S., Palanivel, K.: Clustering algorithm for intuitionistic fuzzy graphs. Int. J. Uncertain. Fuzz. Knowl.-Based Syst. 25(3), 367–383 (2017) 9. Zhang, Y.: Research on decision-making method of bid evaluation for engineering projects based on fuzzy DEA and grey relation. Open Cybern. Syst. J. 9(1), 711–718 (2015) 10. Zheng, L., Ouyang, W.: A model of teacher evaluation system based improved fuzzy clustering algorithm. Revista de la Facultad de Ingenieria 32(7), 461–466 (2017)
Analysis of Fuel Consumption in Urban Road Congestion Based on SPSS Statistical Software Youzhen Lu(B) and Hui Gao School of Business, The Hohai University, Nanjing 211100, China
Abstract. At present, rapid urbanization and traffic motorization in China are aggravating the problem of urban road congestion. When a road is congested, vehicles go through a variety of driving cycles, which causes extra fuel consumption and increases the fuel cost of urban transportation. This paper analyzes the main factors that affect fuel consumption in terms of driving characteristics. By collecting relevant data and using the SPSS statistical analysis software, fuel consumption models under uniform-speed, deceleration, and acceleration conditions are established. Based on the relevant data, a time proportion model of deceleration and acceleration and an idle time proportion model are established. Based on these two models, a fuel consumption model of urban road congestion is established. Representative values of the road congestion states are substituted into the model to compare and analyze the fuel consumption under free, mild, moderate, and serious congestion. Keywords: Urban road congestion · Fuel consumption · Driving conditions · Influencing factors · Regression analysis
1 Introduction
Since the 1980s, with the rapid development of China's cities, the acceleration of urban motorization has brought increasingly serious road congestion problems. At the same time, it has exacerbated motor vehicle energy consumption, thus increasing the fuel cost of urban transportation. By the end of 2019, China's civil car ownership had reached 261.5 million, and private car ownership had reached 226.35 million, increasing the pressure on the road system [1]. At the same time, the proportion of transportation land in China is relatively low; relevant data show that by 2019 the supply of urban transportation land in China accounted for 25.3% [2]. In western developed countries the proportion of urban traffic land is much larger, about 30% on average in ordinary cities and 40–50% in some cities. Secondly, the proportion of the transportation system's energy consumption in total energy consumption is gradually increasing and has exceeded 30% [3]. Because automobile fuel consumption accounts for an increasing share of China's oil consumption, domestic oil production can no longer meet the huge energy demand, and the gap can only be made up by increasing imports.
2 Literature Review
In 2010, Zaabar of Michigan State University verified and evaluated the World Bank HDM fuel consumption model and the Australian ARFCOM model, specifically analyzed the applicability of empirical and mechanistic formulas, and calibrated and confirmed candidate models of the influence of road conditions on vehicle fuel consumption [4]. In 2012, Delgado-Neira et al. studied empirical data models of the fuel consumption of large and medium-sized vehicles under new or unseen driving conditions based on known chassis dynamometer and in-use data [5]. In 2019, Fiori et al. analyzed the energy consumption of different vehicle trajectories under congestion and free-flow conditions using real and simulated data; the results show that the relationship between traffic congestion and energy consumption changes with the increase of average traffic speed [6]. In 2020, Alessandra Boggio-Marzet and others selected several routes in Madrid, Spain, and divided them into 13 different sections according to their traffic conditions and geometric shapes. Eleven drivers were selected to drive cars and diesel vehicles, and the relevant parameters were recorded. Using cluster analysis, five parameters were selected to define the traffic condition of each single road section. Through comparative analysis, it was concluded that traffic conditions have a significant impact on vehicle fuel efficiency on large-capacity roads, while on small-capacity roads the geometry and alignment of the road are the primary factors affecting fuel consumption [7]. Based on fuel consumption test data collected by an on-board fuel consumption test system in 2013, Sufen Yuan and others obtained several driving conditions of the test section using multivariate statistical methods such as principal component analysis and cluster analysis. The various working conditions were introduced into GT-DRIVE, a fuel economy model of the whole vehicle was established, and the fuel economy of the vehicle was calculated through simulation analysis. The fuel consumption of the composite working condition and the cycle working condition was compared, and it was concluded that the simulation result of the composite working condition best reflects the fuel consumption of the actual vehicle [8]. In 2014, Ping Jiang predicted fuel consumption from the characteristic parameters of driving conditions, divided the fuel consumption and driving condition data collected on typical roads into a large number of driving segments, obtained principal components by principal component analysis, and predicted fuel consumption from the principal component scores with a BP neural network, finally obtaining a fuel consumption prediction model with high accuracy [9]. Based on real experimental data, in 2017 Ran Pang analyzed the fuel consumption of typical road sections, summarized the driving characteristics of different sections, and analyzed the reasons for the differences in fuel consumption of each section from the perspective of road congestion and road slope. Using regression analysis and other methods, mathematical models of fuel consumption under the conditions of accelerating uphill, accelerating on flat roads, decelerating downhill, ascending steep slopes, and accelerating downhill were established [10].
3 Factors Affecting Fuel Consumption
3.1 Definition of Road Congestion
Urban road congestion usually has temporal and spatial characteristics: it occurs within a relative period of time and a relative space and blocks the normal operation of urban traffic during transportation. Urban road congestion is defined differently from the perspectives of traffic flow, travelers, and economics; this paper mainly analyzes the definition from the perspective of traffic flow. Traffic flow mainly includes pedestrian flow and vehicle flow, and this paper mainly studies vehicle flow. The parameters describing traffic flow include traffic volume, average speed, traffic density, queue length, etc. Road congestion is the phenomenon in which vehicle speed and traffic volume change in opposite directions. Speed and density are the most basic characteristic values describing traffic flow, as shown in Fig. 1. In the initial stage, the traffic flow is 0 and the speed is at its maximum V_a. As traffic density increases, the influence between vehicles increases, which slows the speed and makes the traffic flow reach its maximum value Q. If the number of vehicles continues to increase, the speed and traffic flow decrease together, thus reducing the efficiency of road use. If the number of vehicles increases still further, the traffic flow is reduced to zero because of road congestion, and the road cannot fulfill its function normally [11].
Fig. 1. Speed--flow diagram
In this paper, the average speed is selected as the standard to judge whether the traffic is congested or not, and the intersection and straight road are unified as a whole for analysis. The traffic congestion criteria are shown in Table 1 [12].
Table 1. Average travel speed in different traffic status

Traffic status       | Average travel speed (km/h)
Free                 | 40–55
Mild congestion      | 30–40
Moderate congestion  | 20–30
Serious congestion   | ≤20
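As a trivial illustration (not from the paper), the Table 1 thresholds can be applied to an observed average travel speed in a short Python sketch; how speeds above 55 km/h or exactly on a boundary are handled is an assumption.

```python
def congestion_status(avg_speed_kmh):
    """Map an average travel speed (km/h) to the congestion status defined in Table 1."""
    if avg_speed_kmh > 40:
        return "free"
    if avg_speed_kmh > 30:
        return "mild congestion"
    if avg_speed_kmh > 20:
        return "moderate congestion"
    return "serious congestion"

print(congestion_status(35))   # -> mild congestion
```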
3.2 Influencing Factors of Fuel Consumption
Fig. 2. Fuel consumption per kilometer at a uniform speed
This paper focuses on the effects of speed and the traffic environment on fuel consumption. As can be seen from Fig. 2, in the curve of fuel consumption per kilometer at uniform speed, the relationship between vehicle speed and fuel consumption is quadratic: fuel consumption per kilometer is smallest when the vehicle speed lies near a certain middle value, and this particular speed is the economical speed of the car. Fuel consumption is high at both low and high speeds. The economical speed of small cars is about 30 km/h and that of large cars is about 50 km/h [13].
4 Fuel Consumption Model of Urban Road Congestion
4.1 Assumption of Driving Cycles
Vehicle driving can be divided into four driving cycles: uniform speed, acceleration, deceleration, and idling [14]. When the car runs at a uniform speed, the engine is stable and the additional resistance produced by the rotating parts is small; thus the transmission efficiency is high and the driving resistance is small. Therefore, uniform-speed driving produces little extra fuel consumption, and this paper ignores the influence of this part of the extra fuel consumption. In the process of acceleration, the output power of the engine increases, and in the process of deceleration, part of the energy is wasted through the engine and braking system, which increases the actual fuel consumption of the vehicle.
In the processes of acceleration and deceleration, the speed change and the initial speed affect the added fuel consumption: the greater the speed change, the longer the vehicle delay time and the higher the power consumed during acceleration and deceleration.
Fig. 3. Extra fuel consumption per kilometer in decelerated and accelerated cycles
The extra fuel consumption per kilometer of small vehicles during deceleration and acceleration is shown in Fig. 3 [15]. Each curve in the figure represents the extra fuel consumption per kilometer after decelerating from a certain initial speed to an arbitrary speed and finally accelerating back to the initial speed. The extra fuel consumption per kilometer is 17 ml when decelerating from 60 km/h to 0 km/h and accelerating back to 60 km/h. Idling is an operation often used by drivers in actual driving. For example, when the driver finds that there is a traffic jam ahead or that a red light will appear at the intersection ahead, he will put the car into neutral and enter the queue-waiting state. The fuel consumption of different types of vehicles is related to their own physical properties [16]. These driving cycles can be reflected by parameters commonly known as driving characteristic values. The specific standards selected in this paper are as follows [17] (Table 2).

Table 2. Driving characteristic values under typical driving cycles

Driving cycle | Characteristic values
Uniform       | |a| ≤ 0.1 m/s², v ≠ 0
Acceleration  | a > 0.1 m/s²
Deceleration  | a < −0.1 m/s²
Idle          | v = 0
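As an illustration (not from the paper), the Table 2 thresholds can be applied to a sampled speed trace to label the driving cycles; the 1 Hz sampling interval and the example trace below are assumptions.

```python
def classify_cycles(speeds_kmh, dt=1.0):
    """Label each interval as uniform / acceleration / deceleration / idle using the Table 2 thresholds."""
    labels = []
    for v0, v1 in zip(speeds_kmh[:-1], speeds_kmh[1:]):
        a = (v1 - v0) / 3.6 / dt          # acceleration in m/s^2
        if v0 == 0 and v1 == 0:
            labels.append("idle")
        elif a > 0.1:
            labels.append("acceleration")
        elif a < -0.1:
            labels.append("deceleration")
        else:
            labels.append("uniform")
    return labels

# Hypothetical 1 Hz speed trace (km/h)
trace = [0, 0, 5, 12, 20, 28, 30, 30, 29, 22, 10, 0, 0]
print(classify_cycles(trace))
```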
4.2 Fuel Consumption Model Under Various Cycles
4.2.1 Experimental Methods
The test vehicle has a 1.5 L, 110 hp, L4 naturally aspirated engine with an automatic transmission and runs on gasoline. The fuel consumption measuring instrument is an EP-2140, whose readings can be obtained from a control panel.
In the suburbs of Nanjing, a 4 km section of asphalt road with a longitudinal slope of 3% and no intersections is selected; it is straight, and both ends are easy to turn around on. The weather is sunny, the temperature is 22 °C, and the relative humidity is 62%. The test method for fuel consumption per kilometer at constant speed is the road cycle test method. Starting from 10 km/h and increasing by 5 km/h each time up to 90 km/h, ten speed points are selected. Each point is measured four times; if the error among the four groups of data is less than 5%, their average is taken as the fuel consumption at that uniform speed. Measurement of the extra fuel consumption under acceleration and deceleration: starting from the starting point with an initial speed of 50 km/h, 40 km/h, 30 km/h, or 20 km/h, the vehicle first decelerates to a certain speed and then accelerates back to the initial speed, reaching the initial speed just as it reaches the end point. Four groups of data are recorded for each initial speed, with the error among them controlled within 5%. The average of the four groups is taken and compared with the fuel consumption at the corresponding uniform speed to calculate the extra fuel consumption.
Fig. 4. Fitting curve of speed-fuel consumption per kilometer
Figure 4 shows the curve of fuel consumption per kilometer at different uniform speeds, calculated with the SPSS software. According to the curve, the quadratic regression model is obtained as

C = 188.56 − 3.57V + 0.032V²   (1)
where C is the fuel consumption per kilometer (ml/km) and V is the speed (km/h) at constant speed. The correlation coefficient of the model is 0.991, close to 1, and the significance level is 0.004, below 0.005, which indicates that the regression model fits well.

4.2.3 Fuel Consumption Model of the Deceleration and Acceleration Cycle
Figure 5 shows the variation of the additional fuel consumption per kilometer with speed when the initial speed is 50 km/h, 40 km/h, 30 km/h, and 20 km/h, respectively. This paper considers their relationship to be linear. According to the curve estimation, the following linear regression model is obtained:

ΔC = β₁ + β₂v   (2)
Fig. 5. Speed-extra fuel consumption curve
where ΔC is the additional fuel consumption per kilometer of the vehicle (ml/km), v is the speed to which the vehicle decelerates (km/h), and β₁, β₂ are the regression parameters.

Table 3. Significance and correlation test

Initial speed (km/h) | Correlation | Significance level | β₁    | β₂     | Regression model
50                   | 0.989       | 0                  | 15.91 | −0.313 | ΔC = 15.91 − 0.313v
40                   | 0.998       | 0                  | 13.2  | −0.325 | ΔC = 13.2 − 0.325v
30                   | 0.992       | 0.004              | 8.3   | −0.27  | ΔC = 8.3 − 0.27v
20                   | 1           | 0                  | 5     | −0.25  | ΔC = 5 − 0.25v
Table 3 shows the correlation and significance tests of the models. The correlation coefficients are close to 1 and the significance levels are less than 0.1, so the regression models are satisfactory.

4.2.4 Fuel Consumption at Idle Speed
When the vehicle is idling, its speed and the distance traveled are small, so per-kilometer fuel consumption is not easy to measure; the fuel consumption per unit time DC (ml/s) is therefore commonly used to measure idle fuel consumption. According to the relevant data [15], the idle fuel consumption per unit time is taken as 0.25 ml/s in this paper.
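The regression models above were fitted in SPSS; an equivalent least-squares fit could be sketched as follows in Python, where the data points are placeholders rather than the measured values.

```python
import numpy as np

# Placeholder measurements: (speed km/h, fuel consumption ml/km) at uniform speed (illustrative only)
v = np.array([10, 20, 30, 40, 50, 60, 70, 80, 90], dtype=float)
c = np.array([156, 130, 110, 98, 92, 90, 93, 99, 108], dtype=float)

# Quadratic fit with the form of Eq. (1): C = b0 + b1*V + b2*V^2
b2, b1, b0 = np.polyfit(v, c, deg=2)
print(f"C = {b0:.2f} + {b1:.3f}*V + {b2:.4f}*V^2")

# Linear fit with the form of Eq. (2) for one initial speed: dC = beta1 + beta2*v (illustrative data)
v_dec = np.array([0, 10, 20, 30, 40], dtype=float)   # speed decelerated to (km/h)
dc = np.array([16.0, 12.8, 9.7, 6.5, 3.4])           # extra fuel per km (ml/km)
beta2, beta1 = np.polyfit(v_dec, dc, deg=1)
print(f"dC = {beta1:.2f} + {beta2:.3f}*v")
```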
4.3 Time Proportion Model of Different Driving Cycles Under Different Congestion Conditions
Because the experimental conditions are limited, this paper obtains the proportions of time spent at uniform speed, decelerating and accelerating, and idling under different average speeds by consulting the relevant data [18]. The SPSS statistical software is used to analyze the relationships between average speed and the deceleration–acceleration time proportion and between average speed and the idle time proportion, and the corresponding time proportion models are established:

P1 = 1 − P2 − P3
P2 = 20.48 + 2.624u − 0.038u²
P3 = 110.696 − 5.265u + 0.064u²   (3)

where P1, P2, and P3 are the proportions of uniform-speed time, of deceleration and acceleration time, and of idle time in the total driving time, respectively, and u is the average speed. The correlation coefficients and significance levels of the above models are R2 = 0.775, Sig2 = 0.121 and R3 = 0.978, Sig3 = 0; the correlation coefficients are close to 1 and the significance levels are less than 0.5, indicating that the regression models are acceptable. The travel time T per 100 km is obtained from Eq. (4):

T = 100 ÷ u   (4)
According to the proportion of travel time at uniform speed, the uniform-speed distance L1 is obtained; the idle speed is 0 km/h, so the idle distance L3 is 0 km. The difference between the total distance of 100 km and the uniform-speed distance is the deceleration–acceleration distance L2:

L1 = P1 × T × u
L2 = 100 − L1   (5)
Based on the above speed–fuel consumption models and the time proportion model of each driving cycle, the fuel consumption model of road congestion is obtained for different average speeds:

TC = [L1 × C + L2 × (C + ΔC) + 3600 × T × P3 × DC] × 10⁻³   (6)

where TC is the total fuel consumption at a certain average speed (L/100 km), C is the fuel consumption per kilometer at that average speed (ml/km), and ΔC is the extra fuel consumption per kilometer under deceleration and acceleration conditions at that average speed (ml/km). For different average speeds, the extra fuel consumption model of road congestion is

ΔTC = TC − TC0   (7)

where ΔTC is the extra fuel consumption per 100 km (L/100 km) at an average speed, TC is the fuel consumption per 100 km (L/100 km) in one congested condition, and TC0 is the fuel consumption per 100 km (L/100 km) under the free condition.
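Putting Eqs. (1)–(7) together, the fuel consumption per 100 km at a given average speed can be computed as in the following sketch (Python; not part of the paper). The time proportions from Eq. (3) are treated as percentages, ΔC is taken from the Table 3 regression line whose initial speed equals the representative average speed with the minimum speed set to 0 km/h (as in Sect. 5.1), and the idle rate DC = 0.25 ml/s is used.

```python
def total_fuel_per_100km(u, v_min=0.0, dc_idle=0.25):
    """Eqs. (3)-(6): total fuel consumption (L/100 km) at average speed u (km/h) for the test vehicle."""
    # Time proportions of the driving cycles from Eq. (3), interpreted as percentages of travel time
    p2 = 20.48 + 2.624 * u - 0.038 * u ** 2            # deceleration + acceleration
    p3 = 110.696 - 5.265 * u + 0.064 * u ** 2          # idle
    p1 = 100.0 - p2 - p3                               # uniform speed
    t = 100.0 / u                                      # travel time per 100 km (h), Eq. (4)
    l1 = p1 / 100.0 * t * u                            # distance at uniform speed (km), Eq. (5)
    l2 = 100.0 - l1                                    # distance in decel/accel cycles (km)
    c = 188.56 - 3.57 * u + 0.032 * u ** 2             # uniform-speed fuel (ml/km), Eq. (1)
    # Extra fuel per km from the Table 3 line whose initial speed equals u, evaluated at v_min
    beta = {50: (15.91, -0.313), 40: (13.2, -0.325), 30: (8.3, -0.27), 20: (5.0, -0.25)}
    b1, b2 = beta[int(u)]
    dc = b1 + b2 * v_min
    # Total fuel consumption, Eq. (6): uniform part + decel/accel part + idle part, converted to litres
    return (l1 * c + l2 * (c + dc) + 3600.0 * t * (p3 / 100.0) * dc_idle) * 1e-3

free = total_fuel_per_100km(50)
for status, u in [("free", 50), ("mild", 40), ("moderate", 30), ("serious", 20)]:
    tc = total_fuel_per_100km(u)
    print(f"{status}: TC = {tc:.2f} L/100 km, extra = {tc - free:.2f} L/100 km")   # Eq. (7)
```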
5 Empirical Analysis
For quantitative analysis, the representative speeds under the free, mild, moderate, and serious congestion conditions are taken as 50 km/h, 40 km/h, 30 km/h, and 20 km/h, respectively. Substituting these representative speeds into formulas (3), (4), and (5) gives the time proportion and distance of each driving cycle under each congestion status. From Table 4, the time proportion of deceleration and acceleration is highest under moderate congestion, and the idle time proportion is highest under serious congestion.

Table 4. Time ratio of different driving conditions under different congestion status

Congestion status | u (km/h) | T (h) | P1 (%) | P2 (%) | P3 (%) | L1 (km) | L2 (km) | L3 (km)
Free              | 50       | 2     | 35.87  | 56.68  | 7.446  | 35.87   | 64.13   | 0
Mild              | 40       | 2.5   | 32.86  | 64.64  | 2.496  | 32.86   | 67.14   | 0
Moderate          | 30       | 3.33  | 24.65  | 65     | 10.346 | 24.65   | 75.35   | 0
Serious           | 20       | 5     | 11.24  | 57.76  | 30.996 | 11.24   | 88.76   | 0
5.1 Calculation Results and Analysis
Assuming that the minimum speed in each deceleration and acceleration cycle is 0 km/h, the data in the above table are substituted into formulas (6) and (7) to obtain the total fuel consumption and the extra fuel consumption under the different congestion statuses.

Table 5. Total fuel consumption and extra fuel consumption under different congestion status

Congestion status | TC (L) | ΔTC (L) | ROI (%)
Free              | 10.15  | 0.00    | 0.00
Mild              | 10.63  | 0.48    | 4.76
Moderate          | 11.94  | 1.79    | 17.62
Serious           | 14.73  | 4.58    | 45.09

ROI: rate of increase in fuel consumption
From Table 5, it can be concluded that the fuel consumption per hundred kilometers on urban roads increases with the degree of road congestion; the fuel consumption under serious congestion is the largest and increases greatly compared with moderate congestion. This is mainly because, under serious congestion, the proportion of
vehicle decelerating and accelerating cycles increases greatly, and the extra fuel consumption caused by vehicle decelerating and accelerating is higher than that under other driving cycles.
6 Conclusion
This paper reviews the research status of vehicle fuel consumption at home and abroad, analyzes the factors that influence fuel consumption under urban road congestion, and establishes a fuel consumption model of urban road congestion by building the fuel consumption model of each driving cycle and the time proportion model of each congestion status. Because the variables considered in this paper are limited, and because of the limitations of the experimental and data collection methods, the model and its assumptions should be improved in practical applications so as to raise the prediction accuracy of the model and help the government and transportation departments make decisions.
References 1. Statistical bulletin of national economic and social development in 2019. China National Bureau of Statistics (2019) 2. Zhu, P., Li, S., Zhang, L.: Analysis of urban land use in China. Land and Resources Information (2020) 3. Cao, G., Cheng, W., Xiao, H., Dong, Y.: Bicycle lanes and roads planning studies in small and medium-sized cities of Yunnan Province. Technol. Econ. Areas Commun. 04, 53–55 (2010) 4. Zaabar, I.: Effect of pavement condition on vehicle operating costs including fuel consumption, vehicle durability and damage to transported goods. Michigan State University (2010) 5. Delgado-Neira, O.F.: Driving cycle properties and their influence on fuel consumption and emissions. West Virginia University (2012) 6. Fiori, C.: The effect of electrified mobility on the relationship between traffic conditions and energy consumption. Transp. Res. Part D: Transp. Environ. (2019) 7. Boggio-Marzet, A.: Combined influence of traffic conditions, driving behavior, and type of road on fuel consumption. Real driving data from Madrid Area. Int. J. Sustain. Transp. (2021) 8. Pang, R.: The vehicle fuel consumption studies based on actual road working condition. Chongqing Jiaotong University (2017) 9. Jiang, P., Shi, Q.: Vehicle fuel consumption prediction based on driving cycle characteristics. Automot. Eng. 06, 643–647 (2014) 10. Yuan, S.: City driving cycles research and matching optimization of power train system. Wuhan University of Technology (2013) 11. Zhu, M.: Research on socio-economic impact of urban traffic congestion. Beijing Jiaotong University (2013) 12. Li, L.: A study on our urban traffic congestion cost estimates and countermeasures. Dalian Maritime University (2013) 13. Wang, X.: Study on the calculating model of consumption of highway passenger vehicles. Chang’an University (2007) 14. Kong, C.: Research on fuel economy analysis of the used passenger car. Jilin University (2007) 15. Xiang, Q.: Research on vehicle fuel consumption in the urban transportation system. Southeast University (2000)
16. Wang, Q.: Light commercial vehicles driving mode of economic research. Chang’an University (2011) 17. Kui, H., Wang, J.: Vehicle fuel consumption model based on urban road operation. J. Jilin Univ. (Eng. Technol. Ed.) (2009) 18. Zhang, K.: Statistical analysis of vehicle driving cycle test in six cities of China. Chin. J. Autom. Eng. (2005)
Relationship Between Adaptability and Career Choice Anxiety of Postgraduates Based on SPSS Data Analysis Xi Yang(B) and Youran Li Capital University of Economics and Business, Beijing, China
Abstract. In order to explore the relationship between the adaptability of postgraduates and career choice anxiety, this paper used SPSS software (Statistical Product and Service Solutions) as a medium, and adopted a random sampling method to conduct a survey of the Chinese university student adaptation scale and the career choice anxiety questionnaire of 569 postgraduates. Then, it organized and analyzed the obtained data in SPSS software. The results show that: (1) The overall level and various dimensions of career choice anxiety of postgraduates are in the middle level. (2) The scores of postgraduates who were not fresh graduates when they applied for a master’s degree are significantly higher than fresh graduates in the four dimensions: Worry about employment prospects, lack of employment support, lack of self-confidence, and employment competition pressure. (3) The scores of postgraduates who haven’t undertaken social work during the school period on the dimension of insufficient self-confidence are significantly higher than those of postgraduates who have undertaken social work. (4) The scores of postgraduates who have not participated in scientific research projects at school in the dimension of lack of career support are significantly higher than those of graduate students who have participated in scientific research projects. (5) There is a significant negative correlation between various dimensions of adaptability and various factors of career choice anxiety. (6) Satisfaction and career choice adjustment have a significant predictive effect on various factors of career anxiety. Among them, satisfaction has a greater predictive effect on employment prospects. Keywords: SPSS · Postgraduates · Adaptability · Career choice anxiety
1 Introduction
Career choice anxiety is a nervous, uneasy, strong, and lasting emotional experience that individuals (especially college students seeking their first employment) undergo when facing career choices, and it causes corresponding physiological and behavioral changes [1]. In recent years, with the continuous expansion of postgraduate education in China, the number of postgraduates has increased significantly, but the number of jobs that society can provide has not increased accordingly. The employment market is
generally in an "upside down" state of "high education and low employment". Coupled with the current spread of the global COVID-19 epidemic and the intensification of trade conflicts, employment prospects have become more uncertain. Under strong social pressure, postgraduates' career choice anxiety is serious, which affects not only their employment but also their physical and mental health [2, 3]. Previous studies on college students' career choice anxiety have mostly focused on college graduates and vocational school graduates; there are few studies on master's graduates and postgraduates still at school. Moreover, most research explores college students' career choice anxiety from the perspective of demographic variables; the variables involved are not comprehensive, and "social adaptation", a variable that is likely to affect college students' career anxiety, has rarely been examined. This study took master's students at school and graduating students as the research objects and used the adaptability of postgraduates as an independent variable to explore its influence on career choice anxiety. In designing the study, it was considered that the statistical work involves a large amount of data, complicated calculations, and graphing; modern data analysis can hardly be carried out without statistical software. After accurately understanding the principles of the various statistical methods, it is necessary to master the operation of statistical analysis software. Therefore, SPSS was chosen as the software tool for this study. SPSS has a friendly interface and powerful functions, is easy to learn and use, and contains almost all mainstream statistical analysis methods, complete data definition and management, an open data interface, and flexible and attractive statistical chart production. Based on SPSS, this paper collects, sorts, and analyzes the data in order to put forward reasonable suggestions for the development of postgraduates.
2 Research Method 2.1 Subjects In this study, the questionnaires were completed by random distribution of postgraduates from 5 universities in Beijing. A total of 569 questionnaires were distributed and a total of 569 valid questionnaires were returned. Among them, there were 231 boys and 338 girls. 2.2 Measuring Tool This study used the Chinese College Student Adaptation Scale and the Graduate Career Choice Anxiety Questionnaire to measure postgraduates. The Chinese College Student Adaptation Scale was compiled by Fang Xiaoyi and others. It has 60 items and is divided into 7 dimensions, namely, interpersonal relationship adjustment, learning adjustment, campus life adjustment, career choice adjustment, emotional adjustment, self-adjustment and satisfaction. The scale uses 5 levels of scoring, from 1 “disagree” to 5 “agree”, and is sorted and analyzed in the SPSS software. In the statistical analysis, the reverse questions are recoded. The higher the score, the better the adaptation to the status quo.
The Graduate Student Career Choice Anxiety Questionnaire was compiled by Zhang Yuzhu and others. It has 26 items and is divided into 4 dimensions, namely, employment competition pressure, lack of employment support, lack of self-confidence, and worry about employment prospects. The scale uses a 4-level score, starting from 1 “not at all” to 4 “very obvious”. The above scales all have good reliability and validity. 2.3 Procedure and Data Analysis All data were processed with SPSS15.0 statistical software.
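The scales described above use level scoring with reverse-worded items recoded before analysis (Sect. 2.2); a minimal sketch of that recoding step outside SPSS is shown below (Python/pandas). The column names and the list of reverse items are hypothetical placeholders, not the actual questionnaire items.

```python
import pandas as pd

# Hypothetical questionnaire data: items scored 1-5; column names are placeholders
df = pd.DataFrame({"item01": [1, 4, 5], "item02": [2, 2, 3], "item03": [5, 1, 4]})
reverse_items = ["item02"]          # hypothetical list of reverse-worded items

# Recode reverse items on a 5-point scale: 1<->5, 2<->4, 3 stays 3
df[reverse_items] = 6 - df[reverse_items]
print(df)
```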
3 Research Results
3.1 The Anxiety Status of Postgraduates' Career Choice
The Graduate Career Choice Anxiety Questionnaire uses 4-level scoring. From the SPSS processing results (see Table 1), the mean score for each item is about 1.99 points, so the overall score of postgraduates' career choice anxiety and the scores on each dimension are at a middle level.

Table 1. The anxiety status of postgraduates' career choice (x ± s)

                                  | Min | Max | Mean  | Std. Deviation
Worry about employment prospects | 5   | 20  | 10.74 | 2.86
Lack of employment support       | 8   | 32  | 16.27 | 4.51
Lack of self-confidence          | 6   | 24  | 11.73 | 3.39
Employment competition pressure  | 7   | 28  | 13.24 | 3.75
Total score                      | 26  | 104 | 52.00 | 12.98
3.2 Gender Differences in Career Choice Anxiety of Postgraduates
An independent-samples t-test on the gender difference in postgraduates' career choice anxiety was conducted in SPSS; the results are shown in Table 2. From Table 2, girls' scores on worry about employment prospects, lack of employment support, lack of self-confidence, and employment competition pressure were significantly higher than boys' scores (P < 0.05).

3.3 Major Differences of Postgraduates' Career Choice Anxiety
After the data were sorted and analyzed in SPSS, the results show that postgraduates majoring in liberal arts have significantly higher scores on the dimensions of worry about employment prospects, lack of employment support, lack of self-confidence, and employment competition pressure than postgraduates majoring in science and engineering (Table 3).
Table 2. Gender differences in career choice anxiety of postgraduates (x ± s)

                                  | Male         | Female       | t
Worry about employment prospects | 9.90 ± 2.99  | 10.88 ± 2.63 | −4.480**
Lack of employment support       | 15.68 ± 4.27 | 16.27 ± 4.27 | −1.713
Lack of self-confidence          | 10.86 ± 3.47 | 11.82 ± 3.19 | −3.705**
Employment competition pressure  | 12.59 ± 3.61 | 13.13 ± 3.52 | −1.916
Note: ** At the 0.01 level (two-tailed), the correlation is significant.

Table 3. Major differences of postgraduates' career choice anxiety (x ± s)

                                  | Liberal arts | Science & Engineering | t
Worry about employment prospects | 10.81 ± 2.70 | 10.18 ± 2.77          | 2.813**
Lack of employment support       | 16.39 ± 4.28 | 15.05 ± 4.07          | 3.867**
Lack of self-confidence          | 11.78 ± 3.23 | 11.00 ± 3.33          | 2.940**
Employment competition pressure  | 13.20 ± 3.47 | 12.23 ± 3.77          | 3.346**
Note: ** At the 0.01 level (two-tailed), the correlation is significant.
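The group comparisons in Tables 2 and 3 (and the following tables) are independent-samples t-tests computed in SPSS; a minimal sketch of an equivalent computation is shown below (Python/SciPy), using randomly generated illustrative scores rather than the survey data. Welch's correction is used here for simplicity, which is one of the options SPSS reports.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
# Illustrative "worry about employment prospects" scores for two groups (not the survey data)
male = rng.normal(loc=9.9, scale=3.0, size=231)
female = rng.normal(loc=10.9, scale=2.6, size=338)

t, p = stats.ttest_ind(male, female, equal_var=False)   # Welch's t-test
print(f"t = {t:.3f}, p = {p:.4f}")
```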
3.4 Grade Differences of Postgraduates' Career Choice Anxiety
Relying on SPSS, an independent-samples t-test was conducted on the grade differences in postgraduates' career choice anxiety. The results show that the scores of non-graduating postgraduates on the three dimensions of worry about employment prospects, lack of employment support, and employment competition pressure are significantly higher than those of the graduating grade (Table 4).

Table 4. Grade differences of postgraduates' career choice anxiety (x ± s)

                                  | Non-graduate grade | Graduate grade | t
Worry about employment prospects | 10.77 ± 2.73       | 10.13 ± 2.70   | 2.112**
Lack of employment support       | 16.20 ± 4.45       | 15.23 ± 3.49   | 0.682**
Lack of self-confidence          | 11.72 ± 3.25       | 11.35 ± 3.27   | 0.682
Employment competition pressure  | 13.18 ± 3.59       | 12.44 ± 3.28   | 3.548*
Note: ** At the 0.01 level (two-tailed), the correlation is significant; * At the 0.05 level (two-tailed), the correlation is significant.
3.5 The Influence of Whether They Were Fresh Graduates When Applying for a Master's Degree on Career Choice Anxiety
After the data were sorted and analyzed in SPSS, the results show that students who were previous (non-fresh) graduates when applying for the master's degree have significantly higher scores on the dimensions of worry about employment prospects, lack of employment support, lack of self-confidence, and employment competition pressure than fresh graduates (Table 5).

Table 5. The influence of whether they were fresh graduates when applying for a master's degree on career choice anxiety (x ± s)

                                  | Fresh graduates | Previous graduates | t
Worry about employment prospects | 10.58 ± 2.73    | 10.95 ± 2.70       | −2.083*
Lack of employment support       | 15.86 ± 4.32    | 16.77 ± 4.12       | −3.257**
Lack of self-confidence          | 11.49 ± 3.22    | 11.97 ± 3.30       | −2.244*
Employment competition pressure  | 12.78 ± 3.52    | 13.54 ± 3.54       | −3.280**
Note: ** At the 0.01 level (two-tailed), the correlation is significant; * At the 0.05 level (two-tailed), the correlation is significant.
3.6 The Influence of Whether the Major of the Master's Degree Is the First Choice on Career Choice Anxiety
After the data were sorted and analyzed in SPSS, the results show that the scores of postgraduates whose major is not their first choice on the dimensions of worry about employment prospects and lack of self-confidence are significantly higher than those of students whose major is their first choice (Table 6).

Table 6. The influence of whether the major of a master's degree is the first choice on career choice anxiety (x ± s)

                                  | First choice | Non-first choice | t
Worry about employment prospects | 10.62 ± 2.71 | 12.05 ± 2.61     | −4.075**
Lack of employment support       | 16.20 ± 4.25 | 15.76 ± 4.61     | 0.771
Lack of self-confidence          | 11.69 ± 3.31 | 11.00 ± 2.24     | 1.629*
Employment competition pressure  | 13.06 ± 3.56 | 12.67 ± 3.25     | 0.849
Note: ** At the 0.01 level (two-tailed), the correlation is significant; * At the 0.05 level (two-tailed), the correlation is significant.
3.7 The Influence of Whether Postgraduates Undertake Social Work During Their Studies on Career Choice Anxiety
After the data were sorted and analyzed in SPSS, the results show that the career choice anxiety of postgraduates who have not undertaken social work during the school period is significantly higher than that of postgraduates who have undertaken social work on the dimension of lack of self-confidence (Table 7).

Table 7. The influence of whether postgraduates undertake social work during their studies on career choice anxiety (x ± s)

                                  | Yes          | No           | t
Worry about employment prospects | 10.78 ± 2.73 | 10.64 ± 2.72 | 0.798
Lack of employment support       | 15.98 ± 4.12 | 16.32 ± 4.40 | −1.274
Lack of self-confidence          | 11.44 ± 3.27 | 11.82 ± 3.24 | −1.860*
Employment competition pressure  | 12.94 ± 3.59 | 13.10 ± 3.51 | −0.756
Note: * At the 0.05 level (two-tailed), the correlation is significant.
3.8 The Influence of Whether Graduate Students Have Participated in Scientific Research Projects During Their Studies on Career Choice Anxiety
After the data were sorted and analyzed in SPSS, the results show that the career choice anxiety of postgraduates who have not participated in scientific research projects during the school period is significantly higher than that of postgraduates who have participated in scientific research projects on the dimension of lack of employment support (Table 8).

Table 8. The influence of whether graduate students have participated in scientific research projects during their studies on career choice anxiety (x ± s)

                                  | Yes          | No           | t
Worry about employment prospects | 10.69 ± 2.73 | 10.72 ± 2.72 | −0.204
Lack of employment support       | 15.85 ± 4.13 | 16.50 ± 4.40 | −2.462**
Lack of self-confidence          | 11.50 ± 3.18 | 11.80 ± 3.34 | −1.546
Employment competition pressure  | 13.04 ± 3.49 | 13.03 ± 3.60 | 0.043
Note: ** At the 0.01 level (two-tailed), the correlation is significant.
3.9 A Correlation Analysis of the Adaptability of Postgraduates and Career Choice Anxiety
In order to investigate the influence of postgraduates' adaptability on their career choice anxiety, the scores on the 7 dimensions of adaptability and the 4 dimensions of career choice anxiety were analyzed in SPSS, as shown in Table 9.

Table 9. A correlation analysis of the adaptability of postgraduates and career choice anxiety (r)

                                       | Worry about employment prospects | Lack of employment support | Lack of self-confidence | Employment competition pressure
Interpersonal relationship adjustment | −0.197** | −0.208** | −0.315** | −0.268**
Learning adjustment                   | −0.166** | −0.194** | −0.241** | −0.254**
Campus life adjustment                | −0.229** | −0.151** | −0.250** | −0.181**
Career choice adjustment              | −0.267** | −0.255** | −0.372** | −0.341**
Emotional adjustment                  | −0.267** | −0.173** | −0.292** | −0.277**
Self-adjustment                       | −0.091** | −0.186** | −0.253** | −0.196**
Satisfaction                          | −0.384** | −0.307** | −0.400** | −0.347**
Note: ** At the 0.01 level (two-tailed), the correlation is significant.
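The entries in Table 9 are Pearson correlation coefficients. A minimal sketch of how such a correlation matrix could be produced outside SPSS is shown below (Python/pandas); the data are randomly generated placeholders, not the survey responses, and the column names are illustrative.

```python
import numpy as np
import pandas as pd

rng = np.random.default_rng(1)
# Illustrative dimension scores for 569 respondents (placeholders, not the survey data)
df = pd.DataFrame({
    "satisfaction": rng.normal(3.5, 0.6, 569),
    "career_choice_adjustment": rng.normal(3.4, 0.7, 569),
})
# A placeholder anxiety score constructed to correlate negatively with satisfaction
df["worry_employment_prospects"] = 12 - 1.5 * df["satisfaction"] + rng.normal(0, 2.0, 569)

print(df.corr(method="pearson").round(3))   # pairwise Pearson r, analogous to Table 9
```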
After the data were sorted and analyzed in SPSS, the results show that there is a significant negative correlation between the adaptability of postgraduates and the various factors of career choice anxiety, indicating that postgraduates with strong adaptability in all aspects have relatively low levels of career choice anxiety.

3.10 Regression Analysis on the Adaptability of Postgraduates and Career Choice Anxiety
In order to further investigate the impact of postgraduates' adaptability on career choice anxiety, the four dimensions of career choice anxiety were used as dependent variables and the seven dimensions of adaptability as independent variables. In SPSS, the stepwise regression method was used for multiple regression analysis; the results are shown in Table 10. With worry about employment prospects as the dependent variable, satisfaction, self-adjustment, career choice adjustment, and campus life adjustment are included in the regression equation.
Table 10. Regression analysis on career choice anxiety

Dependent variables               | Entering variables                    | R²    | Beta   | t
Worry about employment prospects | Satisfaction                          | 0.184 | −0.328 | −11.899**
                                  | Self-adjustment                       |       | −0.177 | −6.486**
                                  | Career choice adjustment              |       | 0.126  | 4.551**
                                  | Campus life adjustment                |       | −0.099 | −3.668**
Lack of employment support        | Satisfaction                          | 0.119 | −0.249 | −9.319**
                                  | Career choice adjustment              |       | −0.168 | −6.308**
Lack of self-confidence           | Satisfaction                          | 0.230 | −0.250 | −8.705**
                                  | Career choice adjustment              |       | −0.237 | −9.129**
                                  | Interpersonal relationship adjustment |       | −0.082 | −2.925**
                                  | Campus life adjustment                |       | −0.067 | −2.592**
Employment competition pressure   | Satisfaction                          | 0.179 | −0.230 | −8.101**
                                  | Career choice adjustment              |       | −0.239 | −9.148**
                                  | Learning adjustment                   |       | −0.069 | −2.480**
Note: ** At the 0.01 level (two-tailed), the correlation is significant.
The total prediction coefficient is 18.4%, and satisfaction has the greatest predictive effect. Taking lack of employment support as the dependent variable, the two dimensions of satisfaction and career choice adjustment enter the regression equation, with a total prediction coefficient of 11.9%; satisfaction is again the largest predictor. Taking lack of self-confidence as the dependent variable, the four dimensions of satisfaction, career choice adjustment, interpersonal relationship adjustment, and campus life adjustment enter the regression equation, with a total prediction coefficient of 23%; satisfaction predicts the most. Taking employment competition pressure as the dependent variable, the three dimensions of satisfaction, career choice adjustment, and learning adjustment enter the regression equation; the regression equation is significant, with a total prediction coefficient of 17.9%.
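The stepwise multiple regressions in Table 10 were run in SPSS. A simplified sketch of regressing one anxiety factor on several adaptability dimensions (ordinary least squares, without the stepwise selection step) is shown below using statsmodels; the data are placeholders, not the survey responses.

```python
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(2)
n = 569
# Placeholder predictors (adaptability dimensions) and outcome (worry about employment prospects)
X = np.column_stack([
    rng.normal(3.5, 0.6, n),    # satisfaction
    rng.normal(3.3, 0.7, n),    # self-adjustment
    rng.normal(3.4, 0.7, n),    # career choice adjustment
    rng.normal(3.6, 0.6, n),    # campus life adjustment
])
y = 14 - 1.2 * X[:, 0] - 0.5 * X[:, 2] + rng.normal(0, 2.0, n)

model = sm.OLS(y, sm.add_constant(X)).fit()
print(model.summary())          # coefficients, t-values and R^2 analogous to Table 10
```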
4 Conclusion and Suggestions
4.1 Conclusion
First, the results of the study show that postgraduates' career choice anxiety is at a moderate level. Career choice anxiety is a very common psychological problem encountered by postgraduates in the process of employment, and it undoubtedly hinders the full utilization of high-quality human resources. Previous research has shown that appropriate anxiety can improve work efficiency and learning efficiency, and
the relationship between anxiety and activity efficiency is an inverted U-shaped curve; that is, moderate-intensity anxiety achieves the best learning efficiency [4], but excessive anxiety adversely affects people's behavior, intelligence, personality, and so on, and hinders individual development [5, 6]. From the results of the research, although most postgraduates may feel uneasy, nervous, or distressed about certain matters in their lives and work, these feelings can often be relieved through effective emotional counseling, so they can analyze the current employment situation correctly and calmly and actively participate in competition. There are also some postgraduates with poor adaptability and a high degree of career choice anxiety, which will seriously affect their physical and mental health in the long run. Therefore, timely and targeted adaptation education and employment guidance for postgraduates are indispensable in colleges and universities. Second, girls' scores on the four dimensions of worry about employment prospects, lack of employment support, lack of self-confidence, and employment competition pressure were significantly higher than boys' scores, and liberal arts students' scores on all dimensions of career choice anxiety were significantly higher than those of science students; these results are consistent with previous studies [7]. On this basis, this paper conducted an independent-samples t-test on the grade difference in postgraduates' career choice anxiety. The study found that non-graduating postgraduates scored significantly higher than postgraduates in the year of graduation on three dimensions: worry about employment prospects, lack of employment support, and employment competition pressure. This may be due to the short length of postgraduate programs, which makes non-graduating students begin to consider employment issues earlier; in addition, they lack professional employment guidance and employment practice experience and therefore have a higher degree of anxiety. The scores of postgraduates who were not fresh graduates when they applied for the master's degree were significantly higher than those of fresh graduates on the four dimensions of worry about employment prospects, lack of employment support, lack of self-confidence, and employment competition pressure. This may be because, compared with fresh graduates, previous graduates face greater age, employment, and family pressures. Liu Ping's (2011) research shows that there are obvious age differences in salary levels and job matching for postgraduates of different ages: young postgraduates have much better employment access, interview opportunities, and promotion opportunities than older postgraduates [8]. In addition, family pressures follow with increasing age; former graduates often face many practical challenges during the school period, such as family financial problems and family health problems. Therefore, under the triple pressure of age, employment, and family, the career choice anxiety of former graduates at the postgraduate stage is more serious. This research also found that the scores of postgraduates who have not undertaken social work during the school period on the dimension of lack of self-confidence were significantly higher than those of postgraduates who have undertaken social work, and that the scores of postgraduates who have not participated in scientific research projects on the dimension of lack of employment support were significantly higher than those of postgraduates who have participated in scientific research projects. This is because having undertaken social work or participated in scientific research projects during the
school period can exercise postgraduates' social practice and communication skills and can also improve their psychological resistance to stress, which is beyond the reach of professional courses. Therefore, postgraduates who have undertaken social work or participated in scientific research projects during the school period tend to have stronger self-confidence and can assess themselves more clearly, which helps reduce anxiety. Last but not least, there is a significant negative correlation between the adaptability of postgraduates and the level of career choice anxiety. In the process of choosing a job, postgraduates with strong adaptability are often able to face the employment competition with a positive attitude and calmly deal with the various problems encountered in job selection, so their level of anxiety is low. Through further regression analysis, this study found that adaptability has a significant impact on the level of postgraduates' career choice anxiety: self-adjustment and career choice adjustment have a significant negative predictive effect on career choice anxiety; that is, the stronger the adaptability, the lower the level of career choice anxiety. In addition, satisfaction, a subjective psychological evaluation index used to measure the quality of an individual's relationships or state, has the greatest predictive effect on worry about employment prospects.

4.2 Suggestions
4.2.1 Enhance the Psychological Education for Postgraduates in Career Choice
Colleges and universities should strengthen the horizontal cooperation between the employment guidance department and the mental health education department. While improving postgraduates' employment skills and guiding them to establish correct career values, they should also instruct students in psychological adjustment skills to enhance their ability and psychological quality to withstand setbacks in the process of career choice. More attention should be paid to the psychological education of female postgraduates, non-graduating postgraduates, and postgraduates who were not fresh graduates when applying for the master's degree, so that these student groups can understand themselves more objectively and maintain a good attitude in the process of career choice. Daily mental health education for postgraduates should be strengthened to fundamentally improve their mental health at this stage, so that they respond to career choices positively and with a better attitude, prepare well in advance, and smoothly realize the transformation of social roles after employment.

4.2.2 Strengthen Professional Career Education for Postgraduates
Colleges and universities should pay attention to the construction of the curriculum system for postgraduate employment guidance and career education, research and explore effective and easily accepted career guidance methods for modern postgraduates, and guide postgraduates to make objective and reasonable self-evaluations, form effective self-recognition, and design and plan their careers rationally and effectively. In addition, special attention should be paid to improving postgraduates' skills in interpersonal communication, information exchange, organization and management, and application and innovation. It is necessary to combine employment guidance education and career
education with professional course theoretical study. This kind of full-course career education system and operation mode that intersects theory and practice should run through the entire postgraduate education stage. Colleges and universities should cooperate with employers to provide social practice opportunities for postgraduates, so that postgraduates can receive more workplace education in a comprehensive and direct manner, which will help them clarify their career positioning. In this way, postgraduates can not only gain relevant professional experience in advance during their stay at school, but also understand the current economic and social development and new employment situations, and effectively avoid blind employment.
4.2.3 Improve the Adaptability of Postgraduates
In the process of choosing a job, postgraduates with strong adaptability are often able to face the employment competition with a positive attitude and calmly deal with the various problems encountered in job selection, so their level of anxiety is low. Therefore, colleges and universities should actively guide postgraduates to participate in social practice activities and scientific research activities, improve their interpersonal skills, learning adaptability, and social practice capabilities, and guide postgraduates to formulate reasonable self-development goals and correct self-awareness. They should give practical, objective support and unconditional respect to postgraduates, understand and accept the subjective emotional experience of postgraduates, and guide them to make full use of the social support they receive, thereby reducing the level of their career choice anxiety.
Using the Information Platform System to Simulate the Application of Loco Therapy in the Intervention of Children with Autism
Yiming Sun(B)
Academy of Music, Anqing Normal University, Anqing, Anhui, China
Abstract. Autism is a developmental disorder in early childhood, involving perception, thinking, behavior and many other aspects. After a child is diagnosed with autism, timely early intervention can better promote the development of children’s abilities. Children with autism usually need lifelong help, and early intervention is an important way to reduce the impact of the disease. Data from relevant hospital information platform systems show that parents’ choices and needs for early intervention can reflect the shortcomings of China’s current early intervention service system. This article analyzes the insufficiency of intervention for children with autism, and simulates the application of music therapy (MT) in the intervention of children with autism with the information platform systems of major hospitals. This article conducted a survey and analysis of 330 people who participated in MT, compared and analyzed the satisfaction, utilization and acceptance of MT in the past three years, and discussed and analyzed the results. Suggestions on the application of MT in the intervention treatment of children with autism are put forward, which provides a guarantee for the development of intervention treatment for children with autism. Research on this issue is important for further development, problem solving, and adaptation of coping strategies for children with autism. Keywords: MT · Children with autism · Intervention therapy · Information platform system
1 Introduction
In recent decades, the incidence of autism has increased rapidly. According to data from the information platform systems of many Asian hospitals of Grade A and above, the incidence of autism in Asia is also on the rise. However, there is no clear answer to the cause of autism [1–3]. Therefore, the two core deficits targeted by autism intervention, namely restricted and repetitive behaviors, interests, or activities, and deficits in social interaction and social communication, have become a hot spot in current research. In the practice of special education schools over many years, researchers have found that the increasingly prominent problem of children with autism is "social interaction", which is manifested as emotional behavior disorder and inappropriate behavior [4, 5]. If this problem is not
resolved, it will be difficult to carry out other rehabilitation education for children with autism. On this basis, this article attempts to explore how to use MT to improve the intervention and treatment effects for autistic children, so that these children can better communicate with others in a comprehensive educational environment and in daily life. The objective is to explore whether music can provide a treatment model for children with autism and to provide comprehensive research on MT for children with autism [6–8]. In daily life, children with autism show many different symptoms, such as limited social, communication, and behavioral skills, and these symptoms can affect their daily activities. Studies have shown that music is beneficial to children with autism and is also believed to improve their symptoms [9, 10]. The current literature is mostly about how music can improve the mental state of children with autism and pays attention to the benefits of MT for these children. Based on the application of MT in the intervention of children with autism, the actual situation of MT intervention and treatment for these children is analyzed. The analysis shows that the application of MT in such interventions is still insufficient. This article uses the information platform systems of major hospitals to simulate the application of MT in the intervention of children with autism. In the study, the application of MT was optimized according to the actual situation during the intervention and treatment of children with autism; an effective combination of the two can improve the quality of care for children with autism. Through research and data analysis of autistic children's engagement and treatment, this article argues that MT can improve treatment for children with autism, support the formulation of more effective treatment plans, and achieve a win-win outcome for both medical staff and patients' families.
2 Autistic Children and MT
2.1 Autistic Children
Autism is a syndrome with serious problems in communication, behavior, and social interaction caused by abnormal neuropsychological function. Autistic children lack emotional response and show persistent, stereotyped repetition. For autistic children, crying, withdrawal, fear, self-mutilation, and similar behaviors are the emotional behaviors that seriously trouble them. Such emotional behavior not only harms their body and mind, but also brings difficulties to teachers' classroom teaching and management. Therefore, it is necessary to intervene in and treat autistic children. It is estimated that there are at least 650,000 autistic children in China. Some of them will be disabled for life and almost unable to take care of themselves; others may, as adults, violate the law because of behavioral and emotional barriers, bringing serious negative impacts on the family and society. At present, there is no specific drug for the disease, and because most of these children have communication impairments, persuasion and intervention through language and communication are very limited. Studies have found that these children are very interested in music, and some of them even have a better sense of music and sound discrimination than ordinary people.
2.2 MT
Famous psychologists hold two views on MT: first, MT can be used to intervene in mental illness; second, music can be used in treatment or rehabilitation. Under the guidance of therapists with professional training experience, an MT program provides various listening or participation experiences according to individual needs. So far, MT as a new discipline has not been clearly defined, but music has been a means of expression and communication since ancient times, and therapy is about helping and coping. Therefore, MT can also be said to be a medium of communication that uses music to promote individual physical and mental health. The first definition given by the MT organization in the United States is more comprehensive and clearer: MT is to restore, maintain, and enhance an individual's physical and mental health through music in a therapeutic environment, so as to bring about necessary changes in individual behavior. This change enables individuals to better understand themselves and their living environment after treatment, so as to better adapt to society.
3 Investigation and Analysis on the Effect of MT in Intervention Treatment of Autistic Children
Statistical results show that MT has played an important role in the intervention of autistic children in China. Compared with traditional therapies, MT is strongly targeted and has a marked effect; it can supplement or replace traditional approaches in interventions for children with autism and broaden the interventions available to them. In the research and analysis, this paper uses two survey methods, namely a questionnaire and a comparative study. 330 people involved in MT were selected as the survey sample. In the process of the investigation, this paper found that both the medical staff and the patients' families recognized the role of MT and were satisfied with it. In this survey, all the subjects were divided into two groups by category: one group was medical staff, and the other group was family members of patients. The comprehensive situation of intervention and treatment for children with autism was analyzed, and the results are shown in Table 1. According to the research results of the two groups, the satisfaction level with MT in the treatment and care of children with autism reached more than 90%. The development of MT has enriched the diversified structure of intervention and treatment for autistic children and plays an important role in their intervention and treatment.
4 Discussion
4.1 Current Situation of Autism Intervention in China
Intervention modes for autistic children in China mainly include institutional intervention services, family intervention services, community intervention services, and hospital intervention services. At present, community intervention services are relatively weak, and intervention institutions are the main places for autistic children to receive early intervention services.
Table 1. The role of MT in the intervention of autistic children

Investigation item                      | Medical staff (%) | Patients and their families (%)
It has a promoting effect               | 93                | 91
It has little effect                    | 6                 | 7
It has no effect                        | 1                 | 2
Hope to increase the promotion of MT    | 95                | 94
(1) Organization type and distribution
Domestic intervention institutions can be divided into two types: private intervention institutions and public intervention institutions. The intervention of autistic children in China began in the early 1980s. For various reasons, most autism institutions are private, especially parent-centered institutions established by parents of autistic children. Since the Eleventh Five-Year Plan, autism rehabilitation and education institutions have been set up, and at present 31 provinces and cities across the country have established designated service agencies of the Disabled Persons' Federation.
(2) Teacher and institution level
The effect of early intervention largely depends on the level of the teachers and of the intervention institutions. The large number of autistic children places higher requirements on the intervention effect. At present, there is a lack of autism intervention personnel in China. According to relevant reports, the existing autism intervention teachers can only cover about 1.5% of the children in need; the remaining 98.5% cannot get effective intervention training, and there are only 35,000 special education professionals. Many autism intervention personnel lack a special education background; they are mainly preschool teachers, and relatively few come from special education or medicine-related majors. There is a lack of professional teachers in intervention institutions in remote or poor areas.
(3) Institutional intervention costs
In recent years, the government has paid more and more attention to the intervention of autism, issued many relevant policies, and implemented some relief projects. During the Eleventh Five-Year Plan period, the central government set up a special subsidy fund to implement the Colorful Dream action plan of the national rehabilitation assistance project for disabled children, providing subsidies for the rehabilitation of poor disabled children. According to the document, there are many subsidy and relief policies for the intervention of autistic children. The Ministry of Human Resources and Social Security, the Health and Family Planning Commission, and other departments jointly issued the notice on bringing some new medical rehabilitation projects into the payment scope of basic medical insurance, requiring 22 rehabilitation projects such as "comprehensive rehabilitation assessment" to be included in the payment scope of basic medical insurance, including the "autism diagnosis interview" for suspected autistic children under the age of 6, which marks the first time that autism diagnosis has been included in the scope of medical insurance.
[Figure 1 is a grouped bar chart for 2017, 2018, and 2019; the x-axis lists the statistical items (satisfaction, utilization rate, acceptance) and the y-axis shows the statistical value in percent.]
Fig. 1. Comparative analysis of satisfaction, utilization rate and acceptance of MT in recent three years
Figure 1 shows a comparative analysis of MT satisfaction, utilization, and acceptance over the past three years. It is clear that the 2019 data are the highest overall. In addition, this paper further analyzes the role of MT in intervention therapy for children with autism, and the results are shown in Fig. 2. As can be seen from Fig. 2, MT outperforms the traditional treatment mode: the traditional mode is less targeted and less consistent in form and performance, while after MT was introduced the quality and value of treatment both improved. Research and analysis show that, in treating and caring for children with autism, the benefits of MT outweigh its disadvantages, and promoting MT is very important for achieving better outcomes in the care and rehabilitation of children with autism.
4.2 Suggestions on the Application of MT
(1) Music therapists should improve their music literacy and master more MT skills. Although MT does not emphasize the musical ability of the therapist, if therapists with a music background join in, MT can integrate more methods and techniques, and more strategies can be applied to patients according to their wishes. At the same time, MT has many schools of technique; if therapists can integrate their advantages more effectively, MT will play an even more important role.
(2) MT should be combined with medicine and education. After consulting a large number of studies, we found related research on the application of MT in education and in medicine, and the results describe their respective effectiveness and advantages, but there has been no attempt to combine the two. It is suggested to combine the two approaches in experimental studies in order to obtain better therapeutic effects.
[Figure 2 is a grouped bar chart comparing the traditional model and music therapy; the x-axis lists the statistical items (pertinence, communication, effectiveness) and the y-axis shows the statistical value in percent.]
Fig. 2. Comparative investigation on comprehensive performance of traditional treatment mode and MT
(3) In this kind of study, the larger the sample size, the clearer the differences in specific emotional behaviors. If conditions permit, the sample size can be increased so that the research results are more convincing. At the same time, the negative emotional behaviors examined in this study are too narrow in scope; the same MT strategy could be used to intervene in different types of problematic emotional behavior.
(4) Conduct joint research with peer schools to verify social validity. Social validity is highly valued in the field of behavior research and determines the generalizability of a study. Therefore, the researchers suggest cooperating with other special education schools to verify the effectiveness of this work. Researchers from the provincial education department provide standard MT equipment for provincial schools. In the future, this study intends to contact MT teachers from other special education schools in the province, recommend the MT techniques, training procedures, and research procedures used in this study, and ask them to conduct MT training in practice for autistic children with emotional and behavioral problems, so as to test the social validity of this study.
5 Conclusions
In this application research, which uses the information platform systems of major hospitals to simulate MT in the intervention of children with autism, this article focuses on the application of MT in such interventions. The research shows that MT is an important part of the intervention and treatment of children with autism. Through the survey and analysis of people participating in MT, their satisfaction with MT has been obtained. In recent years, MT has developed rapidly, overcoming many treatment problems, and it is progressing both at home and abroad. The data analysis shows that MT improves interventions and treatments for children with autism, improves the coordination of treatment, and can broaden the interventions available to them. According to the results of this study, in order to make full use of MT in intervention treatment, MT must be combined with the actual conditions and pathological characteristics of children with autism, with attention to scientific introduction and reasonable arrangements, so that a reasonable intervention treatment plan can be formulated and the healthy development of intervention treatment for children with autism can be ensured. This study yielded good results and contributes to the application of MT in the early treatment of children with autism.
Acknowledgments. Key project of humanities and social sciences of the Anhui Education Department: using music game activities to carry out rehabilitation training research on children with autism (Project No: SK2017A0338).
References
1. Paul, S., Ramsey, D.: MT in physical medicine and rehabilitation. Aust. Occup. Ther. J. 47(3), 111–118 (2015)
2. Grasso, M.C., Button, B.M., Allison, D.J., Sawyer, S.M.: Benefits of MT as an adjunct to chest physiotherapy in infants and toddlers with cystic fibrosis. Pediatr. Pulmonol. 29(5), 371–381 (2015)
3. Diamond, J., Cropper, K., Godsal, J.: The useless therapist: MT and dramatherapy with traumatised children. Therapeutic Commun. 37(1), 12–17 (2016)
4. Mastnak, W.: Perinatal MT and antenatal music classes: principles, mechanisms and benefits. J. Perinatal Educ. 25(3), 184–192 (2016)
5. Gómez-Romero, M., Jiménez-Palomares, M., Rodríguez-Mansilla, J., Flores-Nieto, A., Garrido-Ardila, E.M., González-López-Arza, M.V.: Benefits of MT on behaviour disorders in subjects diagnosed with dementia: a systematic review. Neurología 32(4), 253–263 (2016)
6. Main, P.A.E., Thomas, P., Angley, M.T., Young, R., Esterman, A., King, C.E.: Lack of evidence for genomic instability in autistic children as measured by the cytokinesis-block micronucleus cytome assay. Autism Res. 8(1), 94–104 (2015)
7. Ma, Y., Li, Y., Ma, L., Cao, C., Liu, X.: Anesthesia for stem cell transplantation in autistic children: a prospective, randomized, double-blind comparison of propofol and etomidate following sevoflurane inhalation. Exp. Ther. Med. 9(3), 1035–1039 (2015)
8. Saad, K., Abdel-Rahman, A.A., Elserogy, Y.M., Al-Atram, A.A., Ali, A.M.: Vitamin D status in autism spectrum disorders and the efficacy of vitamin D supplementation in autistic children. Nutr. Neurosci. 19(8), 346–351 (2016)
9. Yun, G., Yasong, D.U., Huilin, L.I., Xiyan, Z., Yu, A.N., Bai-Lin, W.U.: Parenting stress and affective symptoms in parents of autistic children. Sci. China Life Sci. 58(10), 1036–1043 (2015). https://doi.org/10.1007/s11427-012-4293-z
10. Machado, C., Estévez, M., Leisman, G., Melillo, R., Rodríguez, R., Defina, P.: QEEG spectral and coherence assessment of autistic children in three different experimental conditions. J. Autism Dev. Disord. 45(2), 406–424 (2015). https://doi.org/10.1007/s10803-013-1909-5
Design and Implementation of College Student Information Management System Based on Web
Yue Yu(B)
Liaoning Jianzhu Vocational College, Liaoyang, Liaoning, China
Abstract. With the deepening of China’s higher education reform, the informatization of college student (CS) management is no longer a new thing. At present, in the face of the increasing complexity of higher vocational college management, the traditional way of data statistics and text recording can not meet the needs of the current situation, the introduction of information technology, automation, intelligent scientific management means and methods is the general trend. The student information management not only lays a solid foundation for improving the teaching work, but also provides a good foundation for the future development of students in information security and other aspects. This paper mainly studies the design and implementation of CS information management system (IMS) based on Web. In this paper, the information system structure framework is designed, using the B/S three-tier architecture, the design of user interface layer, business logic layer and data access layer. Then it analyzes the function of the information system, including the corresponding functional requirements of the system administrator, student management personnel and students. This paper also designed the function module, clearly constructed the hierarchical structure of the system and the business function module. In this paper, the cluster analysis method is used to evaluate the students’ performance comprehensively, which provides a reliable basis for the evaluation of students and gives a scientific comprehensive ranking of students. This paper puts the designed IMS into operation test, analyzes the proportion of its module use and explores its loopholes and defects, and shows them intuitively through charts. The experimental results show that in the operation of information management module, the highest proportion is the basic information module, accounting for 23.85%, followed by user management module, accounting for 19.72%, student curriculum module, accounting for 18.79%, student achievement module, accounting for 18.46%, physical health module, accounting for 5.33%, scholarship management module, accounting for 3.47%, tuition module, accounting for 10.38%. Keywords: Student management · Information system · B/S three-layer structure · System design
1 Introduction
With the continuous improvement of the level of information technology, the management system of colleges and universities in our country is constantly adjusted and
changed. It is found that there are several problems in the current IMSs used in colleges and universities [1, 2]. First, the student IMSs of colleges and universities adopt a distributed computer system: each functional department is decentralized and sets up its own management procedures to manage its own data. This management mode cannot guarantee the consistency of data; when every department has its own special demands, the whole system cannot be maintained comprehensively, data sharing and integration cannot be achieved, and management is complex and inefficient [3, 4]. Secondly, the existing student IMSs in colleges and universities have imperfect data-updating functions and a low degree of management automation, which leads to inefficient student information management [5, 6]. In addition, a considerable part of the work in colleges and universities has not yet been moved to networked, information-based management and still relies on traditional manual operation, which brings many difficulties to the educational administration staff, and work efficiency and quality cannot be guaranteed [7, 8]. To sum up, the application cases and theoretical research on student IMSs in colleges and universities are not yet deep and rich enough. Through the practice of this paper, we hope to make new contributions to the theoretical research and application cases of relevant content for colleges and universities, provide a practical reference for information management, and address many of the common problems in the existing IMSs of colleges and universities in China [9, 10].
In the research on the design of university student IMSs, many scholars at home and abroad have made achievements. Alameri and others pointed out that, with the continuous expansion of the scale of university development and the continuous promotion of university informatization, the development of university student IMSs has become urgent. The student IMS is an indispensable management tool of the modern education system, and its content can have a far-reaching impact on the development of the school [11]. Yang P and others pointed out that the university student IMS has become more and more popular at the university Internet application level; it is a classic university Internet IMS and a very important part of current university student management work. Student management is an important part of daily business in colleges and universities; through student management, we can grasp students' ideological trends and provide a guarantee for the cultivation of students' comprehensive quality [12].
This paper mainly studies the design and implementation of a Web-based CS IMS. The information system structure framework is designed using the B/S three-tier architecture, covering the user interface layer, the business logic layer, and the data access layer. The functions of the information system are then analyzed, including the functional requirements of the system administrator, student management personnel, and students. The function modules are also designed, and the hierarchical structure of the system and the business function modules are clearly constructed. The cluster analysis method is used to evaluate students' performance comprehensively, which provides a reliable basis for the evaluation of students and gives a scientific comprehensive ranking of students.
This paper uses the chart analysis method: the designed IMS is put into operational testing, the test data are presented intuitively, the usage proportion of each module is analyzed, and its loopholes and defects are explored.
2 The IMS of CS
2.1 Information System Architecture Design
The three-tier B/S architecture is applied in the system. The top layer completes the process of user interaction, that is, the client based on a Web browser; its main functions include rule verification, information description, and echo. The second layer is the business logic layer, whose main function is to realize all kinds of business logic, such as data query and transfer, information management, and so on. The third layer is the data access layer, whose function is to operate the database directly and to add, delete, modify, search, and update data.
(1) User interface layer
The user interface layer mainly shows the user the interface of the system; it is the interface between the information system and the user and supports interaction. In this layer, users access the system through a Web browser. Because the user interface directly affects the user's first impression of the system, the pages should be attractive, friendly, easy to operate, and simple and clean. Because users with different identities see different landing pages after logging in, the design must be based on the function design and requirements research done in the early stage of the system. The user roles of the system are mainly students and teaching staff, including teaching assistants, grade directors, and system managers; different user types correspond to different roles, and different roles have different operation interfaces. Among them, students, teachers, and department directors mainly operate their corresponding businesses through the foreground interface, while administrators mainly manage relevant information and system settings through the background interface.
(2) Business logic layer
The business logic layer is the core layer. As the key link between the layers above and below it, it mainly focuses on business rule making, business process implementation, and other system design related to business requirements. It connects the adjacent layers and realizes the various functions of the system through business logic processing. This layer mainly handles the user's requests and can query and call the database when necessary. Major functions in this system, such as topic management, office work, and background information management, are realized by this layer.
(3) Data access layer
The function of the data access layer is to access the database and to read and transfer data. The business logic layer processes the business logic; if it needs data, it passes the request to the data layer, and the data layer receives the request and responds to it through operations on the database. The processing results are then returned to the business layer, and the final results appear at the user interface layer after the corresponding business processing in the business layer. Because the system involves a variety of functions, database access and operations are frequent, so designing an independent layer dedicated to processing data
will be of great help in improving the performance of the system. In order to improve access efficiency, the system designs a unified data access interface. For complex calculations over large amounts of data, the database system can process efficiently by means of stored procedures.
2.2 Function Analysis of Information System
This system is designed according to the actual needs of CS management and to meet the needs of resource sharing. Its content is to manage and maintain all kinds of information about students in school, and its purpose is to make student information management scientific and standardized. The functional requirements of the system include the corresponding functional requirements of system administrators, student managers, and students. In terms of permissions, students have the lowest permissions and system administrators have the highest; administrators can modify system properties and set the functions of other users. The system identifies three types of users: students, student managers, and system administrators. First, the system administrator has the highest authority and is responsible for the maintenance and management of the server; such users can add users and directly access the database. Second, student management personnel can access the part of the database within their own department and authority. Third, students can access the system for simple queries.
2.3 Function Module Design
(1) Student basic information management
The main function of the student basic information management module is to input students' basic information completely and to query information by various keywords at any time. Every year after the freshmen enter the university, the staff of the college sort out the student information of the college and input it into the student IMS. Once the information is entered into the system, students have no right to modify their own information; if a student finds that his or her information is wrong or has changed, he or she can contact the staff of the college to modify it.
(2) Student curriculum management
The main function of the student course management module is to manage students' course information in a unified way. As with the basic information module, the key content of this module is the query function: users should be able to query students' course information quickly and in real time. In the student course management module, the course selection system is a very common and indispensable part. After students log in to the course selection system, it gives them a list of courses they can choose, and students can query the detailed information of each course through the list. Generally speaking, students cannot choose non-professional courses until they have taken enough professional courses, which helps improve their comprehensive quality. When students complete their selection, they can submit the course selection to the server; once it is submitted successfully, the system will modify the students' data accordingly.
(3) Performance management
The content of the score management module mainly covers score entry, addition, deletion, and modification. After each examination, the teacher inputs the scores of each student into the system. Student users can log in to the system in the browser and query their corresponding course scores. The authority is assigned as follows: the management staff can modify students' course scores and use the relevant printing functions, teachers can view the course scores of all the students in their own classes, and students can only view their own course scores with their own account and password.
(4) Performance management of physical health test
The main function of the physical health test score management module is to input students' physical health test scores. Nowadays, the physique of CSs is getting weaker and weaker, so school education must once again promote a healthy lifestyle, actively carry out physical exercise activities, and strive to improve students' physique. The physical health test results management module can record students' physical test results at any time, so that schools, teachers, students, and parents can intuitively understand students' current physical condition and students are encouraged to participate in exercise more actively.
(5) Scholarship management
The main functions of the scholarship management module are to collect students' family information and voting information and to publish the list of scholarship recipients. In order to encourage CSs to study more diligently, almost all colleges and universities have established their own scholarship mechanisms. A candidate can be recommended by the class or the college, then voted on by the students in the class, and finally approved by the teachers or management personnel to confirm the final scholarship list.
(6) Tuition management
The authority allocation of the tuition management module is as follows: the staff of the financial office can add, delete, modify, query, and compile statistics on the tuition information; teachers and students can only query the tuition information and cannot modify it.
(7) User management
The system adopts the B/S mode, so that the school leadership office, the academic affairs office, the student office, college teaching staff, student management staff, and other departments can all use it. In order to ensure information security, user access to some modules is limited according to the division of responsibilities between departments. Only the system administrator can assign user permissions and query, modify, and delete user data. Users can only modify their own password in the system through the background; they cannot assign other permissions to themselves, nor can they change their system roles. Administrators cannot change a user's password, but they can assign permissions to the user. The security of the system is thus guaranteed, and user access is restricted by the strict distinction between users.
2.4 IMS Algorithm
Student achievement is the most important part of the student IMS, an important part of teaching quality evaluation, and an important measure of whether students master the
professional knowledge. In this paper, the cluster analysis method is used to comprehensively evaluate students' performance, which provides a reliable basis for the evaluation of students, objectively analyzes the advantages and disadvantages of students' performance in the various disciplines, excavates the individual abilities behind their performance in each discipline, and provides a scientific reference and basis for improving teaching methods in the future.
Suppose the students are to be divided into t classes according to the score data of m students in n courses, and the score of the i-th student in the j-th course is x_{ij}. Then the average score of the j-th course is

\bar{X}_j = \frac{1}{m} \sum_{i=1}^{m} x_{ij}    (1)

The sample range is

R_j = \max_{1 \le i \le m} x_{ij} - \min_{1 \le i \le m} x_{ij}    (2)

The standardized result is

x'_{ij} = \frac{x_{ij} - \bar{X}_j}{R_j}    (3)
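For illustration only, the following minimal Java sketch (not part of the original system; the array layout and method name are assumptions made here) computes formulas (1)–(3) for a score matrix scores[i][j], where i indexes students and j indexes courses:

```java
/**
 * Minimal sketch of the score standardization in formulas (1)-(3).
 * scores[i][j] is assumed to hold the score of student i in course j.
 */
public final class ScoreStandardizer {

    public static double[][] standardize(double[][] scores) {
        int m = scores.length;       // number of students
        int n = scores[0].length;    // number of courses
        double[][] standardized = new double[m][n];

        for (int j = 0; j < n; j++) {
            double sum = 0.0;
            double max = Double.NEGATIVE_INFINITY;
            double min = Double.POSITIVE_INFINITY;
            for (int i = 0; i < m; i++) {
                sum += scores[i][j];
                max = Math.max(max, scores[i][j]);
                min = Math.min(min, scores[i][j]);
            }
            double mean = sum / m;      // formula (1): average score of course j
            double range = max - min;   // formula (2): sample range of course j

            for (int i = 0; i < m; i++) {
                // formula (3): standardized score; guard against a zero range
                standardized[i][j] = range == 0.0 ? 0.0 : (scores[i][j] - mean) / range;
            }
        }
        return standardized;
    }
}
```

The clustering step itself (for example, k-means applied to the standardized matrix) is not shown here; the standardization above only prepares the data so that courses with different score ranges contribute comparably.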
3 Experimental Study
3.1 Subjects
The university student IMS is the focus of this paper. In order to study and understand the university student management system, this paper designs and implements a Web-based university student IMS. The system is put into an experimental operation test to examine the performance of the IMS modules.
3.2 Experimental Process Steps
This paper mainly studies the design and implementation of a Web-based CS IMS. The information system structure framework is designed using the B/S three-tier architecture, covering the user interface layer, the business logic layer, and the data access layer. The system functions are then analyzed and the function modules designed, clearly constructing the hierarchical structure of the system and each business function module. A comprehensive evaluation of students' performance is also given by cluster analysis, together with a scientific comprehensive ranking of students. Finally, the designed IMS is put into an operation test, the usage proportion of each module is analyzed, and its loopholes are explored.
4 Experimental Research and Analysis of University Student IMS
4.1 Proportion Analysis of Information Management Module
According to the design idea of the IMS architecture, the hierarchical structure of the system and each business function module are constructed. The functional structure division is mainly designed according to the system functional requirements and the principle of convenience and efficiency. In order to study the proportion of each module function in the overall IMS, this paper puts the IMS into operational testing, collects and sorts out the usage proportion of each module, and the results are shown in Table 1.

Table 1. Analysis of the proportion of information management module

Module                  Proportion (%)
Essential information   23.85
Student courses         18.79
Student achievement     18.46
Physical health         5.33
Scholarship             3.47
Tuition                 10.38
User management         19.72

Fig. 1. Analysis of the proportion of information management module
As can be seen from Fig. 1, in the operation of the information management module, the highest proportion is the basic information module, accounting for 23.85%, followed by the user management module, accounting for 19.72%, the student curriculum module, accounting for 18.79%, the student achievement module, accounting for 18.46%, the physical health module, accounting for 5.33%, the scholarship management module, accounting for 3.47%, and the tuition module, accounting for 10.38%.
4.2 Test and Analysis of IMS Module
After the design and implementation of the IMS framework, in order to study and understand the university student management system, this paper puts the Web-based university student IMS into an experimental operation test and tests the performance and defects of the IMS modules. The number of vulnerabilities found in each module is shown in Table 2.
Table 2. Analysis of module vulnerability testing in IMS

Module                  Number of defects
Essential information   4
Student courses         2
Student achievement     1
Physical health         0
Scholarship             1
Tuition                 0
User management         3

Fig. 2. Analysis of module vulnerability testing in IMS
As can be seen from Fig. 2, among the test results, the basic information management module has the most defects, followed by the user management module. There are two defects in the student course module, one each in the student achievement module and the scholarship module, and none in the physical health module and the tuition module.
5 Conclusions
The CS management information system helps the student management department to comprehensively, accurately, and promptly grasp the situation of each student in the whole process from entering school to leaving school, realize the collection, collation, and feedback of each student's information, and produce timely statistics and summaries of each student's basic information, so as to provide the data required for student management and a basis for its decision-making. Therefore, once the development of the CS management information system project is completed, it will be put into operation directly, so as to solve the past problems of inconsistent data and information, untimely statistics, low efficiency, and chaotic information management in student management.
References
1. Wang, S., Tang, Q.: Construction of guiding system for growth and development of college students under the student-oriented concept. Asian Agric. Res. 10(05), 93–95 (2018)
2. Alkhaldi, A.N., Al-Sa'Di, A.: Gender differences in user satisfaction of mobile touch screen interfaces: university students' service sites. Int. J. Innov. Technol. Manag. 15(6), 1950003.1–1950003.21 (2018)
3. Leeder, C., Shah, C.: Collaborative information seeking in student group projects. Aslib J. Inf. Manag. 68(5), 526–544 (2016)
4. Rajmane, S.S., Mathpati, S.R., Dawle, J.K.: Digitalization of management system for college and student information. Res. J. Sci. Technol. 8(4), 179–184 (2016)
5. Rinehart, J.B., Lee, T.C., Kaneshiro, K.: Perioperative blood ordering optimization process using information from an anesthesia information management system. Transfusion 56(4), 938–945 (2016)
6. Weissmann, J., Mueller, A., Messinger, D.: Improving the quality of outpatient diabetes care using an information management system: results from the observational vision study. J. Diabetes Sci. Technol. 10(1), 14–17 (2016)
7. Lu, X.Y.: Development of an excel-based laboratory information management system for improving workflow efficiencies in early ADME screening. Bioanalysis 8(2), 99–110 (2016)
8. Liu, X., Zhu, Y., Ge, Y.: A secure medical information management system for wireless body area networks. KSII Trans. Internet Inf. Syst. 10(1), 221–237 (2016)
9. Yang, S., Yang, Y., Gao, T.X.: Exploration of ward medication orders audit mode based on clinical pharmacists information management system. Pharm. Care Res. 17(1), 54–57 (2017)
10. Bandla, S., Galimberti, D., Kopp, K.: Oncomine knowledgebase reporter: an information management system to link published evidence with cancer gene variants detected by multivariate tests. J. Clin. Oncol. 34(15), 20667 (2016)
11. Alameri, I.A., Radchenko, G.: Development of student information management system based on cloud computing platform. J. Appl. Comput. Sci. Math. 11(2), 9–14 (2017)
12. Yang, P., Sun, G., He, J.: A student information management system based on fingerprint identification and data security transmission. J. Electr. Comput. Eng. 2, 1–6 (2017)
Design and Development of Distance Education System Based on Computing System
Xiaoxiao Wei(B), Jie Su, and Lingyi Yin
HaoJing College of Shaanxi University of Science and Technology, Xi'an, Shaanxi, China
Abstract. Distance education is a new form of education that has been evolving in the field of contemporary education. It has developed vigorously under the promotion of Internet education platforms. Distance education also promotes the cooperation and exchange of traditional teaching across different regions; it does not conflict with traditional teaching in function, nor does it create commercial competition, and it is a good complementary mechanism for education. The purpose of this paper is to design and implement a distance education platform, so that learners can flexibly arrange their time for education according to their own situation and improve their learning efficiency. This paper analyzes the problems in the development process from the perspective of the practical application of software, describes and evaluates the specific requirements of the three roles of students, teachers, and managers, and makes corresponding improvements. While realizing the basic functions of the distance education platform, a new evaluation system based on a test paper generation algorithm is proposed and applied to the system to realize the function of online random tests. Based on the traditional J2EE architecture, the Web-based B/S model is adopted, and functional modules that take students, teachers, and administrators as the main roles are realized. Finally, after all the functions of the distance education platform were realized, its system functions and software performance were checked and tested. It is found that when the system carries out information transmission for no more than 400 users, the response time of the system is short, usually less than 0.5 s, which indicates that the platform has good performance and meets the teaching and management requirements of distance education.
Keywords: Computing system · Distance education · Paper formation algorithm · B/S mode
1 Introduction
The great progress of computer information technology and network communication has brought great changes and challenges to many industries in today's society, and education is also facing opportunities and challenges of reform [1, 2]. With the development of computer information technology and the popularization of its application, network distance education came into being. Distance learning has become the beneficiary of this method, which greatly saves time and energy [3, 4]. In general, the distance education
system provides function modules with a clear division of labor, realizes the separation of teaching and learning within the system, and realizes a diversified mode of teaching interaction. It is a new form of expanding the teaching scale of a school, with students' autonomous learning as the main body and an interactive Q&A and discussion environment between teachers and students [5, 6]. On the design of distance education systems, many scholars have carried out research in different forms. For example, some scholars have developed a distance education system based on the B/S architecture, using the J2EE development platform, the Java programming language, and the SQL Server database as development tools; through virtual laboratory technology, the teaching quality of distance education engineering courses in colleges and universities can be improved, and teachers and students can be served better [7]. Some scholars have also added an eye-location algorithm to the distance education system, which provides the basis for eye location in the system's monitoring module, so that the learning state can be determined from how long the eyes are located within a certain period of time [8]. The main task of this paper is to analyze the characteristics of the system and the specific needs of students, teachers, and administrators. Then a detailed design is made for each functional module, with emphasis on the test paper generation algorithm, and the specific implementation and development platform are chosen according to the different requirements of the functional modules. Finally, a function check and performance evaluation of the system are carried out, and the existing vulnerabilities are corrected.
2 Analysis of Distance Education System
2.1 Demand Analysis of Distance Education System
(1) Student User Needs
In personal information management, users can modify, save, and confirm their name, gender, email, mobile phone, telephone, and so on. In homework management, users can submit assignments and delete submitted assignments. In course management, users can view their own courses and download learning materials. For personal station letters, users can view, edit, and delete station letters. For online examinations, students can take examinations during the learning process or teachers can arrange the corresponding examinations. For notification management, users can view notifications sent by administrator or teacher users. In the performance management function, users can view their own test results. Student role users can add, delete, modify, and query information within the scope of their own roles. The system provides a good interactive environment: students can contact each other and learn through the built-in personal letters and e-mails, and through the online examination they can know their knowledge level in time; the results can also serve as a reference for improving teachers' teaching in the future.
(2) Teacher User Needs
After logging in to the system, teachers can perform related operations, including modifying personal basic information, homework management, course management, personal station letters, online examinations, notices and announcements, and score management.
(3) User Management Function Requirements
For ordinary users of the distance education system, the user management requirement is based on the login information provided by the system, that is, the user account and password; they can log in to the system, modify their personal user information, and exit the system by themselves [9, 10]. For administrator users of the online examination system, the user management requirements are that, in addition to the needs of ordinary users, they should also have the functions of searching, adding, modifying, and deleting ordinary users and setting their permissions. In order to ensure the security of the online examination system and prevent illegal or malicious users from damaging the system through misoperation or malicious operation, different users are given different access rights.
(4) Functional Requirements Analysis of Distance Education System
The development and design of the distance education system need to meet both functional needs and environmental needs. The functional requirement is that students can carry out unimpeded autonomous learning at any time and any place according to their own needs, and administrators can manage student information within their authority, such as course selection information, examination information, homework management, and basic information modification; in addition, the system should have examination and navigation functions and provide as many multimedia elements as possible on the premise of a friendly interface and a good user experience. Environmental requirements refer to the development environment of the system. Based on the author's selection, the server can use the Windows 2000 operating system and the IIS 5.0 web server, and the database can be SQL Server 2000/2005. The client should at least have a browser version above IE 6.0 [11].
2.2 Design of Distance Education System
(1) System Functional Structure Design
The distance education system developed in this paper consists of several modules, including a login module, a management function module, a student function module, and a teacher function module.
(2) Design of Student User Function
In modifying personal basic information, students can add, delete, modify, and check their own information to ensure that it is timely and accurate. In homework information management, the homework management function for the student role mainly allows students to add, delete, modify, and check the assignments they have submitted. In curriculum management, student role users can perform a series of operations on their own timetable in the course management module after logging in. In station letter management, students can use the system to send messages to relevant personnel. In online testing, student role users can perform self-tests after logging in or take tests of related subjects at the request of the teacher.
(3) Design of Teacher User Function
For modifying personal basic information, teachers can add, delete, and query their own information to keep it timely and accurate. For homework management, the teacher user can manage the homework of each course after logging in to the system. For curriculum management, teachers can create their own courses after logging in so that role users can add courses that have not yet been added, and administrators can also assign the courses created by teachers to student role users.
(4) Functional Design of Management Personnel
The management personnel have the highest authority in the system. In the design, all function modules should be connected with the functions of the management personnel, including the curriculum, teacher, student, and examination modules. In the management process, the main functions are adding and deleting; the modification function is not provided for class, course, examination, and student name. Teacher management includes two function modules: adding teachers and deleting teachers.
2.3 Design of Test Paper Generation Algorithm
Difficulty Modeling of Test Paper Generation Algorithm. According to the normal probability distribution, the difficulty distribution can be divided into five levels: very easy, easy, medium, difficult, and very difficult. The difficulty of a generated paper is modeled as:

Y_H = \sum_{k=1}^{n-1} X_k + \frac{X_N}{2}   (1)

The estimation formula of the difficulty level of the whole paper is:

Z = \sum_{k=1}^{n} X_k \times Y_n   (2)

The formula of discrimination is:

D_i = H_i - P_i   (3)
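To make the arithmetic of Eqs. (1)-(3) concrete, the following is a minimal Python sketch. It rests on assumed readings of the symbols, since the paper does not define them explicitly: X_k is taken as the difficulty level of question k, Y_k as the score weight of question k (so Eq. (2) is read as a score-weighted sum), and H_i and P_i as the pass rates of the high- and low-scoring groups on item i.

def paper_difficulty(x):
    # Eq. (1): sum of the first n-1 question difficulties plus half of the last one
    return sum(x[:-1]) + x[-1] / 2.0

def paper_difficulty_estimate(x, y):
    # Eq. (2), read here as a score-weighted sum over the questions
    return sum(xk * yk for xk, yk in zip(x, y))

def discrimination(h_i, p_i):
    # Eq. (3): difference between high-group and low-group pass rates for one item
    return h_i - p_i

# Hypothetical example data, not taken from the paper
difficulties = [0.2, 0.4, 0.6, 0.8, 0.5]
scores = [10, 10, 20, 30, 30]
print(paper_difficulty(difficulties))
print(paper_difficulty_estimate(difficulties, scores))
print(discrimination(0.9, 0.4))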
3 The Realization of Distance Education System
3.1 Development Environment of Distance Education System
Software environment: on the server side, the software environment consists of the operating system (Windows 7), the Java virtual machine (JDK 1.7), the application server (Tomcat 7.0), and the database management software. Hardware environment: the database server needs one 2.7 GHz processor, 6 GB of memory, and a 500 GB SATA hard disk. The client needs more than 2 GB of memory and more than 100 GB of hard disk space.
3.2 Implementation Steps
First, the main interface of the distance education system is implemented, including the page framework, links, and navigation. Then the system function modules are realized, including registration and login, user management, the student function module, and the test paper generation module.
3.3 Function Module Implementation
(1) Realization of the Homepage of Distance Education System
1) Page framework. The main page of the system includes three parts: layout, navigation, and links. The layout depends on the frame structure of the page, which is a common approach in web design, and the page can be sliced with Photoshop or the classic "three musketeers" web design tools.
2) Links. The links are divided into three types: correlation links, annotation links, and structural links.
3) Navigation. In the module navigation area, when users log in with different identities, different navigation columns are displayed, so that users can clearly see what they can do; links are set to the corresponding content pages, which facilitates the user's operations during use.
(2) Realization of Registration and Login Functions
1) Registration function. After the user submits the registration information, the registration form is submitted to the application. The submitted data are then preliminarily checked by the relevant software to determine whether the content is correct and whether the format is standard. If there is any error or non-standard entry, the form is returned to the user for correction; if everything is correct, the page gives the user a confirmation message.
2) Login function. Students log in with their own account number and password. After logging in, they can do online homework and communicate with teachers online.
(3) Implementation of User Management Module
Candidates enter the student interface of the online examination system as students and can modify their login password, answer test questions, and perform other operations. Teachers or managers enter the management interface of the online examination system as administrators to perform relevant operations, such as setting online test parameters, managing test questions, querying scores, and managing system users. The system administrator can manage and maintain the information of students and teachers, that is, add, query, modify, and delete it.
(4) Realization of Student Function Module
The administrator can enter or select the student number, name, and examination room number to query, and then click "view personal scores" to see an individual student's test results; the scores of the corresponding student in the result table are presented in the corresponding report.
(5) Realization of the Test Paper Generation Module
After logging in to the system, a student role user can take the online examination according to the teacher's requirements; the page displays the question information and the time control.
4 System Test
4.1 Software Testing Steps
The specific operation of black box testing is to determine the test effect by entering correct values, checking whether the output values are correct, and judging whether the test results are consistent with expectations. The test steps are as follows: making the test plan, establishing the test environment, designing test cases, executing the tests, and regression testing.
4.2 Test Results
(1) System Function Test
The system has many functions; in this paper, the key function modules are selected for test case design. During testing, equivalence partitioning and boundary value analysis are used. The test sample selected here is the resource management test. The test results are shown in Table 1.
Table 1. Resource management test case table

Serial number | Test case content | Anticipated result | Meets expectations
1 | Perform resource management functions | Display resource management interface | Yes
2 | Operate knowledge management page | Show resource fill-in page | Yes
3 | Input knowledge and submit | Preservation of learning resources | Yes
(2) System Performance Test
According to the requirements of network deployment, this test uses a LAN test and application platform. The specific test environment is shown in Table 2.

Table 2. Server performance environment

Equipment model | HP DL380 G7
Number of server CPUs | CPU × 2
Server CPU model | E5645 2.4 GHz (6-core chip)
Hard disk space | Enterprise-class SATA hard disk, 500 GB × 4
Software environment | ACCESS
Software server | IIS 6.0 (extended support)
The professional software testing tool Mercury LoadRunner 12.0 is used to measure and test the system parameters. Before the test, concurrent requests are constructed by the software so that the system can record data such as delay and response time for each parameter. The detailed data are shown in the system performance test results in Fig. 1.
Fig. 1. System performance test results (response time, system delay, and packet loss rate versus concurrent number)
From the above test data, we can see that in the current application environment, when the number of concurrent requests does not exceed 400, the response time of the system remains low, usually less than 0.5 s, and no packet loss occurs. This shows that the system gives fast feedback to user operations without losing packets. In school applications, the number of concurrent users of this software is generally no more than 400; therefore, when the
software is applied in schools, even if a certain delay or packet loss occurs, it will not have a negative impact on the user's operation. The hits-per-second metric directly reflects the number of requests users send to the server per second, which makes it convenient to estimate the user load on the server and helps to judge whether the server can bear the corresponding load.
Fig. 2. Click number trend chart (hits per second versus elapsed project time)
It can be seen from Fig. 2 that normal operation corresponds to about 40 hits per second, with a maximum of only about 72, while this test reaches about 80 hits per second. Therefore, the hits-per-second performance fully meets the requirements of the system.
5 Conclusion
This paper uses SQL and ASP.NET technology, combined with HTML, CSS, and other web technologies, to design and implement a computing-based distance education system. After deployment, the system runs stably and all functions work normally, which can meet the needs of users. It has practical value for the educational administration of colleges and universities and will have a certain impact on educational administration in the future. Although the system has run normally since the functional test, due to limited time and ability the distance education system still has defects; for example, the use of the Oracle database management system has not been considered.
References 1. Omeroglu, E., Kelesoglu, F.: Investigation of the academic motivations of the students studying through distance education system in terms of some variables/a case study of Sakarya University. Int. J. Sci. Res. and Manag. 8(1), 607–615 (2020) 2. Khan, R.: The need for digital library resources in the distance education system in India. Int. J. Res. Libr. Sci. 6(1), 93 (2020) 3. Egorova, T.M., Belukhina, N.N., Akhmedzyanova, T.S.: Methodology and methods of training children with disabilities in an inclusive distance education system. Open Educ. 22(6), 4–13 (2019) 4. Lazarus, D.: Student self-responsibility in the indonesian distance education system. JAS-PT (Jurnal Analisis Sistem Pendidikan Tinggi Indonesia) 1(2), 69 (2018) 5. Naghavi, S., Moradi, S.: The usage of electronic resources in the Iran distance education system. Acquisitions librarian 30(2), 106–108 (2018) 6. Hao, W., Wang, Y., Fan, Z., et al.: Research and implementation of composite paper generation algorithm for distance education system. Information and communication, (011), 27–28 (2017) 7. Ahmed, W., Parveen, Q., Dahar, M.A.: Role of learning management system in distance education: a case study of virtual university of Pakistan. Sir Syed J. Educ. Soc. Res. (SJESR) 4(1), 119–125 (2021) 8. Bazarbaevna, B.S.: Effective use of electronic and distance learning to increase the number of students in the higher education system by correspondence course. ACADEMICIA Int. Multidisc. Res. J. 10(6), 359 (2020) 9. Yang, X.: Research on construction of on-line teaching evaluation index system of modern long distance education. Vocat. Educ. Res. (006), 16–19 (2018) 10. Song, W., Wang, L., Ranjan, R., et al.: Towards modeling large-scale data flows in a multidatacenter computing system with petri net. IEEE Syst. J. 9(2), 1–11 (2017) 11. Vostokin, V.S.: The templet parallel computing system: specification, implementation, applications. Procedia Eng. 201, 684–689 (2017)
Frequency Domain Minimum Power Undistorted Beamforming Algorithm Based on Full Matrix Acquisition Zhihao Wang(B) , Ying Luo, and Zhaofei Chu Jiangsu University, Zhenjiang, Jiangsu, China [email protected]
Abstract. In order to solve the problem that the performance of the minimum power distortionless response (MPDR) beamforming algorithm is degraded by inaccurate autocorrelation matrix information when imaging multiple damages, a frequency-domain minimum power undistorted beamforming algorithm based on full matrix acquisition was proposed. Taking advantage of the linear independence of the data collected by the full matrix, the information content of the autocorrelation matrix is increased, and the mechanism of interference suppression in imaging is analyzed theoretically. Finally, an experimental platform was set up to carry out a double-damage imaging experiment, and the experimental results demonstrated that the MPDR algorithm based on the full matrix acquisition detection scheme performs well in the case of double damage. Compared with the frequency-domain full-focus and superimposed MPDR imaging methods, this method can effectively improve the imaging resolution and suppress the influence of side lobes, and it has practical application significance. Keywords: Full matrix acquisition · Autocorrelation matrix · MPDR · Damage imaging
1 Introduction
Ultrasonic phased array detection technology can realize directional focusing and scanning of the ultrasonic beam by applying a different delay time to each element, and it is a fast and effective ultrasonic detection method. Adaptive beamforming has obvious advantages over traditional methods because it can dynamically adjust the weights according to the echo data received by the array. The MPDR beamforming algorithm provides a distortionless response in the look direction, does not need prior information, and can suppress interference, which makes it especially suitable for ultrasonic imaging [1]. For the detection of a single damage, the MPDR algorithm can effectively improve the resolution of the damage area and suppress the artifacts caused by side lobes. However, when there are multiple defects in the detection area, the characteristic information of the autocorrelation matrix is inaccurate due to the lack of information, which degrades the performance of the MPDR algorithm.
To solve this problem, Engholm et al. [2-5] adopted spatial smoothing to reduce the coherence between sources and increase the information content of the autocorrelation matrix. However, this method sacrifices array aperture, and because it was originally designed for the far-field environment, the imaging quality decreases for arrays with few elements and for damage in the near-field region. There are also non-spatial-smoothing algorithms, mainly including linear constraints imposed on the coherent interference [6], projection preprocessing transformations of the received data [7, 8], and split-polarity transformation [9]. Although these methods do not sacrifice array aperture and can decorrelate coherent sources to ensure the accuracy of the autocorrelation matrix characteristic information, they require the spatial location of each coherent source to be estimated in advance, which makes the implementation complex. This paper reconstructs the autocorrelation matrix from the data of the full matrix and increases its information content by taking advantage of the linear independence among the data of each group of the full matrix, so as to improve the effectiveness of the algorithm in the case of multiple damages.
2 Theory Introduction and Algorithm Optimization
2.1 MPDR Weight Calculation Based on Full Matrix Acquisition
In the sensor array, the elements are excited one by one, and the array collects data in turn. This form of multi-element transmission and multi-element reception is called full matrix acquisition. According to the literature [10, 11], the frequency domain vector of the signal collected by the full matrix can be expressed as:

X_i(\omega) = \sum_{d=1}^{D} \sum_{m} S_d(\omega) v_{m,i}(\omega, z_d) + N_i(\omega)   (1)
Where D is the number of spatial scattering points and m is the Lamb propagation mode. S_d(\omega) = R_{dm} H_m(\omega) T(\omega), where R_{dm} represents the scattering coefficient of scattering point d for the Lamb wave of mode m, H_m(\omega) represents the excitation coefficient of the ultrasonic Lamb wave when the angular frequency of the excitation transducer is \omega and the mode is m, T(\omega) represents the frequency domain form of the excitation signal, and N_i(\omega) is the frequency domain noise at angular frequency \omega. i = 1, 2, \cdots, M is the sequence number of the excitation array element. v is the steering vector, which can be expressed as:

v_{m,i}(\omega, z_d) = \left[ \frac{1}{\sqrt{r_{di} r_{d1}}} e^{-jk_m(\omega)(r_{di}+r_{d1})}, \ \frac{1}{\sqrt{r_{di} r_{d2}}} e^{-jk_m(\omega)(r_{di}+r_{d2})}, \ \cdots, \ \frac{1}{\sqrt{r_{di} r_{dM}}} e^{-jk_m(\omega)(r_{di}+r_{dM})} \right]^{T}   (2)
Substituting Eq. 2 into Eq. 1, we can get Eq. 3:

X_i(\omega) = \sum_{d=1}^{D} \sum_{m} \frac{S_{d,m}(\omega) e^{-jk_m(\omega) r_{di}}}{\sqrt{r_{di}}} \hat{v}_m(\omega, z_d) + N_i(\omega)   (3)
The steering vector is simplified as:

\hat{v}_m(\omega, z_d) = \left[ \frac{1}{\sqrt{r_{d1}}} e^{-jk_m(\omega) r_{d1}}, \ \frac{1}{\sqrt{r_{d2}}} e^{-jk_m(\omega) r_{d2}}, \ \cdots, \ \frac{1}{\sqrt{r_{dM}}} e^{-jk_m(\omega) r_{dM}} \right]^{T}   (4)
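As a concrete illustration of Eqs. (2) and (4), the following Python/NumPy sketch builds the simplified steering vector \hat{v}_m(\omega, z_d) from element and scatterer coordinates. The geometry, the constant phase velocity, and the wavenumber function are placeholders assumed only for this example; they are not values given in the paper.

import numpy as np

def simplified_steering_vector(omega, k_m, elem_pos, scatter_pos):
    # Eq. (4): per-element phase delay and 1/sqrt(r) amplitude decay
    # from the scattering point to each receiving element
    r = np.linalg.norm(elem_pos - scatter_pos, axis=1)      # distances r_d1..r_dM
    return np.exp(-1j * k_m(omega) * r) / np.sqrt(r)        # length-M complex vector

# Hypothetical geometry: 7 elements spaced 8 mm on the x-axis, scatterer at (50, 100) mm
elements = np.stack([np.arange(-24e-3, 25e-3, 8e-3), np.zeros(7)], axis=1)
scatterer = np.array([50e-3, 100e-3])
k_m = lambda w: w / 5400.0          # assumed constant phase velocity of 5400 m/s
v_hat = simplified_steering_vector(2 * np.pi * 100e3, k_m, elements, scatterer)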
The beam synthesis output of the frequency domain signal is expressed as:

Y(z, \omega) = \sum_{i=1}^{M} w_i^{H}(\omega, z) X_i(\omega)   (5)

w_i = [w_{i1}^{*}, w_{i2}^{*}, \cdots, w_{iM}^{*}]^{H} represents the weight vector of the ith group of frequency domain signals, which can be calculated by the MPDR algorithm. The formula for calculating the MPDR weight under full matrix acquisition is as follows:

w_i(\omega, z) = \frac{R_x^{-1} v_{m,i}(\omega, z)}{v_{m,i}^{H}(\omega, z) R_x^{-1} v_{m,i}(\omega, z)}   (6)
In this paper, the calculation formula of R_x is:

R_x = \sum_{i=1}^{M} X_i(\omega) X_i^{H}(\omega) + \sigma^2 I   (7)

Where \sigma^2 is a diagonal loading factor. In order to reduce the error in the matrix inversion and retain the characteristic information of the original autocorrelation matrix, the value of \sigma^2 follows the literature [11, 12] and is calculated as:

\sigma^2 = \frac{\mathrm{std}(\mathrm{diag}(R_x)) + \mathrm{trace}(R_x)/M}{2}   (8)
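The following NumPy sketch shows one way Eqs. (6)-(8) could be evaluated for a single frequency bin. It is a minimal illustration, not the authors' implementation, and it assumes that X is an M × M matrix whose columns are the full-matrix frequency-domain snapshots X_i(\omega).

import numpy as np

def mpdr_weight(X, v):
    # X: (M, M) matrix with columns X_i(omega); v: (M,) steering vector at the focal point
    M = X.shape[1]
    R = X @ X.conj().T                                               # Eq. (7) without loading
    sigma2 = (np.std(np.diag(R).real) + np.trace(R).real / M) / 2.0  # Eq. (8)
    R = R + sigma2 * np.eye(M)                                       # diagonal loading
    Rinv_v = np.linalg.solve(R, v)
    return Rinv_v / (v.conj() @ Rinv_v)                              # Eq. (6)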
2.2 Algorithm Optimization
As can be seen from Eq. 5, at the same frequency \omega and the same position z, each group of full matrix data needs its own weight w_i(\omega, z) and steering vector v_{m,i}(\omega, z_d), which consumes a lot of computing resources. In order to simplify the weight calculation, the following deduction can be made:

w_i(\omega, z) = \frac{R_x^{-1} v_{m,i}(\omega, z)}{v_{m,i}^{H}(\omega, z) R_x^{-1} v_{m,i}(\omega, z)}
= \frac{\frac{e^{-jk_m(\omega) z_{di}}}{\sqrt{z_{di}}} R_x^{-1} \hat{v}_m(\omega, z)}{\frac{e^{jk_m(\omega) z_{di}}}{\sqrt{z_{di}}} \frac{e^{-jk_m(\omega) z_{di}}}{\sqrt{z_{di}}} \hat{v}_m^{H}(\omega, z) R_x^{-1} \hat{v}_m(\omega, z)}
= \sqrt{z_{di}} \, e^{-jk_m(\omega) z_{di}} \frac{R_x^{-1} \hat{v}_m(\omega, z)}{\hat{v}_m^{H}(\omega, z) R_x^{-1} \hat{v}_m(\omega, z)}
= \sqrt{z_{di}} \, e^{-jk_m(\omega) z_{di}} \hat{w}(\omega, z)   (9)

Through the above derivation, we can get:

w_i(\omega, z) = \sqrt{z_{di}} \, e^{-jk_m(\omega) z_{di}} \hat{w}(\omega, z)   (10)
We can see that, for the different groups of frequency domain signals, there is no need to recalculate the weight w_i(\omega, z); only \hat{w}(\omega, z) needs to be computed. On this basis, Eq. 5 can be optimized as follows:

Y(\omega, z) = \sum_{i=1}^{M} w_i^{H}(\omega, z) X_i(\omega)
= \sum_{i=1}^{M} \sqrt{z_{di}} \, e^{jk_m(\omega) z_{di}} \hat{w}^{H}(\omega, z) X_i(\omega)
= \hat{w}^{H}(\omega, z) \sum_{i=1}^{M} \sqrt{z_{di}} \, e^{jk_m(\omega) z_{di}} X_i(\omega)   (11)
According to Eq. 11, the beam synthesis output can be calculated by first summing the weighted frequency domain signals from the different groups and then multiplying the result by the calculated weight \hat{w}^{H}(\omega, z).
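A small NumPy sketch of the optimization in Eqs. (9)-(11) is given below, reusing the illustrative mpdr_weight function from the sketch above. The distances z_di, the wavenumber k, and the data layout are assumptions made for the example and are not taken from the paper.

import numpy as np

def beam_synthesis_optimized(X, v_hat, z_di, k, mpdr_weight):
    # Eq. (11): combine the M transmit groups with scalar phase factors,
    # then apply a single MPDR weight vector computed from v_hat
    phase = np.sqrt(z_di) * np.exp(1j * k * z_di)        # sqrt(z_di) * e^{jk z_di} per group
    x_combined = X @ phase                               # sum over i of factor_i * X_i(omega)
    w_hat = mpdr_weight(X, v_hat)                        # single weight from Eq. (6)
    return np.vdot(w_hat, x_combined)                    # w_hat^H times the combined signal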
3 Analysis of Imaging Algorithm
3.1 Imaging Process of MPDR Algorithm Based on Full Matrix Acquisition
The imaging index can be obtained from the beam synthesis value of Eq. 5:

I(z) = \left. \frac{1}{2\pi} \int_{0}^{+\infty} Y(\omega, z) e^{j\omega t} \, d\omega \right|_{t=0}   (12)

Then the overall process of the MPDR algorithm under full matrix acquisition can be summarized as in Fig. 1.
Fig. 1. MPDR imaging flow chart under detection scheme of full matrix acquisition
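To make the flow in Fig. 1 and Eq. (12) concrete, the following hedged Python sketch scans a grid of pixels, computes a weight per pixel and per frequency bin, and evaluates the transform at t = 0, which reduces to a sum over frequency bins. The helper names simplified_steering_vector and mpdr_weight refer to the illustrative sketches above; the whole routine describes one possible implementation under those assumptions, not the authors' code.

import numpy as np

def mpdr_image(X_freq, omegas, pixels, elements, k_m, z_tx):
    # X_freq: dict omega -> (M, M) full-matrix spectra; pixels: (P, 2) scan grid;
    # z_tx: (P, M) distances from each pixel to each transmitting element
    image = np.zeros(len(pixels))
    for p, z in enumerate(pixels):
        acc = 0.0 + 0.0j
        for om in omegas:
            v_hat = simplified_steering_vector(om, k_m, elements, z)
            w_hat = mpdr_weight(X_freq[om], v_hat)
            phase = np.sqrt(z_tx[p]) * np.exp(1j * k_m(om) * z_tx[p])
            acc += np.vdot(w_hat, X_freq[om] @ phase)    # Y(omega, z), Eq. (11)
        image[p] = abs(acc)                              # imaging index, Eq. (12) at t = 0
    return image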
3.2 Analysis of MPDR Algorithm Characteristics Under Full Matrix Acquisition
The purpose of this part is to illustrate the features of the MPDR algorithm under the full matrix acquisition scheme and to detail the principle of interference suppression when different numbers of passive scattering sources exist in the plate structure (a scattering source is generated passively by the interaction between the excitation signal and damage, so it is called a passive scattering source). As stated above, when the ith element is excited, the frequency domain form of the single-mode data received by the array can be expressed as Eq. 3. It can be further deduced that:

X_i(\omega) = \sum_{d=1}^{D} S_{d,m}(\omega) v_{m,i}(\omega, z_d) + N_i(\omega)
= \sum_{d=1}^{D} \frac{S_{d,m}(\omega) e^{-jk_m(\omega) z_{di}}}{\sqrt{z_{di}}} \hat{v}_m(\omega, z_d) + N_i(\omega)
= \sum_{d=1}^{D} a_{d,m,i}(\omega) \hat{v}_m(\omega, z_d) + N_i(\omega)   (13)

Where:

a_{i,d,m}(\omega) = \frac{S_{d,m}(\omega) e^{-jk_m(\omega) z_{di}}}{\sqrt{z_{di}}}   (14)
The simplified steering vector \hat{v}_m(\omega, z_d) can be used as a feature vector that defines the passive scattering source location. By Eq. 13, all steering vectors of the passive scattering sources are linearly independent, and any X_i(\omega) and X_j(\omega) are also linearly independent. Assume M \geq D, where D is the number of passive scattering sources in space. To facilitate the subsequent analysis, the passive scattering source steering vectors \hat{v}_m(\omega, z_d) in Eq. 14 are denoted [v_1, v_2, \cdots, v_D], and the steering vector coefficients a_{i,d,m}(\omega) in X_i(\omega) are denoted [a_{1,i}, a_{2,i}, \cdots, a_{D,i}]. Equation 15 can then be obtained:

X(\omega) = [X_1(\omega), X_2(\omega), \cdots, X_M(\omega)]
= [v_1, v_2, \cdots, v_D] \begin{bmatrix} a_{1,1} & \cdots & a_{1,M} \\ \vdots & \ddots & \vdots \\ a_{D,1} & \cdots & a_{D,M} \end{bmatrix} + [N_1(\omega), N_2(\omega), \cdots, N_M(\omega)]
= VA + N   (15)
Where the ranks of matrix A and matrix V are both D. When the noise N_i(\omega) is assumed to be Gaussian white noise, R_x can be further deduced as follows:

R_x = \sum_{i=1}^{M} X_i(\omega) X_i^{H}(\omega) + \sigma^2 I
= VAA^{H}V^{H} + NA^{H}V^{H} + VAN^{H} + NN^{H} + \sigma^2 I
\approx VAA^{H}V^{H} + \sigma_N^2 I + \sigma^2 I = \bar{V}BAA^{H}B^{H}\bar{V}^{H} + \left(\sigma_N^2 + \sigma^2\right) I
= \bar{V}C\Lambda C^{H}\bar{V}^{H} + \left(\sigma_N^2 + \sigma^2\right) I
= \hat{V}\Lambda\hat{V}^{H} + \left(\sigma_N^2 + \sigma^2\right) I   (16)
In the above equation, since N is a Gaussian white noise matrix, we have NA^{H}V^{H} \approx 0, VAN^{H} \approx 0, and NN^{H} \approx \sigma_N^2 I, where \sigma_N^2 is the noise power. V can be normalized and orthogonalized into \bar{V}, which can be expressed as V = \bar{V}B. Since BAA^{H}B^{H} is a full-rank quadratic matrix of dimension D \times D, it can be decomposed as BAA^{H}B^{H} = C\Lambda C^{H}, where C is an orthogonal matrix and \Lambda is a diagonal matrix whose diagonal elements are the eigenvalues of BAA^{H}B^{H}. In addition, with \hat{V} = \bar{V}C, the column vectors of \hat{V} are also orthogonal and normalized. Then R_x can be eigen-decomposed as follows:
D 2 2 λi + σN + σ2 vˆ i vˆ iH + σN + σ2 ci cH (17) Rx ≈ i # i=1
i=D+1
ˆ Where λi is the eigenvalue in the ith line for , vˆ i is the ith column vector in V, space. [ˆv1 , vˆ 2 · · · vˆ D , cD+1 · · · cM ] constitute a set of orthogonal basis of M dimensional H c Considered in the case of the high signal-to-noise ratio, and according to M i=D+1 i ci = D I − i=1 vˆ i vˆ iH , following is available: M 1 1 H v ˆ v ˆ + ci cH i 2 + σ2 i i 2 + σ2 σ λ + σ i N N i=1 i=D+1 D D 1 1 H H I− = vˆ vˆ + 2 vˆ i vˆ i 2 + σ2 i i σN + σ 2 λ + σN i=1 i i=1 D 1 λi H = 2 vˆ i vˆ i I− 2 + σ2 σN + σ 2 λi + σN i=1 M D 1 1 H ≈ 2 v ˆ v ˆ ci cH I − = i i i 2 + σ2 σN + σ 2 σ N i=1 i=D+1
R−1 x ≈
D
(18)
Source subspace consisted by [ˆv1 · · · vˆ D ] is orthogonal with noise subspace consisted by [cD+1 · · · cM ] to each other, at the same time any of passive scattering source steering vector vm,i (ω, zd ) belong to Source subspace, so there are: Rx−1 vm,i (ω, zd ) ≈
1 M ci cH i vm,i (ω, zd ) = 0# i=D+1 σ2
(19)
When the focal point is zc , and zc = zd , the incoming wave gain of MPDR weight for arbitrary scattering source position zd is calculated as: H A(zd ) = wm (ω, c)vm (ω, zd ) =
H (ω, z )R−1 v (ω, z ) vm,i c x m,i d H (ω, z )R−1 v (ω, z ) vm,i c x m,i c
≈
(20)
According to the above analysis, when the focus is not aligned with the passive scattering source, the gain of weight at zd , that is A(zd ) ≈ 0. While the gain of the
incoming wave at the focal point zc , that is A(zc ) = 1. That is to say, in the imaging scanning process, it is guaranteed that the scattered signal from the scanning position can be output without distortion, and the signal from other places can be suppressed by the calculated weight. So that the beam synthesis output is close to the real situation. The MPDR algorithm based on detection scheme of full matrix acquisition can effectively improve the imaging resolution and suppress the sidelobe artifact in the case of multiple damage.
4 Experimental and Result Analysis In this section, the ultrasonic phased array transmitting and receiving system developed by the research group is used as the experimental platform to carry out the imaging experiment of the double-damage full-matrix acquisition and detection scheme. The experimental platform is shown in the Fig. 2. With the center of the aluminum plate as the coordinate origin, the circular hole damage was prefabricated at the coordinates (50 mm, 100 mm) and (−40 mm, 120 mm), the radius of the circular hole was 4 mm, and 7 piezoelectric plates were arranged laterally on the X axis, the coordinates were located at (−24 mm, 0 mm) to (24 mm, 0 mm), the piezoelectric plates were spaced 8 mm apart, and numbered 1–7 from left to right. The experimental procedures are as follows: Set No. 1 piezoelectric plate as the signal excitation element to generate five-peak wave signals with the center frequency of 100 kHz through signal transmitting part of the DDS module. Through D/A conversion, low-pass filtering and high-voltage amplification, the signals are loaded into the piezoelectric plate to generate Lamb waves. After the completion of Step 1, the excitation piezoelectric plate is switched to the receiving mode, and the data acquisition system begins to work. The full array signal is stored into FIFO after passing through A/D conversion module, pre-filtering and signal amplification module, and then transmitted to PC through serial port for processing. The signal sampling rate is 2 MHz, and the sampling time length is 2 × 10−4 s.
(a) Instruments used in the experiment
(b) Array and damage layout schematic
Fig. 2. Experimental platform
The remaining array elements were set as the excitation element in turn, and steps 1 and 2 were repeated until the data collection of the full matrix was completed. MATLAB software on the PC was then used for imaging with the frequency domain full focus, superimposed MPDR, and full matrix acquisition MPDR methods. The above is the experimental process of imaging using the full matrix acquisition detection scheme in the case of double damage, and the imaging results are shown in Fig. 3. By comparing the API index [12] and the damage localization accuracy of each imaging result, the comparison in Table 1 is obtained.
(b) Frequency domain full focus imaging
(c) Superimposed MPDR imaging
(d) Full matrix acquisition MPDR imaging
Fig. 3. Double damage imaging results
Table 1. Comparison of imaging results

Prefabricated damage (mm) | Imaging algorithm | Damage position result (mm) | Absolute error (mm) | API
(−40, 120) | Frequency domain full focus | (−39, 122) | (−1, −2) | 4.623
(−40, 120) | Superimposed MPDR | (−40, 122) | (0, −2) | 3.905
(−40, 120) | Full matrix acquisition MPDR | (−40, 123) | (0, −3) | 2.125
(50, 100) | Frequency domain full focus | (51, 104) | (−1, −4) | 4.304
(50, 100) | Superimposed MPDR | (51, 104) | (−1, −4) | 3.420
(50, 100) | Full matrix acquisition MPDR | (50, 103) | (0, 3) | 1.769
As can be seen from the results in the table, in the left damage area, the damage API of the full matrix acquisition MPDR is reduced by 54% compared with the frequency domain full focus algorithm, and by 45.6% compared with the superimposed MPDR algorithm. In the right damage area, the damage API of the full matrix acquisition MPDR is reduced by 58.9% compared with the frequency domain full focus algorithm,
and by 48.3% compared with the superimposed MPDR algorithm. The MPDR algorithm reconstructed by the full matrix acquisition spectrum matrix can effectively solve the artifact caused by inaccurate spectral matrix characteristic information in superimposed MPDR imaging, and improve the transverse resolution of the damaged area. The above results demonstrate the effectiveness of MPDR algorithm based on full matrix acquisition in multi-damage imaging.
5 Conclusion In this paper, the autocorrelation matrix is reconstructed by using the data collected by the full matrix to contain the eigenvectors and eigenvalues corresponding to each source, which increases the information content of the autocorrelation matrix. The experimental results show that, in the case of double damage, compared with frequency domain full-focus algorithm, superimposed MPDR algorithm, the MPDR algorithm based on detection scheme of full matrix acquisition has better performance, effectively improves the transverse and longitudinal resolution of imaging results, and can effectively suppress the artifact caused by side lobe. This method is expected to provide a new reference for multi-defect high-precision imaging.
References 1. Shao, C., Zhao, N., Shi, C.: A class method of minimum variance distortion response beamforming. J. Xi’an Univ. Posts Telecommun. 19(03), 22–25 (2014) 2. Marcus, E., Stepinski, T.: Adaptive beamforming for array imaging of plate structures using lamb waves. IEEE Trans. Ultrason. Ferroelectr. Freq. Control 57(12), 2712–2724 (2011) 3. Gong, Z.: Study and experiment on adaptive beamforming algorithm for ultrasound imaging. Chongqing University (2016) 4. Wang, P., Xu, Q., Fan, W., et al.: Eigenspace-based forward-backward minimum variance beamforming applied to ultrasonic imaging. Chin. J. Acoust. 38(01), 65–70 (2013) 5. Yuan, J., Qin, Y.: The realization of ultrasonic adaptive beamforming algorithm based on spatial smoothing. Int. J. Ultrason. Technol. 12(1), 1–7 (2018) 6. Zhang, Y.: Research on virtual-array approach for airborne radar forward-looking superresolution imaging. University of Electronic Science and Technology of China (2018) 7. Lee, T.S., Lin, T.T.: Coherent interference suppression with complementally transformed adaptive beamformer. IEEE Trans. Antennas Propag. 46(5), 609–617 (1998) 8. Zhao, Y., Zhang, S.: Optimum beamforming for coherent signals. J. Commun. 02, 113–121 (2002) 9. Lu, M., He, Z.: Adaptive beam forming using split-polarity transformation for coherent signal and interference. IEEE Trans. Antennas Propag. 41(3), 314–324 (1993) 10. Engholm, S.: Adaptive beamforming for array imaging of plate structures using lamb waves. IEEE Trans. Ultrason. Ferroelectr. Freq. Control 57(12), 2712–2724 (2010) 11. Xu, Q.: Study on beamforming algorithm of ultrasound imaging. Chongqing University (2012) 12. Wen, J.: Research on phased array detection method of aerospace composite laminates drilling stratification defects. Nanchang Hangkong University (2017)
Balanced Optimization System of Construction Project Management Based on Improved Particle Swarm Algorithm Yilin Wang(B) Henan Technical College of Construction, Zhengzhou 450064, Henan, China
Abstract. Since the end of the 20th century, the construction industry has developed rapidly. Traditional construction project management involves three control goals: schedule, cost, and quality. With the continuous deepening of the concept of sustainability, the environmental protection goals are the same as the traditional three control goals. It is of great significance to put the multi-objective balanced optimization in the same important position so that the main control objectives of the project can be achieved better. In recent years, swarm intelligence algorithms have been widely introduced into the equilibrium optimization problem of engineering project management, and relatively satisfactory results have been achieved. Particle swarm algorithm is a non-numerical optimization calculation method based on the foraging process of birds and swarm intelligence. Since its proposal, it has received a lot of attention from scholars, and its application research has been solved from purely functional numerical optimization problems. It has penetrated into other fields. In view of this, by improving the level of project management and applying multi-objective optimization technology, my country’s economic development speed can be effectively improved. At present, our country has applied multi-objective optimization to the management of substation engineering projects, and conducted in-depth research on this as the development direction. The average value of Transaction per second is 65.21, which shows that the number of transactions processed by the system per second can well simulate real information query use cases. The average transaction response time is 84.21 s, which is too long. But if the peak response time of 170 s is not considered, the system transaction processing time will eventually stabilize between 30 s, which meets the performance requirements. Keywords: Particle swarm algorithm · Construction engineering · Project management · Equilibrium optimization
1 Introduction
The construction process of a building consumes a large amount of human and material resources and is one-off in character. That is to say, no precedent can be copied outright to complete a construction project, and no identical copy will exist afterwards. Therefore, from the beginning of the project
to in the process of completing this process, how to allocate resources reasonably, how to ensure the realization of project control objectives, and how to reduce the negative impact on the environment as much as possible is also a one-time irreversible process [1, 2]. This requires us to find and rely on advanced scientific management concepts and management techniques to achieve the main control goals of resource conservation, environmental protection and project management [3, 4]. In order to correctly handle the relationship between the various goals in the actual project, quantify the various goals of the project and obtain the balanced optimization plan of the construction project management as the basis for the target control of the project is an urgent need to be solved in the current project management field Question [5, 6]. Many foreign researchers have studied the relationship between construction project duration, quality, cost and other objectives under various assumptions. By establishing a comprehensive optimization model for each main control objective, various multiobjective solution methods are used to achieve multi-objective optimization [7, 8]. In the problem of solving the model, it has transitioned from the traditional multi-objective research and solution method to the advanced artificial intelligence evolutionary algorithm. Based on network planning technology (NPT), D Joshi proposed a method of optimizing the two by drawing the construction period-cost curve. Specifically, according to the goal of the construction period, the number of working days with the lowest construction period-cost slope was compressed on the key line of the project [9]. Abhilasha P et al. pointed out that the optimal use of resources should be considered in the design stage while considering the construction period, cost and quality, combined with Monte Carlo simulation method for construction period and cost data processing, using fuzzy numbers to estimate the quality, so as to carry out multi-objective optimization [10]. On the basis of clarifying the purpose and significance of the subject research and understanding the current research status at home and abroad, this paper first analyzes the project management objective system and the relationship between the objectives, and discusses the multi-objective optimization theory and the multi-objective problem solving method. On the basis of this, combined with the research background of sustainable development and the relationship between the goals in engineering practice, a construction period-cost-quality-environmental protection multi-objective optimization model is established [10, 11].
2 Balanced Optimization System of Construction Project Management Based on Improved Particle Swarm Algorithm 2.1 Demand Analysis The balanced optimization system of construction project management based on improved particle swarm algorithm mainly achieves the following goals: to realize the systematization, standardization and automation of various information; reduce labor costs, reduce management costs, and improve labor efficiency; provide accurate and comprehensive information to help provide a basis for leadership decision-making. The system mainly includes project cost management, warehouse management, supplier
management, material management and other functions, covering the main aspects of construction project management more comprehensively. At the same time, it provides functions such as personal information management and system management. The system adopts the B/S architecture and supports various mainstream browsers without the need to install a PC client; it also supports smart phone terminals and provides a mobile office platform. With the mobile terminal based on the Android operating system as the featured application platform, the system can manage the project site data and photos and automatically generate various reports, so as to achieve remote real-time tracking and monitoring of construction projects by the company’s leadership. (1) System participants Participating role (actor) is the person or thing that interacts with the system. Through the analysis of system requirements, the roles of participating systems and various subsystems are obtained. According to the user’s usage rights in the system, users can be divided into administrators, leaders, material informants, finance, ordinary employees, etc.; according to the user’s role in a specific project, users can be divided into project managers, workers, etc. Managers, interior managers, cost officers, technicians, purchasers, warehouse managers, security officers, etc. Among them, the administrator can participate in all subsystems and have all permissions, and only the administrator can participate in the user management subsystem; the leader can participate in all the subsystems, but only the viewing authority; the material information officer can participate in the material information subsystem; financial can participate in the payment subsystem; ordinary employees can participate in the personal information management subsystem. In the project material subsystem, the project manager can participate in all subdivided subsystems and has all the rights, and only the project manager participates in the project management subsystem and the sub-project subsystem; the budget subsystem and the settlement subsystem are managed by the cost officer participation; the required planning subsystem is participated by the foreman; the procurement subsystem is participated by the purchaser; the warehouse management subsystem is participated by the librarian. (2) System business analysis UML business modeling examples can have a general understanding of system functions through use case diagrams. For complex systems, the functions of the system can be described by means of hierarchical decomposition and step-by-step refinement. 1) Personal information management Personal information management is for users to view, modify and improve personal information, and modify passwords. 2) System management System administrators can add, delete, modify, and check leaders, material informants, finance, ordinary employees, and other users with different permissions that can use the system, manage their basic information, passwords, etc., perform permission assignments, customize displays, etc. The system function
can precisely control the operator’s authority, and the user-friendly functions such as custom display can help the user to complete the work easily and efficiently. 3) Material information management Material information management includes two parts: supplier management and material management. Supplier management. The material information officer is responsible for supplier management and establishing a list of qualified suppliers and their database. Including the establishment of systems for the introduction and evaluation of qualified suppliers, the maintenance of supplier databases and the collection and sorting of supplier information. This functional module is used by the material informant to assist the material informant in the systematic management of qualified suppliers. Material management. The material information officer is responsible for the collection of market information, the collection of material market price information, and the collection of new materials and brand information, the establishment of a brand price information database, and the management of various materials, product materials and samples. Classify and sort various materials and samples, and establish a departmental database. This functional module is used by the material informant to assist the material informant to complete the work more conveniently and quickly. 4) Project material management Project material management includes two parts: project cost management and warehouse management. Project cost management. Project cost management is composed of several submodules such as project management, budget, demand planning, procurement, settlement, analysis, etc., which is the core of this system. Project cost management classifies and counts the procurement cost of each project material, and conducts cost accounting according to each type of work category, classifies and counts the procurement cost of each project material, analyzes the profit and loss, finds the reason, implements the responsibility, reports to the procurement supervisor, and assists in the cost of materials Intermediate control, auditing the quantity of completed projects (checking acceptance quantity and delivery note quantity). Warehouse management. The warehouse clerk is responsible for the management of incoming and outgoing materials, including materials entering the site, picking, returning, and reporting damage. This functional module is used by the warehouse keeper to assist the warehouse keeper to complete the work more conveniently and quickly. Warehouse clerk can use this function through Android smart phone terminal or PC browser. The Android smart phone terminal has its own camera function to facilitate the shooting of construction and warehouse site photos. The photos and the record report formed by the system form a data package, which is transmitted to the server via Wi-Fi or 3G network.
(3) Demand analysis of supplier management Supplier management includes operations such as adding, searching, modifying and deleting suppliers, as well as managing the information of materials supplied by a certain supplier. (4) Analysis of procurement requirements Purchasing management includes operations such as adding, searching, modifying and deleting purchases, as well as managing all payment information for a certain purchase. 2.2 Analysis of System Non-functional Requirements Non-functional requirements are another type of requirements that are different from functional requirements, and the two manifestations are also different. Functional requirements are generally presented through language or graphics, while non-functional requirements have quantitative indicators that can be presented through tables. Some indicators are listed in the table, and these indicators can be used to measure the characteristics of the system. These indicators stipulate the level and quality of services that the system can provide, and certain constraints that the system must comply with. Although non-functional requirements have nothing to do with the specific functions used by the user, they seriously affect the user’s experience of using these functions. Non-functional requirements generally include the availability, performance, reliability, scalability, and supportability of the system. This article uses quantitative index requirements to describe the non-functional requirements of the system to ensure its accuracy and testability. System availability index is one of the most important indexes in non-functional requirements. Because the availability index directly reflects the applicability of system functions. For a system that does not meet user needs, no matter how high other indicators are, it is meaningless. 2.3 Improved Particle Swarm Algorithm The method used in this article is quick sort to construct a non-dominated set. The relevant steps are as follows: (1) Select an individual in the population S; (2) Compare other individuals in the population with this individual. At this time, the particle is divided into two parts, one is dominated by the individual, and the other is the dominating individual or is not related to the individual; (3) If the individual is not dominated by other individuals in the population, it means that the individual is a non-dominated solution and will be placed in the non-dominated set at this time, otherwise it will not be placed; (4) Repeat the above process until S is empty. The advantages of the non-dominated set constructed by this method are as follows: the population at the beginning of each algorithm is the set Si composed of the dominating individuals or individuals not related to the individuals found in the previous generation, rather than the entire population. Therefore, in the algorithm comparison, the comparison
range is narrowed, which is beneficial to improving the overall running speed of the algorithm. The specific expressions are as follows. The personal best is updated as:

P_b(t+1) = \begin{cases} x_i(t+1), & \text{if } f(x_i(t+1)) < f(P_b(t)) \\ P_b(t), & \text{if } f(x_i(t+1)) \geq f(P_b(t)) \end{cases}   (1)

Judgment of particle quality: within the same grid set, the quality of the solutions needs to be judged. What we need to do is keep the better solutions and delete the redundant inferior ones. The non-dominated solutions differ in how close they are to the real Pareto front; the closer solutions are the better ones that we need, and the more distant ones are inferior. Therefore, the crowding distance method is used to judge this part of the non-dominated solutions, and S_i^t is defined as the distance between a non-dominated solution and the real Pareto front:

S_i^t = \mathrm{Pos}_i^t - \mathrm{Pos}_{\mathrm{front}(i)}^t   (2)

Deletion of particles in the grid: when M_{t+1} > M, the external archive is trimmed, and the number of particles to delete from grid k is:

\Delta M_k = \left\lfloor \frac{M_{t+1} - M}{M_{t+1}} \times \mathrm{grid}(k, m) + 0.5 \right\rfloor   (3)
Where grid(k, m) is the number of particles in the k-th grid in the n-dimensional target space.
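A compact Python sketch of these update rules is given below. It is an illustrative reading of Eqs. (1)-(3) and of the non-dominated-set construction described above, not the author's implementation; in particular, the scalar comparison of Eq. (1) is read here as Pareto dominance, since the problem is multi-objective, and the grid assignment of archive members is assumed to be given.

import random

def dominates(a, b):
    # Pareto dominance for minimization: a dominates b if it is no worse in every
    # objective and strictly better in at least one
    return all(x <= y for x, y in zip(a, b)) and any(x < y for x, y in zip(a, b))

def non_dominated(objs):
    # Quick-sort-style construction: keep only solutions not dominated by any other
    return [i for i, a in enumerate(objs)
            if not any(dominates(b, a) for j, b in enumerate(objs) if j != i)]

def update_personal_best(x_new, f_new, p_best, f_best):
    # Eq. (1), read with dominance: replace the personal best when the new position dominates it
    if dominates(f_new, f_best):
        return x_new, f_new
    return p_best, f_best

def trim_archive(archive_grids, M_next, M):
    # Eq. (3): when the archive exceeds its size M, delete a rounded share of the
    # members of each grid k in proportion to its occupancy grid(k)
    for k, members in archive_grids.items():
        n_delete = int((M_next - M) / M_next * len(members) + 0.5)
        for _ in range(min(n_delete, len(members))):
            members.pop(random.randrange(len(members)))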
3 Research Experiment on Balanced Optimization System of Construction Project Management Based on Improved Particle Swarm Algorithm
3.1 Parameter Setting
MaxIt = 500;   % maximum number of iterations
nPop = 100;    % population size
nRep = 50;     % repository (archive) size
w = 0.6;       % inertia weight
c1 = 1;        % individual learning factor
c2 = 2;        % global learning factor
nGrid = 7;     % number of grids in each dimension
alpha = 0.1;   % grid expansion rate
beta = 2;      % leader selection pressure
gamma = 2;     % deletion selection pressure
mu = 0.1;      % mutation probability
3.2 Data
nVar = 18;   % number of decision variables
VarMin = [12,10,18,18,15,38,12,21,19,10,13,14,19,18,19,11,2,1];   % minimum completion times
VarMax = [15,15,22,21,16,47,15,25,211,15,18,22,20,22,15,7,3];     % maximum completion times
3.3 Initialize the Population
Individuals in the initial population are generated randomly. In the initial state, the randomly generated position of each particle is taken as its personal best.
3.4 Iterative Update
When there is a dominance relationship between a particle's individual leader and the new particle, the individual leader is determined according to that dominance relationship. In this paper, the roulette-wheel method is used to determine the individual leaders of the initial particles. During the iterative process, mutation is also considered: a particle is selected at random for mutation, and if the mutated particle forms a non-dominated solution, it is retained.
3.5 Judging the Dominance Relationship Between Particles
According to the principle of Pareto optimality, the non-dominated set is updated by judging the dominance relationships among particles, and finally the set of all optimal solutions is obtained.
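To tie Sects. 3.1-3.4 together, the following is a hedged NumPy sketch of one possible initialization and velocity/position update using the parameter names listed above. It is not the author's code: the upper bounds are a placeholder, the leader is assumed to be supplied by the archive selection step, and constraint handling beyond simple clipping is omitted.

import numpy as np

rng = np.random.default_rng(0)
nPop, nVar, w, c1, c2 = 100, 18, 0.6, 1.0, 2.0
var_min = np.array([12,10,18,18,15,38,12,21,19,10,13,14,19,18,19,11,2,1], float)
var_max = var_min + 5.0        # placeholder upper bounds; the paper's VarMax is used in practice

pos = rng.uniform(var_min, var_max, size=(nPop, nVar))   # random initial positions
vel = np.zeros((nPop, nVar))
p_best = pos.copy()                                      # initial personal bests

def pso_step(pos, vel, p_best, leader):
    # Standard velocity and position update with inertia w and learning factors c1, c2
    r1, r2 = rng.random(pos.shape), rng.random(pos.shape)
    vel = w * vel + c1 * r1 * (p_best - pos) + c2 * r2 * (leader - pos)
    pos = np.clip(pos + vel, var_min, var_max)           # keep activity durations within bounds
    return pos, vel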
4 Research and Experiment Analysis of Balanced Optimization System for Construction Project Management Based on Improved Particle Swarm Algorithm
4.1 Stress Test
The system was subjected to stress tests with 3000, 5000, and 7000 visits to verify its performance gradient under different numbers of user visits. The core function of the balanced optimization system for construction project management based on the improved particle swarm algorithm in actual use, namely the full-text retrieval of research information, which has the highest usage frequency and the highest performance requirement, was selected as the test sample for the performance test.
Y. Wang Table 1. Stress test statistics 60S 120S 180S 240S 300S Virtual users
30
Response time
17.5 34.6
Number of system processing 12
Virtual users
Response time
50 11
70
90
120
42.8
51.3
68.7
13
12
11
Number of system processing
Fig. 1. Stress test statistics (user number versus test period)
As shown in Table 1 and Fig. 1, as the number of system visits increases, the concurrent processing of the system increases step by step, and the response time remains good: the actual response time is no more than 3 s, which fully meets the requirements. This verifies that the system can fully satisfy users' access needs. Analyzing the number of requests processed per second, the actual processing capacity does not degrade as the CPU utilization of the system increases, and the performance is good.
Table 2. Corresponding time for querying physical objects

Elapsed scenario time (mm:ss) | 0:00 | 01:00 | 02:00 | 03:00 | 04:00 | 05:00 | 06:00 | 07:00 | 08:00 | 09:00 | 10:00
Average response time (s) | 0 | 30 | 62 | 112 | 163 | 146 | 42 | 31 | 30 | 31 | 30
Fig. 2. Corresponding time of system transaction processing (average response time versus elapsed scenario time, mm:ss)
5 Conclusions
The balanced optimization system of construction project management based on the improved particle swarm algorithm realizes a series of functions such as cost management, material procurement, and warehouse management. The Web platform of the system uses the improved particle swarm algorithm and deploys the application on the server; the client only needs a browser to use the system, which avoids the trouble of future system upgrades. The warehouse management function adopts a C/S structure and runs on an Android smartphone terminal, which helps the warehouse clerk complete the work more conveniently and quickly. The Web platform adopts a lightweight J2EE Struts2 + Spring + iBatis multi-layer architecture, which separates the presentation layer, business logic layer, and persistence layer, facilitates development and maintenance, and gives the system better scalability and maintainability.
References 1. Cajzek, R., Klansek, U.: Cost optimization of project schedules under constrained resources and alternative production processes by mixed-integer nonlinear programming. Eng. Constr. Archit. Manag. 26(10), 2474–2508 (2019) 2. Baek, Y.S., Lee, S., Filatov, M., et al.: Optimization of three state conical intersections by adaptive penalty function algorithm in connection with the mixed-reference spin-flip timedependent density functional theory method (MRSF-TDDFT). J. Phys. Chem. A 125(9), 1994–2006 (2021) 3. Li, N., Wang, S.: Pricing options on investment project expansions under commodity price uncertainty. J. Ind. Manag. Optim. 15(1), 261–273 (2019) 4. Davydova, T., Arsen’Ev, Y., Shelobaev, S.: Project and program management with optimization of their functioning. Econ. XXI Century Innov. Invest. Educ. 7(1), 38–45 (2020) 5. He, W., Shi, Y., Kong, D.: Optimization model calculation of construction cost and time based on genetic algorithm. IOP Conf. Ser. Earth Environ. Sci. 242(6), 062044 (7 pp) (2019) 6. Chengke, W., Chunjiang, C., Rui, J., et al.: Understanding laborers’ behavioral diversities in multinational construction projects using integrated simulation approach. Eng. Constr. Archit. Manag. 26(9), 2120–2146 (2019) 7. Meneylyuk, A., Nikiforov, A.: Optimization of managerial, organizational and technological solutions of grain storages construction and reconstruction. Tehniˇcki Glasnik 14(2), 121–134 (2020) 8. Filimonova, L.A., Ckvopcova, H.K.: Optimization models in project management through operating CASH FLOWS. Voprosy Regionalnoj Ekonomiki 1(42), 120–131 (2020) 9. Joshi, D., Mittal, M.L., Sharma, M.K., et al.: An effective teaching-learning-based optimization algorithm for the multi-skill resource-constrained project scheduling problem. J. Model. Manag. 14(4), 1064–1087 (2019) 10. Abhilasha, P., Kumar, T.K., Neeraj, J.K.: A qualitative framework for selection of optimization algorithm for multi-objective trade-off problem in construction projects. Eng. Constr. Archit. Manag. 26(9), 1924–1945 (2019) 11. Mascaraque-Ramirez, C., Para-Gonzalez, L., Marco-Jornet, P.: Management of a ferry construction project using a production-oriented design methodology. J. Ship Prod. Des. 35(4), 309–316 (2019)
Study on the Rain Removal Algorithm of Single Image Junhua Shao(B) and Qiang Li Research Institute, Lanzhou Jiaotong University, Lanzhou 730070, Gansu, China
Abstract. In this paper, we propose a new method to remove raindrops and rain streaks from a single image. Raindrops and rain streaks in an image can severely hamper the visibility of the background scene and degrade the image considerably. By analysing rainy images, we conclude that raindrops and rain streaks are natural noise; at the same time, we find that, compared with the background scene, the spectrum of raindrops and rain streaks consists mainly of high-frequency components, especially for rain streaks. We therefore propose a new method to remove raindrops and rain streaks from a single image. First, we use the median filter to remove the salt-and-pepper noise contained in the natural noise. Secondly, we use a group of low-pass filters to filter out the high-frequency components of the image. In this way, many raindrops and rain streaks are removed, but after filtering there is something like mist or fog on the image, so we use a haze removal method to remove it. Finally, we use sharpening to bring out the details of the image. Experiments with our method show significant results, so we can conclude that our approach is effective and quantitative. Keywords: Rainy image · Gaussian filter · Bilateral filter · Median filter · Convolution complete
1 Introduction
With the development of science and technology, image processing plays an increasingly important role in many fields. Image processing technology includes image compression, image enhancement, image recognition, etc. Rain removal is an important but very difficult image processing task. In reality, raindrops or rain streaks occlude the real image information, such as the road situation or the people in the scene, which may, for example, affect case investigation. Therefore, removing raindrops and rain streaks from images has practical significance. There are mainly four kinds of rain removal methods: (1) methods based on a physical model only, (2) methods based on image processing, (3) methods based on sparse coding, dictionary learning, and classifiers, and (4) methods based on deep convolutional neural networks [1]. The paper 'Deep Joint Rain Detection and Removal from a Single Image' in 2017 proposed a method to
remove the rain streaks [2]. The paper 'Attentive Generative Adversarial Network for Raindrop Removal from A Single Image' proposed a new method in 2018; its main idea is to inject visual attention into both the generative and discriminative networks of an attentive generative adversarial network [3]. All of the above methods need very large data sets, but it is very difficult to build a rainy image set, because it is almost impossible to capture the rainy image and the corresponding rain-free image at the same time. Up to now, rainy images have usually been obtained by computer simulation, so there are differences between simulated rainy images and real images. This paper proposes a new method to remove raindrops and rain streaks from a single image. Our experiments show the effectiveness of our approach both qualitatively and quantitatively.
2 Analysis of Rainy Images
Raindrops and rain streaks on an image are noise. Analysing a large number of rainy images, we conclude that raindrops and rain streaks behave as independent random noise, which includes salt-and-pepper noise, Gaussian noise, etc. We also find that this noise is mainly a high-frequency component. Passing two rainy images through a high-pass filter gives Fig. 1.
Fig. 1. a: Raw rainy image, b: High-frequency image of the raw rainy image
This paper proposes a method that uses a series of low-pass filters to process the rainy image. As much of the important information in the rainy image as possible must be preserved, and the filtered image must remain clear and visually pleasing. After this filtering, we apply a haze removal method and an image sharpening method to make the image clearer and its edges sharper.
3 Selection of Filters
3.1 Basic Filters
First, we use the median filter to remove the salt-and-pepper noise in the image. The median filter is a statistical, sorting-based filter, usually implemented as a sliding window with an odd number of nodes. For
each node (i, j) in the image, the median filter takes that node and the nodes around it as a set, sorts the values in the set, and assigns the median value to node (i, j). Several sliding window shapes can be chosen, such as line, square, round or cross shapes. In this paper, we select a 3 × 3 square sliding window. Its principle is shown in Fig. 2.
Fig. 2. Principle of 3 × 3 median filter
As shown in the figure above, the median filter is good at removing unusually large and unusually small pixel values, so it is useful for filtering salt-and-pepper noise. Exploiting the correlation between neighbouring pixels [4, 5], we then relate the raindrop or rain streak pixels to their surroundings by defining a convolution kernel, so that the neighbouring pixels can make up for the positions covered by raindrops or rain streaks; the mean filter does exactly this. The principle of the mean filter is shown in Fig. 3. For a pixel (x, y), select a convolution template made up of the pixels around (x, y), compute the average value of the pixels in the template, and assign this average, denoted g(x, y), to the current pixel, so that g(x, y) = Σf(x, y)/m, where the sum runs over the template and m is the number of pixels in the template.
Fig. 3. Principle of mean filter
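To make this first filtering stage concrete, the following Python/OpenCV sketch applies a 3 × 3 median filter followed by a 3 × 3 mean (box) filter; the file name is only an illustrative assumption, and nothing beyond the 3 × 3 windows described above is taken from the paper.

```python
import cv2

# Load a rainy image (placeholder path; any BGR image works).
rainy = cv2.imread("rainy.jpg")

# 3 x 3 median filter: suppresses salt-and-pepper noise by replacing each
# pixel with the median of its square neighbourhood.
median_filtered = cv2.medianBlur(rainy, ksize=3)

# 3 x 3 mean (box) filter: each pixel becomes the average of its neighbours,
# letting the surrounding background partly fill in raindrop positions.
mean_filtered = cv2.blur(median_filtered, ksize=(3, 3))

cv2.imwrite("stage1_filtered.jpg", mean_filtered)
```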
3.2 Selection of the Gaussian Filter After above operation, we can filter out many raindrops and rain streaks, but there still some residual. In order to filter out the high-frequency component further, we use Gaussian filtering to do it. Because the Gaussian filtering can keep more details and less blur of the image compared to mean filter. The principle of the Gaussian filtering is shown as formula (1). H (x, y) = e−D
2 (x,y)/2σ 2
G(x, y) =
1 − x22 22 e 2σ 2π σ 2
(1)
In formula (1), D(x, y) is the distance from the centre frequency and σ is the degree of deviation from the centre frequency. Gaussian filtering scans each pixel of the image with a convolution kernel, multiplies the neighbouring pixels by the corresponding values of the kernel [6], and finally sums all the products. In image processing, Gaussian filtering can be implemented in two ways: discrete sliding-window convolution or the Fourier transform. In this paper we use discrete sliding-window convolution, which requires a Gaussian kernel. The kernel can be calculated from formula (1); its size is an odd number such as 3, 5, 7 or 9. In this paper we use a 3 × 3 Gaussian kernel, whose coordinate layout is shown in Fig. 4.
Fig. 4. The Gaussian kernel with size 3 × 3
Substituting all the coordinates into the Gaussian function G gives the corresponding value template, from which the Gaussian kernel is obtained by the following calculation. The template has two forms: the decimal template and the integer template. The decimal template holds the original values calculated directly with the Gaussian function G; a 3 × 3 decimal template is shown in Fig. 5. The integer template is the normalized template: the value in the top-left corner of the decimal template is normalized to 1, each remaining value is divided by that top-left value, and the results are rounded down, giving the integer template; a 3 × 3 integer template is shown in Fig. 6.
Fig. 5. 3 × 3 decimal template
Fig. 6. 3 × 3 integer template
After that, the integer template is multiplied by the reciprocal of the sum of all its values (i.e. normalized), which gives the Gaussian kernel; the resulting 3 × 3 Gaussian template is shown in Fig. 7.
Fig. 7. 3 × 3, σ = 0.8 Gaussian template
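A small sketch of this template construction: the two-dimensional Gaussian function is evaluated on the 3 × 3 coordinate grid and the result is normalized so that its entries sum to one. The σ = 0.8 value follows the caption of Fig. 7; normalizing the decimal template directly (rather than going through the integer template) is an assumption that yields the same kernel up to rounding.

```python
import numpy as np

def gaussian_kernel(size=3, sigma=0.8):
    """Build a size x size Gaussian kernel normalized to sum to 1."""
    half = size // 2
    # Coordinates relative to the kernel centre, e.g. -1, 0, 1 for size 3.
    ax = np.arange(-half, half + 1)
    xx, yy = np.meshgrid(ax, ax)
    # Decimal template: direct evaluation of the 2-D Gaussian function G(x, y).
    decimal = np.exp(-(xx**2 + yy**2) / (2.0 * sigma**2)) / (2.0 * np.pi * sigma**2)
    # Normalization: equivalent to dividing the integer template by the sum of its entries.
    return decimal / decimal.sum()

kernel = gaussian_kernel()
print(np.round(kernel, 4))
```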
σ is the standard deviation of the Gaussian distribution and represents how spread out the data are: the smaller σ is, the larger the value at the centre of the template, the smaller the surrounding values, and the weaker the smoothing of the image. Gaussian filtering places a two-dimensional Gaussian distribution over the image matrix for the convolution operation, so it considers the spatial distance relationship between neighbouring pixels. Since the rainy image is a three-channel colour image, it is separated into single-channel images (red, green and blue), each channel is filtered by Gaussian filtering separately, and the channels are finally merged.

3.3 Bilateral Filter
After the above operations, some residual still remains. We filter it with a nonlinear bilateral filter, which reduces noise and smooths the image while preserving edges. Like the other filters, the bilateral filter is a weighted-average filter whose weights are based on the Gaussian distribution. Its weight considers not only the Euclidean distance between pixels but also the difference between their gray values; it can be seen as a combination of the Gaussian filter and the alpha-trimmed mean filter. The Gaussian filter only considers the Euclidean distance between pixels, and its template coefficients decrease as the window grows; the alpha-trimmed mean filter only considers the gray-value difference between pixels and averages the remaining values after trimming the α% largest and the α% smallest values. The distance template of the bilateral filter is generated by a two-dimensional Gaussian distribution, formula (2); the value template is generated by a one-dimensional Gaussian function, formula (3).

d(i, j, k, l) = exp(−((i − k)² + (j − l)²)/(2σi²))   (2)
In formula (2), (k, l) is the centre coordinate of the template window, (i, j) ranges over the other positions of the template window, and σi is the standard deviation of the Gaussian function.

r(i, j, k, l) = exp(−|f(i, j) − f(k, l)|²/(2σr²))   (3)
In formula (3), f(x, y) is the pixel value at point (x, y), (k, l) is the centre coordinate of the template window, (i, j) ranges over the other positions of the template window, and σr is the standard deviation of the Gaussian function. The template of the bilateral filter is the product of the distance template and the value template, as given by formula (4).

w(i, j, k, l) = d(i, j, k, l) · r(i, j, k, l) = exp(−((i − k)² + (j − l)²)/(2σi²) − |f(i, j) − f(k, l)|²/(2σr²))   (4)

3.4 Haze Removal and Sharpening
Compared with the background, raindrops and rain streaks are lighter, so after the above filtering the image looks as if it were covered by mist or fog. We remove this with the method of 'Single Image Haze Removal Using Dark Channel Prior' [7]. The fog image model is given by formula (5).

I(x) = J(x)t(x) + A(1 − t(x))   (5)
In formula (5), I(x) is the image with fog, J(x) is the image without fog, A is the atmospheric light, and t(x) is the atmospheric transmission rate. Rearranging formula (5) gives formula (6).

J(x) = (I(x) − A)/t(x) + A   (6)
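A highly simplified sketch of this haze-removal step is given below. The patch size, the ω weight and the lower bound on t(x) are common choices from the dark channel prior literature and are assumptions here, not values specified by the paper.

```python
import cv2
import numpy as np

def dehaze_dark_channel(img_bgr, patch=15, omega=0.95, t_min=0.1):
    """Simplified dark channel prior: I = J*t + A*(1-t)  =>  J = (I - A)/t + A."""
    img = img_bgr.astype(np.float64) / 255.0
    kernel = np.ones((patch, patch), np.uint8)
    # Dark channel: per-pixel minimum over colour channels, then a local minimum filter.
    dark = cv2.erode(img.min(axis=2), kernel)
    # Atmospheric light A: mean colour of the brightest 0.1% dark-channel pixels.
    n_top = max(1, int(dark.size * 0.001))
    idx = np.unravel_index(np.argsort(dark, axis=None)[-n_top:], dark.shape)
    A = img[idx].mean(axis=0)
    # Transmission estimate t(x) = 1 - omega * dark channel of I/A.
    t = 1.0 - omega * cv2.erode((img / A).min(axis=2), kernel)
    t = np.clip(t, t_min, 1.0)[..., None]
    # Scene radiance recovery, formula (6): J = (I - A)/t + A.
    J = (img - A) / t + A
    return (np.clip(J, 0, 1) * 255).astype(np.uint8)
```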
The image is somewhat blurred after the above operations, so we sharpen it to make its edges clearer. Sharpening keeps the image background while making edges more distinct; the final result highlights the small details of the image while preserving the background. The sharpening kernel is shown in Fig. 8; we apply this 3 × 3 kernel to the image.
Fig. 8. 3 × 3 sharpen kernel
Before sharpening, we convert the image to the YUV colour space, because in RGB the luminance and colour of a pixel are implicitly mixed, so the luminance cannot be convolved on its own. In this way we remove much of the rain from the image without using deep learning, so no training data set has to be prepared, which makes the method easier to apply.
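A minimal sketch of this sharpening step: the image is converted to YUV, only the luminance (Y) channel is convolved with a 3 × 3 sharpening kernel, and the result is converted back to BGR. The kernel values below are a standard Laplacian-style sharpening mask assumed for illustration, since Fig. 8 is not reproduced here.

```python
import cv2
import numpy as np

# Common 3 x 3 sharpening kernel (assumed; the kernel in Fig. 8 may differ).
SHARPEN_KERNEL = np.array([[ 0, -1,  0],
                           [-1,  5, -1],
                           [ 0, -1,  0]], dtype=np.float32)

def sharpen_luminance(img_bgr):
    """Sharpen only the Y (luminance) channel in YUV space."""
    yuv = cv2.cvtColor(img_bgr, cv2.COLOR_BGR2YUV)
    y, u, v = cv2.split(yuv)
    y_sharp = cv2.filter2D(y, ddepth=-1, kernel=SHARPEN_KERNEL)
    return cv2.cvtColor(cv2.merge([y_sharp, u, v]), cv2.COLOR_YUV2BGR)
```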
4 Experiment
We apply our method experimentally and observe the results shown in Fig. 9.
Fig. 9. Raw image ((3,3),3) Gaussian kernel ((5,5),3) Gaussian kernel ((9,9),3) Gaussian kernel
5 Conclusion
Analysing the above results, we find that the effect differs for different images and different types of rain. The smoother the background, the larger the convolution template should be and the better the result; if the rain appears as rain streaks, the convolution template should be smaller.
References
1. Four main methods about raindrop removal from a single image. https://blog.csdn.net/gaotihong/article/details/79077881
2. Yang, W., Tan, R.T., Feng, J., Liu, J., Guo, Z., Yan, S.: Deep joint rain detection and removal from a single image. arXiv:1609.07769v3 [cs.CV], 13 Mar 2017
3. Qian, R., Tan, R.T., Yang, W., Su, J., Liu, J.: Attentive generative adversarial network for raindrop removal from a single image. arXiv:1711.10098v4 [cs.CV], 6 May 2018 4. Xu, M., Ding, Y.D., et al.: Algorithm of scratch removal about neighborhood pixels correlation based on 5 multiply 5 field. J. Shanghai Univ. (Nat. Sci. Ed.), May 2018 5. Zhang, J.: Algorithm of image in painting for neighbor pixels and image denoising based on marginal point. Comput. Eng. Des. (13) (2009) 6. Image Kernels. https://setosa.io/ev/image-kernels 7. He, K.M., Sun, J., Tang, X.O., et al.: Single image haze removal using dark channel prior. IEEE Trans. Pattern Anal. Mach. Intell. 33(12), 2341–2353 (2011)
Basketball Action Behavior Recognition Algorithm Based on Dynamic Recognition Technology He Li(B) Physical Education Institute, Bohai University, Jinzhou 121000, Liaoning, China
Abstract. Behavior recognition is an important research topic in the fields of artificial intelligence and computer vision. In daily life, smart devices with behavior recognition capabilities are widely used in video retrieval, video surveillance, human-computer interaction and other fields, where they play an important role. This paper studies the concept, function and application of dynamic recognition technology, summarizes basketball action recognition, and adopts a saliency detection algorithm. The results show that the UCF-101 data set with 25 iterations reaches the largest proportion, 23.4%, and the HMDB-51 data set with 10 iterations reaches 23.6%. As the number of iterations increases, the accuracy improves and then stabilizes; considering training time, the final number of iterations is chosen as 10. Keywords: Dynamic recognition technology · Basketball action · Behavior recognition algorithm · Saliency detection algorithm
1 Introduction
With the continuous improvement of people's material living standards, artificial-intelligence furniture, home appliances and other high-tech electronic products based on computer vision are increasingly close to daily life. As an important research field of computer vision, human behavior recognition has also received extensive attention. With the continuous progress of science and technology, many experts have studied behavior recognition algorithms. For example, some teams in China have studied object detection in video surveillance and provided systems and/or technologies to regulate individual behavior while playing video games. Such a system includes mechanisms and/or patterns for identifying physical and/or psychological activities similar to those performed by game characters and suited to an individual's physical or mental abilities. It requires individuals to perform selected activities during video game execution, monitors the individual's activities, reproduces the individual's behavior while performing the selected tasks, and associates it with the game character. One work describes a method to determine the strength of interaction with a computer program. The method and apparatus include capturing an image of the capture area, identifying an input object
in the image, identifying an initial value of a parameter of the input object, capturing a second image of the capture area, and identifying a second value of the parameter of the input object. This parameter identifies one or more shapes, colors or brightness levels of the input object and is affected by manual operation of the object. The variation range of the parameter, that is, the difference between the second value and the first value, is calculated, and an activity input is provided to the computer program that includes an intensity value indicating the degree of change of the parameter. A recognition algorithm for basketball against complex backgrounds has also been designed. A feature selection method based on the ant colony algorithm has been proposed to accelerate the extraction of mobile-user behavior recognition signals: the sample data are preprocessed and the features are optimized according to the classification sensitivity of the different behavior features, so as to reduce the dimension of the feature search space; combining the ant colony algorithm with a neural network classifier, the feature classification accuracy is optimized a second time. Some experts have studied moving object tracking and introduced the Gram-Schmidt orthogonalization method to effectively represent the high-dimensional space in the original space. Another study proposes a monocular-video human pose tracking and recognition algorithm, which uses video features combined with 3D motion capture data to model the parameters of human body parts. First, the structure of a 3D data projection constraint graph is defined; to simplify the reasoning process, a constraint-graph spanning tree construction algorithm and a balancing algorithm are proposed. Combining the proposed functional mechanism, the constraint-graph spanning tree and the Metropolis-Hastings method, human motion in monocular video is tracked and recognized and the 3D motion parameters are derived. Using Markov chain Monte Carlo (MCMC) and constraint mapping, a data-driven online human behavior recognition algorithm has been proposed. A further behavior recognition algorithm based on block matrices covers six abnormal behaviors, such as jumping, accelerated running, falling, squatting, waving and winding: the frame-difference method extracts the contour features of moving objects in the video stream, mathematical morphology processes them, two-dimensional linear discriminant analysis extracts contour features, and a nearest-neighbor classifier performs template-matching classification. An algorithm for solving the geometric and temporal distribution of STIPs has also been proposed [1, 2].
Finally, the human body state collected by the camera is recognized in real time [3]. Although much has been achieved in research on behavior recognition algorithms, there are still deficiencies in research on basketball action behavior recognition algorithms.
To study basketball action recognition, this paper investigates dynamic recognition technology and a basketball action recognition algorithm, and establishes an RGB skin color model. The results show that dynamic recognition technology is conducive to the development of basketball action recognition algorithms.
2 Method
2.1 Dynamic Recognition Technology
(1) The concept of dynamic recognition technology. Dynamic recognition technology trains an AdaBoost classifier offline and compares inputs with training samples [4]. Applying the AdaBoost algorithm to behavior detection improves detection accuracy to a certain extent; its disadvantage is that, as the number of samples increases, classifier training takes longer, detection becomes slow, and detection accuracy depends strongly on the number of samples [5]. Dynamic recognition technology has been applied to wearable human-computer interaction systems. Such a system takes fingertips as feature matching points and uses binocular stereo vision to identify the spatial coordinates of the fingertips [6]. Experimental results show that the error between the spatial location obtained by the system's ranging algorithm and the actual location is very small; however, the ranging accuracy is only moderate, and the fingertip feature extraction and matching accuracy need to be improved. (2) Application of dynamic recognition technology. Dynamic recognition technology is widely used in sports training: by capturing the details of athletes' movements, it supports targeted training guidance and improves athletes' competitive level [7]. In medicine, gait analysis can be used to judge the rehabilitation of fracture patients and provide a reference for further treatment; it can also be used to monitor some bone and joint diseases, since comparison against normal gait allows leg lesions to be found early and treated in time [8]. In addition, motion analysis can serve as a basis for identification: by analysing human gait in video, evidence can be provided to help the police identify a suspect [9]. (3) The role of dynamic recognition technology. Research on human behavior recognition has three core parts: data preprocessing, extraction and description of human behavior features, and judgment of the behavior category. By feeding the sequence of video frames into a network model, the temporal and spatial feature information of human behavior is extracted, completing the recognition of human behavior [10]. Human behavior recognition is relevant to many fields: human behavior reflects not only body language but also psychological state. Through intelligent human-computer interaction, we can judge whether a user's gaze is dull, and also collect facial expression, body shape and other information, so that the computer can better understand all kinds of information in the external world [11]. In addition, the core of dynamic recognition technology is behavior recognition, which is different from the
traditional game operation relying on a mouse, keyboard and other external devices: dynamic recognition technology mainly uses human behavior for control, obtaining a video stream from a camera in order to recognize human behavior [12].

2.2 Basketball Action Behavior Recognition Algorithm
(1) Summary of basketball action behavior recognition. In complex scenes such as a basketball court, the foreground moving objects occupy a large part of the field of view, the players are close to each other and easily occluded, so the foreground detected by background subtraction sticks together. When an image processing algorithm based on jersey number recognition is used to identify a moving target, the target cannot be recognized if the player's back faces the camera. Given the particularity of basketball games, the above methods therefore cannot effectively identify multiple moving targets, and it is very difficult to calibrate the position of a single moving target. Moreover, the background of a basketball court is complex and the stands are close to the court, so spectators in the otherwise static background easily appear in the foreground. Many methods based on background subtraction cannot reliably extract the player targets, and the tracking effect is not ideal. (2) Saliency detection algorithm. The baseline algorithm needs the background image to be captured manually when there is no foreground moving target. Its principle is simple, but it has many problems and is time-consuming and laborious. In some scenes, such as a high-speed-entrance vehicle detection system, if no frame free of foreground moving objects is available and shadows and a cluttered background persist for a long time, moving object detection based on subsequent background subtraction cannot obtain an ideal foreground object.

2.3 RGB Skin Color Model
With this algorithm, the color histograms of the R, G and B channels are obtained and averaged to produce one group of data. Then, by analysing hand video images under different illumination and at different distances, further groups of data are obtained. The average value and variance of each channel over the groups give the range of the R, G and B channel values. The calculation is shown in formula (1):

R̄ = (1/n) · ΣRᵢ, i = 1, …, n   (1)

The channel values obtained from the many samples are treated as three-dimensional coordinates. After excluding some color regions with large deviation, these color values gather in a spherical region, so the constraint condition on the R, G and B channels is Eq. (2):

(R − r₁)² + (B − b₁)² + (G − g₁)² ≤ r²   (2)
Because the skin color model is mainly about color, only the value of the H channel is considered in the experiment. By adjusting the H-channel threshold and observing the change of the image, a suitable threshold range is obtained as Eq. (3):

7 ≤ H ≤ 29   (3)
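A minimal sketch of the colour-model steps in formulas (1)–(3), assuming OpenCV conventions: the per-channel mean of formula (1), the spherical constraint of formula (2), and the hue band of formula (3). Whether the 7–29 range refers to OpenCV's 0–179 hue scale is an assumption.

```python
import cv2
import numpy as np

def channel_means(samples_rgb):
    """Formula (1): per-channel mean of a set of sampled skin/marker pixels."""
    samples = np.asarray(samples_rgb, dtype=np.float64)   # shape (n, 3)
    return samples.mean(axis=0)                            # (R_bar, G_bar, B_bar)

def inside_sphere(pixel_rgb, centre_rgb, radius):
    """Formula (2): is the pixel inside the sphere around the model centre?"""
    diff = np.asarray(pixel_rgb, dtype=np.float64) - np.asarray(centre_rgb, dtype=np.float64)
    return float(np.dot(diff, diff)) <= radius ** 2

def hue_mask(img_bgr, h_low=7, h_high=29):
    """Formula (3): keep pixels whose H channel lies in [h_low, h_high]."""
    hsv = cv2.cvtColor(img_bgr, cv2.COLOR_BGR2HSV)
    h = hsv[:, :, 0]
    return ((h >= h_low) & (h <= h_high)).astype(np.uint8) * 255
```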
Suppose a hyperplane divides a set of samples into two classes. For a two-dimensional plane, the hyperplane is a straight line. The hyperplane equation can be written as Eq. (4):

wᵀx + b = 0   (4)
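For completeness, a toy example of such a separating hyperplane, trained on hypothetical two-dimensional action feature vectors with scikit-learn (an assumed dependency, not a tool named by the paper):

```python
import numpy as np
from sklearn.svm import LinearSVC

# Hypothetical 2-D feature vectors for two action classes (illustrative only).
X = np.array([[0.2, 1.1], [0.4, 0.9], [1.8, 0.2], [2.1, 0.4]])
y = np.array([0, 0, 1, 1])

clf = LinearSVC(C=1.0).fit(X, y)
w, b = clf.coef_[0], clf.intercept_[0]   # hyperplane: w^T x + b = 0
print("w =", w, "b =", b)
print("prediction for [1.0, 0.6]:", clf.predict([[1.0, 0.6]]))
```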
3 Experiment
3.1 Extraction of Experimental Objects
An open standard data set and the video capture module of the basketball behavior recognition system are selected to verify the algorithm. According to the different importance of each joint in action recognition, the model gives different weights to the joints of an action. A large number of experiments show that certain sets of joint points are related to one specific action while other sets are related to another. For example, playing basketball is mainly related to the elbow, wrist and knee joint points, while its relation to the head joint point is very weak; observation of the leg joint points is mainly used for judging and recognizing running.
3.2 Experimental Analysis
First, a colored fingertip marker needs to be worn. In order to extract the fingertip region from the image, the algorithm analyses the fingertip's color. Since the fingertip color is bright, the RGB color space is converted to HSV space as the color description space. In the window, the fingertip area is framed by a rectangle, and the histograms of hue, saturation and brightness are calculated; values within a certain range around the histogram averages are taken as thresholds to obtain the fingertip area, as sketched below. The candidate areas of human body parts are then predicted and a dense connectivity graph is generated, in which each candidate area is a node and the association between nodes is the edge weight. This turns recognition into an optimization problem: the same class represents the same target component, and each person acts as a separate class. The overall idea of the algorithm is to minimize the constraints on joint targets using integer linear programming and divide the initial candidate limb set into consistent sets corresponding to different human parts.
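The fingertip-extraction step mentioned above could be sketched as follows: a rectangle framing the coloured fingertip provides average hue, saturation and value statistics, and pixels within a band around these averages are kept. The band width is an assumed parameter.

```python
import cv2
import numpy as np

def fingertip_mask(img_bgr, rect, band=(10, 60, 60)):
    """rect = (x, y, w, h) framing the coloured fingertip; returns a binary mask."""
    hsv = cv2.cvtColor(img_bgr, cv2.COLOR_BGR2HSV)
    x, y, w, h = rect
    roi = hsv[y:y + h, x:x + w].reshape(-1, 3)
    mean = roi.mean(axis=0)                       # average H, S, V inside the frame
    lower = np.clip(mean - np.array(band), 0, 255).astype(np.uint8)
    upper = np.clip(mean + np.array(band), 0, 255).astype(np.uint8)
    return cv2.inRange(hsv, lower, upper)         # pixels near the average colour
```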
4 Discussion
4.1 Determination of Iteration Times
In order to examine the importance of the number of iterations in the system experiment, the number of iterations used for model training is varied: iteration counts of 10, 15, 20 and 25 are adopted, and the results are shown in Table 1.
Table 1. Performance results under different iterations

Iterations   UCF-101   HMDB-51
10           12.6%     23.6%
15           22.3%     13.6%
20           14.9%     16.5%
25           23.4%     16.7%
From the above, the proportion of the UCF-101 data set with 10 iterations is 12.6% and that of HMDB-51 is 23.6%; with 15 iterations, UCF-101 is 22.3% and HMDB-51 is 13.6%; with 20 iterations, UCF-101 is 14.9% and HMDB-51 is 16.5%; with 25 iterations, UCF-101 is 23.4% and HMDB-51 is 16.7%. The results are shown in Fig. 1.

Fig. 1. Performance results under different iterations
Thus, the UCF-101 data set reaches its largest proportion, 23.4%, at 25 iterations, and the HMDB-51 data set reaches 23.6% at 10 iterations. The results show that, overall, the accuracy improves as the number of iterations increases and then stabilizes, so the number of iterations is not increased further. At the same time, considering training time, the final number of iterations is set to 10.
4.2 Experimental Results
The clipping box size is set to a fixed value of 368 × 368, consistent with the input of the pose estimation network. The same operation is repeated for several other people to obtain action classification videos of several players in the game. In order to increase
the data set, we also flip the videos horizontally: because the game has two sides, mainly producing left and right images, horizontal flipping doubles the data and enlarges the data set. The numbers of videos finally obtained are shown in Table 2.

Table 2. Statistics of basketball action video collection quantity

         Pass the ball   Pitching   Catch the ball   Lay up   Total
Train    53              43         76               56       273
Test     21              21         56               34       132
From the table, the number of training pass-the-ball videos is 53 and the number of test videos is 21; the numbers of training and test pitching videos are 43 and 21; the numbers of training and test catch-the-ball videos are 76 and 56; and the number of training lay-up videos is 56. The results are shown in Fig. 2.

Fig. 2. Statistics of basketball action video collection quantity
To sum up, the number of test lay-up videos is 34, the total number of training videos is 273, and the total number of test videos is 132. The experimental results show that the tracking algorithm can track and locate the target in real time in scenes that are not overly complex, with a tracking speed of more than 100 FPS. However, when there is severe occlusion or people move too fast (that is, two consecutive video frames change too much), it will follow the wrong object.
5 Conclusion
With the continuous progress of computer technology, dynamic recognition technology has become indispensable in people's lives. This paper presents a new
method of ball-carrier detection and behavior recognition in basketball matches. Aiming at the cluttered background and the low resolution of player images in basketball game video, a covariance descriptor is used to fuse multiple visual features of the players; it is described as a point on a Riemannian manifold and mapped to the tangent space by a homeomorphic mapping. Ball-carrier detection and behavior recognition can then be completed simultaneously in the tangent space by a trained multi-class LogitBoost classifier.
References 1. Du, M.: Mobile payment recognition technology based on face detection algorithm. Concurr. Comput. Pract. Exp. 30(22), e4655.1–e4655.9 (2018) 2. Bu, X.: Human motion gesture recognition algorithm in video based on convolutional neural features of training images. IEEE Access PP(99), 1 (2020) 3. Li, J., Gu, D.: Research on basketball players’ action recognition based on interactive system and machine learning. J. Intell. Fuzzy Syst. 40(2), 2029–2039 (2021) 4. Xiao, Q., Song, R.: Action recognition based on hierarchical dynamic Bayesian network. Multimed. Tools Appl. 77(6), 6955–6968 (2017). https://doi.org/10.1007/s11042-017-4614-0 5. Zhang, Y.M., Chang, F.L., Liu, H.B.: Action recognition based on 3D skeleton. Tien Tzu Hsueh Pao/acta Electronica Sinica 45(4), 906–911 (2017) 6. Su, B.Y., Jiang, J., Tang, Q.F., et al.: Human dynamic action recognition based on functional data analysis. Zidonghua Xuebao/Acta Automatica Sinica 43(5), 866–876 (2017) 7. Fan, X., Hu, S., He, J.: A dynamic selection ensemble method for target recognition based on clustering and randomized reference classifier. Int. J. Mach. Learn. Cybern. 10(3), 515–525 (2017). https://doi.org/10.1007/s13042-017-0732-2 8. Al-Shargie, F., Tariq, U., Alex, M., et al.: Emotion recognition based on fusion of local cortical activations and dynamic functional networks connectivity: an EEG study. IEEE Access PP(99), 1 (2019) 9. Rekik, G., Khacharem, A., Belkhir, Y., et al.: The instructional benefits of dynamic visualizations in the acquisition of basketball tactical actions. J. Comput. Assist. Learn. 35(1), 74–81 (2019) 10. Bullock, G.S., Arnold, T.W., Plisky, P.J., et al.: Basketball players dynamic performance across competition levels. J. Strength Cond. Res. 32(12), 3528–3533 (2018) 11. Haddadin, S., Krieger, K., Albu-Schaffer, A., et al.: Exploiting elastic energy storage for “blind” cyclic manipulation: modeling, stability analysis, control, and experiments for dribbling. IEEE Trans. Robot. PP(1), 1–22 (2018) 12. Muhuri, S., Chakraborty, S., Setua, S.K.: Differentiate the game maker in any soccer match based on social network approach. IEEE Trans. Comput. Soc. Syst. PP(99), 1–10 (2020)
Simulation of Land Use System Performance Dynamics Based on System Dynamics Yunzhou Liu(B) School of Lanzhou, University of Technology, Lanzhou, Gansu, China
Abstract. Land use system performance is an important reflection of the current land use situation in China, and the optimization of land use structure is the basis and core of the overall land use plans prepared by governments at all levels. The degree of economic development, industrial restructuring and land use system performance are inextricably linked. Facing the accelerating growth of the urban population and urban land, the scarcity of land has become more prominent, and it is urgent to deepen the study of land resource utilization performance; continuing the study of land use system performance is therefore extremely important for urban development. A model based on system dynamics is developed to simulate land use performance dynamically. Keywords: Land use system · System dynamics · Land use performance · Dynamic simulation
1 Research Background
The performance of a land use system is the comprehensive manifestation of the benefits, effects and impacts of land use arising from different degrees and methods of utilization, given effective inputs to the land, and it reflects the effectiveness of the institutional arrangements for urban land use [1]. In the context of accelerating urbanization, along with economic development and industrial structure adjustment, the contradiction between the supply of urban land resources and socio-economic development is becoming more and more significant [2]. A reasonable simulation of urban land use performance can effectively alleviate the widespread crude utilization of construction land in existing cities and promote the efficient and reasonable utilization of urban land resources, urban construction and sustainable development [3]. The limited nature of land resources and the unlimited nature of social demand objectively require the optimal allocation of the regional land use structure [4]. Only a reasonable land use structure can maintain the virtuous cycle of the land ecosystem, maximize the efficiency and effectiveness of land use, make the use of land resources sustainable and promote the regional economy to the greatest extent [5]. Therefore,
optimization of land use structure has been the focus of government attention and a hot spot of research by experts and scholars [6]. The study of land use system performance has guiding significance for urban land planning. (1) At a time of rapid economic development and urgent adjustment of industrial structure, research on land use system performance is conducive to rational urban layout and more optimal structure, so that land can bring better economic, social and ecological benefits. (2) It is conducive to providing feedback on urban land use conditions, revealing the types and spatial differences of urban land use efficiency, and thus proposing ways and measures to further improve urban land use efficiency [7]. (3) Through an in-depth study of land use system performance, it is conducive to the establishment of a land use performance assessment system, the formulation of land use approaches and management measures at the macro level, the scale of cities and the optimization mode of land use. (4) It helps to transform the management mode of rough land development and management, and gradually transform it to the direction of intensive utilization, laying the foundation for determining a more reasonable development mode and providing a basis for predicting the objective demand for land in the process of expansion and modernization [8]. Figure 1 shows the research idea diagram of this paper.
Fig. 1. Research ideas for simulation of land use system performance dynamics
2 System Dynamics Modelling of Land Use System Performance
2.1 SD Flow Rate System Establishment
When conducting system dynamics modeling, a comprehensive analysis of land use system performance should be carried out to accurately locate the indicator factors that reflect the performance of the land use system [9]. The indicator factors mainly originate from the four major indicator layers of policy, society [10], economy and environment, and on this basis the indicators reflecting performance are analyzed layer by layer in combination with the land use classification [11]. Following the model-building steps, all variables are classified into four types according to their role in the model: flow levels (stocks), flow rates, auxiliary variables and constant variables [12]. Classifying all variables reasonably clarifies the scope each variable belongs to [13] and avoids duplication in the indicator representation, which guides the next step: drawing the cause-effect diagram [14].
2.2 Performance System Analysis
In the indicator analysis of the land use system performance system, the three major indicator layers of society, economy and environment are considered, so when building the SD model the whole system is divided into three sub-performance systems: the social sub-performance system, the economic sub-performance system and the environmental sub-performance system. The flow level and flow rate variables of the land use system performance model used in this paper are presented in Table 1.

Table 1. Land use system performance flow level and flow rate variables

Variable type         Variable name
Flow level            Total population, residential land, construction land, public administration and public service land, commercial land, industrial, mining and storage land, transportation land, social fixed asset investment, agricultural land, forest land, cropland, grassland, other agricultural land, wetland, unused land
Flow rate             Change in total population, change in residential land, change in construction land, change in public administration and public service land, change in commercial land, change in industrial, mining and storage land, change in transportation land, change in social fixed asset investment, change in agricultural land, change in forest land, change in arable land, change in grassland, change in wetland, change in unused land, change in other agricultural land
Auxiliary variables   Per capita income, GDP of primary, secondary and tertiary industries, investment in primary, secondary and tertiary industries, GDP, investment ratio of primary, secondary and tertiary industries, natural population growth rate
Constant variables    Birth rate, death rate, migration rate, rate of change in transportation land, rate of change in residential land, rate of change in public administration and public service land, rate of change in commercial land, rate of change in social fixed asset investment
2.3 Performance Structure Analysis
Building the Mathematical Model. The main model system equations are listed below.
(1) Total population = net population density × residential land, unit: 10⁴ people.
(2) Social fixed asset investment = INTEG (+ increase in social fixed asset investment, initial value), unit: 10⁸ yuan.
(3) GDP of primary industry = GDP of primary industry per unit of construction land × construction land/10000, unit: 10⁸ yuan.
(4) Primary industry investment = primary industry investment ratio × social fixed asset investment/100.
(5) Investment in science and technology innovation = proportion of investment in science and technology innovation × investment in tertiary industry/100.
(6) Total coal consumption = coal consumption per unit GDP × GDP/10000 + per capita coal consumption × total population × 10.
(7) Arable land = INTEG (+ increase of arable land − decrease of arable land, initial value), unit: hm².
(8) Growth of construction land = growth of industrial, mining and storage land + growth of residential land, unit: hm².
(9) Growth of transportation land = per capita growth of transportation land × total population, unit: hm².
(10) Construction land = INTEG (+ growth of construction land, initial value), unit: hm².
The causality diagram constituted by each indicator and stock in this paper is shown in Fig. 2 below, which shows the relationships between the stocks.
Fig. 2. Cause-and-effect diagram
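To illustrate how INTEG-style equations such as (7), (8) and (10) drive the simulation, the sketch below integrates a few stocks with a simple yearly Euler step, which is essentially how Vensim evaluates INTEG. All initial values and rates are placeholders, not the paper's calibrated parameters.

```python
# Minimal stock-flow (Vensim INTEG-style) integration sketch.
# Initial values and change rates below are placeholders, not calibrated data.
state = {"construction_land": 30000.0,   # hm^2
         "arable_land": 780000.0,        # hm^2
         "total_population": 270.0}      # 10^4 people

RATES = {"industrial_storage_growth": 50.0,    # hm^2 per year
         "residential_growth": 40.0,           # hm^2 per year
         "arable_increase": 300.0,             # hm^2 per year
         "arable_decrease": 900.0,             # hm^2 per year
         "natural_population_growth": -0.004}  # fraction per year

def step(state, rates):
    """One simulated year: stock(t+1) = stock(t) + net flow (Euler step)."""
    construction_growth = rates["industrial_storage_growth"] + rates["residential_growth"]
    return {
        "construction_land": state["construction_land"] + construction_growth,
        "arable_land": state["arable_land"] + rates["arable_increase"] - rates["arable_decrease"],
        "total_population": state["total_population"] * (1 + rates["natural_population_growth"]),
    }

for year in range(2018, 2023):
    state = step(state, RATES)
    print(year, {k: round(v, 2) for k, v in state.items()})
```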
Cause Tree and Usage Tree. Figures 3, 4, 5, 6, 7, 8, 9 and 10 below show the important cause trees and usage trees involved in the model: Fig. 3 the construction land stock tree and usage tree, Fig. 4 the agricultural land stock cause tree and usage tree, Fig. 5 the unused land stock cause tree and usage tree, Fig. 6 the total population stock cause tree and usage tree, Fig. 7 the GDP stock cause tree and usage tree, Fig. 8 the primary industry investment cause tree and usage tree, Fig. 9 the secondary industry investment cause tree and usage tree, and Fig. 10 the tertiary industry investment cause tree and usage tree.

Fig. 3. Construction land stock tree and usage tree

Fig. 4. Agricultural land stock cause tree and usage tree

Fig. 5. Unused land stock cause tree and usage tree

Fig. 6. Total population stock cause tree and usage tree

Fig. 7. GDP stock cause tree and usage tree

Fig. 8. Primary industry investment cause tree and usage tree

Fig. 9. Secondary industry investment cause tree and usage tree

Fig. 10. Tertiary industry investment cause tree and usage tree
3 Simulation of Land Use System Performance Dynamics 3.1 Analysis and Testing of Model Results After the system dynamics model is established, the model needs to be tested to determine the degree of conformity between the model and the real situation to ensure the realism
and validity of the model. Commonly used methods for testing system dynamics models include intuitive and operational tests and historical tests.
Visual and Operational Tests. The reasonableness of the model is tested with the equation check and the unit (dimensional) check functions built into the Vensim software. The test results show that the two sides of each equation are consistent and that a trial run of the model does not produce pathological results. Therefore, the simulation model constructed in this paper is reasonable and suitable for simulating the performance of the land use system, and the data obtained are credible.
Historical Test. Three years of data, 2015–2017, were fed into the model for an accuracy test; the simulated values, true historical values and errors of the SD model for 2017 are shown in Table 2 below.

Table 2. 2017 SD model history test table

Variable                                              Actual value   Simulated value   Deviation/%
Total population                                      275.73         278               0.82%
Construction land                                     31850          31790             −0.19%
Social fixed asset investment                         5841560        5840306           −0.02%
GDP                                                   4077.5         4077.5            0.00%
Agricultural land                                     1349221.53     1349220.83        0.00%
Transportation land                                   5385           5485              1.82%
Residential land                                      8186           8112              −0.91%
Land for public administration and public services   6378           6322              −0.89%
Commercial land                                       1215           1314              7.53%
Industrial storage land                               9520           9380              −1.49%
Other construction land                               1166           1177              0.93%
Garden land                                           1865.93        1800              −3.66%
Woodland                                              172997         172666            −0.19%
Grassland                                             396376         394000            −0.60%
Arable land                                           777523         783133            0.72%
Other agricultural land                               459.6          459.6             0.00%
Wetland                                               272576         270200            −0.88%
Unused land                                           594501         593841.4          −0.11%
The simulation history test table was analyzed and the simulation results showed that the relative error rate of the system model does not exceed 10%, which is within the
error allowance [15]. This indicates that the simulation results of the system dynamics model of land use system performance are reliable and meet the modeling requirements, and can be used to simulate the state of land use system performance as well as the trend of change, and can be used to conduct simulation experiments by adjusting key parameters.
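The historical test of Table 2 amounts to the relative-error check sketched below; dividing by the simulated value reproduces the table's deviation figures, and the 10% tolerance follows the text. The sample values are copied from Table 2.

```python
def deviation(actual, simulated):
    """Deviation (%) as reported in Table 2: (simulated - actual) / simulated * 100."""
    return (simulated - actual) / simulated * 100.0

# A few 2017 values from Table 2 as (actual, simulated) pairs.
checks = {"Total population": (275.73, 278),
          "Construction land": (31850, 31790),
          "Commercial land": (1215, 1314)}

for name, (actual, simulated) in checks.items():
    err = deviation(actual, simulated)
    print(f"{name}: {err:+.2f}% -> {'OK' if abs(err) <= 10 else 'exceeds tolerance'}")
```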
3.2 Dynamic Simulation
After the historical test, the values of each stock are simulated to obtain the simulated values for 2018–2022, and the land use system performance is simulated, as shown in Table 3. Under the influence of land use structure optimization, the areas of construction land, forest land, grassland, cropland, wetland and unused land are denoted y₁, y₂, y₃, y₄, y₅, y₆.
(1) The economic benefit objective function is

f(y) = Σ kᵢ yᵢ, i = 1, …, 6   (1)
where kᵢ is the output value per unit area of each type of land (million yuan/hm²), set here to its multi-year average, measured as k = 2.7924, 0.0019, 0.0074, 0.0018, 0.0011, 0. The simulated values are shown in Table 4 below. (2) The social benefit is taken as the simulated value of per capita income for 2018–2022, shown in Table 5. (3) The objective function of ecological benefits is

f(y) = Σ zᵢ yᵢ, i = 1, …, 3   (2)
In this paper, we take the data measured in the literature [16] and set the green equivalent of forest land to 1.0, that of grassland to 0.34, that of cropland to 0.29, and that of the other land types to 0. The simulated stock values are shown in Table 3 below. Note: the green equivalent refers to the 'green amount' with considerable ecological function; it is measured per ecosystem type. Land with a green equivalent mainly includes forest land, arable land and grassland. Wetland has landscape, atmospheric-regulation and air-purification functions, and thus carries an implied green equivalent, but this is difficult to quantify; urban land, industrial, mining and storage land, transportation land, other construction land and unused land are land types without green equivalents. The simulated values of the economic, social and ecological benefits of each stock in the model from 2018 to 2022 are shown in Tables 4, 5 and 6.
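A direct transcription of the two benefit functions, using the per-unit output values kᵢ quoted above and the green-equivalent coefficients zᵢ from the text; the 2018 areas in the example are taken from Table 3.

```python
# Output value per unit area (units as quoted in the text) for construction land,
# woodland, grassland, cropland, wetland, unused land.
K = [2.7924, 0.0019, 0.0074, 0.0018, 0.0011, 0.0]

# Green-equivalent coefficients for woodland, grassland, cropland.
Z = [1.0, 0.34, 0.29]

def economic_benefit(areas):
    """Formula (1) of Sect. 3.2: f(y) = sum_i k_i * y_i over the six land types."""
    return sum(k * y for k, y in zip(K, areas))

def ecological_benefit(areas):
    """Formula (2) of Sect. 3.2: f(y) = sum_i z_i * y_i over the three green land types."""
    return sum(z * y for z, y in zip(Z, areas))

# Example with the 2018 areas from Table 3.
print(economic_benefit([3054310, 172449, 391846, 782446, 268503, 593572]))  # ~8,533,786 (cf. Table 4)
print(ecological_benefit([172449, 391846, 782446]))                         # ~532,586 (cf. Table 6)
```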
Table 3. Simulated 2018–2022 values for each stock

Variable                                              2018      2019      2020      2021      2022
Total population                                      269.095   265.019   261.006   257.053   253.161
Construction land                                     3054310   3054430   3054550   3054690   3054840
Social fixed asset investment                         5545570   5214850   4903860   4611410   4336410
GDP                                                   2865.14   2619.89   2395.62   2190.56   2003.05
Agricultural land                                     1349240   1348590   1347980   1347410   1346900
Transportation land                                   6014.55   6145.67   6279.65   6416.54   6556.42
Residential land                                      7828.39   7794.73   7761.21   7727.84   7694.61
Land for public administration and public services   5848.55   5778.36   5709.02   5640.51   5572.83
Commercial land                                       1448.29   1519.25   1593.7    1671.79   1753.71
Industrial storage land                               8140.15   8188.17   8236.48   8285.08   8333.96
Other construction land                               1036.62   1009.25   982.606   956.665   931.409
Garden land                                           1768.66   1747.63   1726.85   1706.31   1686.01
Woodland                                              172449    172338    172227    172116    172006
Grassland                                             391846    390922    390000    389080    388162
Arable land                                           782446    782689    782933    783177    783420
Other agricultural land                               733.222   894.358   1090.91   1330.65   1623.08
Wetland                                               268503    267714    266927    266143    265361
Unused land                                           593572    593572    593504    593435    593366
Table 4. Simulation of the economic benefits of each stock from 2018–2022 (Unit: million yuan)

Variable            2018         2019         2020         2021         2022
Construction land   8528855.24   8529190.33   8529525.42   8529916.35   8530335.21
Woodland            327.65       327.44       327.23       327.02       326.81
Grassland           2899.66      2892.82      2886         2879.19      2872.39
Cropland            1408.40      1408.84      1409.27      1409.71      1410.15
Wetland             295.35       294.49       293.62       292.76       291.90
Economic benefits   8533786.31   8534113.92   8534441.55   8534825.04   8535236.47
Table 5. Simulation of the social benefits of each stock from 2018–2022 (Unit: billion yuan per 10,000 people)

Variable            2018      2019      2020      2021      2022
Per capita income   12.9938   12.7043   12.4211   12.1443   11.8737
Social benefits     12.9938   12.7043   12.4211   12.1443   11.8737
Table 6. Simulation of the ecological benefits of each stock from 2018–2022 (Units: green equivalent)

Variable              2018        2019        2020        2021        2022
Woodland              172449      172338      172227      172116      172006
Grassland             133227.64   132913.48   132600      132287.2    131975.08
Cropland              226909.34   226979.81   227050.57   227121.33   227191.8
Ecological benefits   532585.98   532231.29   531877.57   531524.53   531172.88
3.3 Simulation Results Analysis
The simulation results show that from 2018 to 2022 the population declines, the grassland and wetland areas decrease, and the economic benefits rise, while the ecological and social benefits both decline. The population decline is mainly due to the increasing migration rate, and the decrease of agricultural land such as grassland and wetland mainly results from the continuous expansion of construction land, which is also why the economic benefits rise over the next five years. The simulated values of economic, social and ecological benefits from 2018 to 2022 are shown in Figs. 11, 12 and 13. People usually pursue economic benefits at the expense of ecology, and the simulated values in this paper clearly illustrate this point. Based on a comprehensive consideration of economic, social and ecological benefits, the land use system should be actively adjusted to enhance land vitality and add momentum to urban development. According to the requirements of sustainable socio-economic development and local natural, economic and social conditions, the overall strategic layout and arrangement of land development, utilization, management and protection in space and time should be made in an integrated manner, and the structure and layout of land use should be reasonably adjusted, taking all the land in the region as the object, from the perspective of overall and long-term interests, with utilization as the centre and the development, utilization, improvement and protection of land as the content. The purpose is to strengthen the macro control and planned management of land use, rationalize the use of land resources, and promote the coordinated development of the national economy.
Fig. 11. Economic benefit change chart 2018–2022
Fig. 12. Social benefit change chart 2018–2022
Fig. 13. Ecological benefit change chart 2018–2022
4 Conclusion
Across the four chapters of this paper, the requirements of the research purpose are essentially fulfilled. The results of the land use system performance simulation basically match the actual situation, indicating that the simulation process rests on a scientific and reasonable simulation index system,
applies a practical evaluation method, and can serve as a reference for land use system performance simulation research on cities of the same type. The following conclusions are obtained. (1) This paper provides a reasonable classification of the land use system, covering four main types of land: construction land, agricultural land, wetland and unused land. Construction land includes residential land, commercial land, transportation land, public administration and public service land, industrial, mining and storage land and other construction land, while agricultural land includes forest land, cropland, grassland, garden land and other agricultural land, ensuring that all land types are covered and the land use system is comprehensively studied. (2) The performance of the land use system is decomposed into three major sub-performance systems and simulated and analyzed from the economic, social and environmental aspects; the economic, social and ecological sub-performance systems are dynamically simulated from 2018 to 2022 on the basis of real model data, allowing a reasonable analysis of land use system performance. (3) According to the simulation results, by 2022 the economic, social and ecological benefits are 8,535,236,479,000 yuan, 1,187,370,000 yuan per person and 531,172.88 green equivalents, respectively; during 2018–2022 the economic benefits of the land use system increase significantly while the social and ecological benefits decrease. (4) Analysing the simulated values of economic, ecological and social performance obtained from the dynamic simulation of the land use system performance model, it is not difficult to see that even a loosely managed land use system can bring more benefits to the city through reasonable land use planning. The current pace of urban population change and urban economic development cannot meet people's growing demand for a better life, so the industrial structure should be adjusted as soon as possible, the old path of resource-depleted cities should not be followed, and the vitality of cities should be enhanced to sustain the basic requirements of urban development.
References 1. Liu, L.: Analysis of problems and countermeasures of agricultural development. Heilongjiang Sci. Technol. Inf. 31 (2008) 2. Xie, L.: Research on optimization of land use structure. Harbin Inst. Technol. 3 (2012) 3. Wu, Y.: Urban land use performance evaluation and obstacle diagnosis based on improved TOPSIS model. Soil Water Conserv. Res. 85 (2015) 4. Zhang, J.R., Wang, Z.D., Tang, L., et al.: Comprehensive analysis of land use planning based on system dynamics–Wuxi city as an example. China Manag. Sci. (3), 1–8 (2016) 5. Wu, Y.: Evaluation of urban land use performance. Northeast Agricultural University, Master of Science in Management, p. 3 (2015) 6. Lyle, J.: Design for Human Ecosystems. Van Nostrand Reinhold Company, New York (1985) 7. United Nations Conference on Environment Development (UNED): Agenda 21. Report of the United Nations Conference on Environment and Development, July 1992
8. An International Framework for Evaluating Sustainable Land Management. World Soil Resources Report 73, Rome 9. James Brown, H., Ding, C.: International experience and lessons in urban land management. Foreign Urban Plan. 20(1), 21–23 (2005) 10. Dang, G.: Seeking the performance of property rights arrangement for rural state-owned land. China Rural Econ. (1), 50–54 (1996) 11. Li, Z.: A comprehensive evaluation method for urban land use performance. Urban Plan. (08), 62 (2000) 12. Aruhan, Li, B.: Analysis and evaluation of the current situation of land use in Ikezhao League. Arid Zone Res. Environ. 13(Suppl.), 62–65 (1999) 13. Jiang, H., Ma, H., Tang, J.: Study on the current status of land use in the Zhaotong area. J. Southwest For. Coll. 19(2), 101–104 (1999) 14. Zhou, F., Pu, L.J., Peng, T.Z.: Land use change and its performance analysis in Su-Xi-Chang region. J. Nat. Resour. 21(3), 392–398 (2006) 15. Wang, Q.: System Dynamics (2009 Revised Edition). Shanghai University of Finance and Economics Press, Shanghai (2009) 16. Zang, L.: Optimization of Land Use Structure in Western Jilin. Jilin University, Changchun (2010)
Problem Student Prediction Model Based on Convolution Neural Network and Least Squares Support Vector Machine Yan Zhang1 and Ping Zhong2(B) 1 Graduate School, China University of Geosciences, Wuhan 430074, Hubei, China 2 College of Marine Science and Technology, China
University of Geosciences, Wuhan 430074, Hubei, China
Abstract. Problem students are an unavoidable special group in the education system. How to educate and guide problem students is a key link that affects the harmonious and stable operation of students themselves, families, schools and even society. Early judgment and early intervention can effectively reduce the occurrence of accidents, but the traditional way of student management does not pay enough attention to the psychological status of problem students. Commonly, the students are educated after the problems appear, and the psychological intervention of problem students is not timely. Based on the data of learning behavior, life behavior and psychological behavior in the campus big data environment, this paper designs an early warning model of problem students based on convolution neural network and least squares support vector machine, which makes up for the lack of current student work and has strong practicability and operability. Keywords: Big data · Alert · CNN · LSSVM
1 Introduction
With the expansion of college enrollment, increasing social competition, a severe employment situation and many other factors, college students face growing psychological problems and employment pressure, tense interpersonal relationships, weak social practice ability and difficulty adapting to independent life. They often deviate from the normal development trajectory in thought and behavior, personal misbehavior occurs repeatedly, and some have even gone to extremes [1]. Judging from the increasing number of cases reported in the news, problem students are now a common phenomenon in colleges [2]. According to statistics, 20%–25% of college students have difficulty adapting, 3%–5% behave abnormally, and 0.5%–1% show abnormality [3]. These problems not only put great pressure on higher education but also pose new challenges for educators [4]. However, the traditional style of student management does not pay enough attention to the psychological status of these students, so psychological intervention is not timely [5]. Some students who have been abnormal for a long time do not realize that these are psychological problems and therefore do not ask for psychological help [6]. At the same time, 40.5% of students perceived that they had abnormal problems and asked for help, but underestimated the severity of their problems, which would eventually harm their growth, development and even life [7]. In modern higher education management, the level of informatization has been rising year by year; with the widespread use of campus cards and the accumulation of data from major business systems over the years, a campus big data environment has formed [8]. These data are typically large-scale, multi-type, high-velocity and of low value density, and traditional analytical means cannot handle them well. Using big data analysis methods to explore the correlation between students' study, life and psychology, dig out abnormal student data, and make early judgments and early warnings for problem students [9] is critical to reducing accidental injury due to psychological problems [10].
2 Data Collection
At present, informatization is an important component of campus construction. With the pursuit of digital and smart campuses and the growing number of application systems, data accumulation has expanded rapidly and has initially formed a typical campus big data environment. Students' behavior data, such as learning behavior, life behavior and psychological behavior, are saved by the corresponding systems.
2.1 Learning Behavior Data
Learning data focus on students' study; the analyzed data include course selection system data, examination system data, and so on. Students' learning behavior data are generally collected with the course as the unit and, following the general process of traditional teaching, can be divided into four categories: pre-class preparation, in-class learning, after-class discussion and course assessment. From the perspective of behavioral science theory and constructivist learning theory, the behavior of students in each link is closely related. The pre-class preparation link mainly reflects students' attention to the course; in general, when students pay attention to a course, they show concern about its opening time and the notice of its opening. In-class learning is the process of studying the course and its resources and mainly reflects the student's learning input, expressed in behavioral indicators such as learning time, number of learning sessions, cumulative learning time and cumulative number of learning sessions. The after-class discussion link takes forum access, posting and replying as its indicators, and the course assessment link mainly considers the completion of assignments.
2.2 Living Behavior Data
(1) Daily routine. This theme focuses on students' daily life, including access-control data, bath time and Internet time. It analyzes details such as getting up, returning to the dormitory, bathing and surfing the Internet, and depicts the student's life curve.
(2) Consumption habits. This theme focuses on the flow of students' funds. The analyzed data include canteen consumption data, convenience store consumption data and electricity fee information, mainly analyzing where students consume, drawing a hot-spot map of student life, and correlating it with students' spare-time life.
(3) Spare-time life. This theme focuses on students' extracurricular life. The analyzed data include second-classroom records, campus card consumption locations and the student information table. By correlating consumption locations with the surrounding facilities, students' spare-time hobbies can be judged; analysis of second-classroom data reveals students' interests and helps improve the quality of teaching.
2.3 Thought Behavior Data
Students' thought behavior is examined mainly from two aspects: moral behavior and psychological dynamics.
(1) Moral behavior data. This focuses on students' behavior in school; the analyzed data include library credit points, teacher evaluations, reward information, social practice activities and behavior in the library, from which students' morality is analyzed.
(2) Psychological data. This focuses on analyzing students' psychological problems; the analyzed data include canteen consumption data, complaint mailbox information and counselor feedback. For example, students in a period of psychological fluctuation tend to eat irregularly, which can be detected early from canteen consumption data; if a student's campus card records show long-term consumption without classmates, attention can be paid to possible psychological isolation. Combined with counselor feedback, this helps safeguard the development of students' mental health.
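To make the idea concrete, the sketch below shows one way such behavioral indicators could be derived from raw campus-card logs. The column names, values and use of pandas are illustrative assumptions, not taken from the systems described above.

```python
import pandas as pd

# Hypothetical campus-card consumption log: one row per transaction.
canteen = pd.DataFrame({
    "student_id": [1, 1, 1, 2, 2],
    "timestamp": pd.to_datetime([
        "2021-03-01 07:10", "2021-03-01 11:45", "2021-03-02 18:05",
        "2021-03-01 12:30", "2021-03-03 12:40"]),
    "amount": [6.5, 12.0, 10.0, 15.0, 14.5],
})

# Per-student behavioral indicators of the kind described above:
# activity count, average spend, and meal-time regularity.
canteen["hour"] = canteen["timestamp"].dt.hour
features = canteen.groupby("student_id").agg(
    meal_count=("amount", "size"),
    avg_spend=("amount", "mean"),
    meal_hour_std=("hour", "std"),
)
print(features)
```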
3 SVM Early Warning Model
In classification problems, especially high-dimensional ones, if the available training samples are few and cannot meet the basic requirements of the classifier, it is difficult for traditional classifiers such as Bayes, neural networks or linear discriminant analysis to achieve good results, so a more appropriate approach is needed. In recent years, the support vector machine (SVM), based on statistical learning theory, has been an active algorithm in pattern recognition and machine learning. Using the support vector algorithm avoids many problems of traditional learning methods.
3.1 SVM Fundamentals
The basic idea of the support vector machine is as follows. First, in the linearly separable case, the optimal classification hyperplane between two classes of samples is found in the original space. In the linearly inseparable case, slack variables are added to the analysis, and samples in the low-dimensional input space are mapped to a high-dimensional feature space by a nonlinear mapping so that they become linearly separable; this makes it possible to analyze the nonlinearity of the samples with a linear algorithm in the high-dimensional feature space, where the optimal classification hyperplane is found. Second, the optimal classification hyperplane is built in the feature space using the principle of structural risk minimization, so that the classifier is globally optimized and the expected risk over the whole sample space is bounded with a certain probability. For common nonlinear problems, the support vector machine adopts the kernel function idea; introducing a kernel function effectively avoids the "curse of dimensionality" of traditional classification and is also the key to extending the optimal classification surface from the linearly separable case to the nonlinearly separable support vector machine. As shown in Fig. 1, the process can be described simply: for the raw data, features are first extracted and loaded into the input space; if the problem is not linearly separable there, a kernel function is introduced to map the samples to a high-dimensional feature space, in which the sample features become linearly separable.
3.2 Data Preparation
Data are collected from various systems such as enrollment, registration, education administration, campus card consumption, student information and attendance results; these sources cover students' family economic situation, social conditions, consumption patterns, consumption amounts, academic performance and so on. After these data are obtained, the student records scattered in different tables are not consolidated, and most of the data are raw logs that are not meaningful in their original form and cannot be applied directly to the model. The data therefore need to be processed into usable variables; inconsistent data must be corrected and missing values removed to ensure the rationality and integrity of the data and make the model results more accurate. In the actual data processing, we use Spark, Python and other big data processing tools to organize the raw data into usable information. The data presented in this paper are the extracted results and basically meet the requirements of stand-alone processing. The following illustrates the process for some of the important variables (Fig. 1):
(Figure: initial data → load → input space → kernel map → higher-dimensional eigenspace)
Fig. 1. Support vector machine classification process diagram
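As a small illustration of the kernel-mapping idea in Sect. 3.1 and Fig. 1 (not part of the authors' experiments), the sketch below fits an RBF-kernel SVM on a toy data set that is not linearly separable in the input space; scikit-learn is assumed here purely for convenience.

```python
from sklearn.datasets import make_circles
from sklearn.model_selection import train_test_split
from sklearn.svm import SVC

# A toy data set that is not linearly separable in the input space.
X, y = make_circles(n_samples=300, factor=0.4, noise=0.1, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

# The RBF kernel plays the role of the nonlinear mapping to a
# higher-dimensional feature space where a separating hyperplane exists.
clf = SVC(kernel="rbf", C=1.0, gamma="scale").fit(X_train, y_train)
print("test accuracy:", clf.score(X_test, y_test))
```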
3.3 Preliminary Results
Student data related to the experiment are extracted: 32 dimensions covering learning behavior, life behavior and thought behavior. Students of all four grades in a college are selected as training samples, and each 32-dimensional training vector is labeled according to whether the student is a problem student (yes = 1, no = 0); machine learning is then performed with the SVM algorithm. First, the features of the training student data are prepared and labeled, and a classification model is obtained through sample training in order to judge behavior during the test phase. During training, a positive sample is a non-problem student, indicating no major academic or ideological problems; a negative sample is a problem student who needs attention, indicating certain pressure in learning and thought, that is, abnormal psychological behavior. The optimization problem solved by the linear classifier based on the SVM algorithm is

$$ \max_{w,b}\ \frac{1}{\|w\|} \quad \text{s.t.}\quad y_i\left(w^{T}x_i+b\right)\ge 1,\ \ i=1,\ldots,l \qquad (1) $$

The coefficients and intercept of the obtained hyperplane are shown in Table 1.
Table 1. Hyperplane coefficients and intercepts
X1 = 8.5295e−10    X12 = 4.1337e−10    X23 = 8.1531e−10
X2 = 4.8642e−10    X13 = 4.7402e−8     X24 = 6.6605e−10
X3 = 8.8272e−9     X14 = 1.7106e−10    X25 = 7.1652e−10
X4 = 3.5827e−10    X15 = 3.7797e−9     X26 = 3.5226e−10
X5 = 4.7069e−10    X16 = 3.8179e−10    X27 = 5.9060e−9
X6 = 4.8353e−10    X17 = 3.8784e−10    X28 = 4.4821e−10
X7 = 4.8591e−9     X18 = 1.5142e−10    X29 = 0.8073e−10
X8 = 1.9522e−10    X19 = 7.3495e−10    X30 = 8.1345e−9
X9 = 6.6734e−10    X20 = 7.5711e−8     X31 = 0.4368e−10
X10 = 6.7389e−8    X21 = 7.0284e−10    X32 = 2.9409e−10
X11 = 4.2653e−10   X22 = 6.8424e−10    Intercept = 0.93876
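For readers who want to reproduce this kind of output, the sketch below shows how a linear SVM's hyperplane coefficients and intercept (the quantities listed in Table 1) can be obtained; the synthetic data, labels and scikit-learn usage are illustrative assumptions, not the authors' actual data or pipeline.

```python
import numpy as np
from sklearn.svm import SVC

rng = np.random.default_rng(0)

# Stand-in for the 32-dimensional student feature matrix and the
# problem-student label (1 = problem student, 0 = normal).
X = rng.normal(size=(500, 32))
y = (X[:, 0] + 0.5 * X[:, 5] + rng.normal(scale=0.5, size=500) > 0).astype(int)

# Linear SVM, analogous to the maximum-margin classifier of Eq. (1).
clf = SVC(kernel="linear", C=1.0).fit(X, y)

# Hyperplane coefficients (one per feature X1..X32) and intercept,
# i.e. the kind of quantities reported in Table 1.
print("coefficients:", clf.coef_.ravel())
print("intercept:", clf.intercept_[0])
print("training accuracy:", clf.score(X, y))
```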
Then the test data are imported and classified with the trained SVM hyperplane. During the test, the test students' data are entered into the trained classifier to determine whether a student is abnormal based on the training results; only the statistical student data need to be entered, and the system automatically determines whether the student is abnormal. To evaluate the classification results, an evaluation index is introduced, namely the correct recognition rate. To test the classification accuracy of the classifier, student data different from the training samples are taken as test samples, entered into the previously trained classifier, and the classifier output is compared with the real situation to determine whether the classification is correct. Studies of problem-student judgment using an SVM classifier are rare, perhaps because student data have little commercial value, but such judgment is of great significance to student management staff: if problem students can be predicted early from known data, staff may gain more time to help students solve their problems and give them more care. The errors in the results are attributed to the following reasons: (1) the collected data are not detailed enough and are partly missing, which reduces the accuracy of each student's profile; (2) some special cases affect the classification results, which is more obvious when the amount of data is small.
3.4 Model Improvements
3.4.1 Convolutional Neural Network (CNN)
The convolutional neural network is a deep learning method widely used in image recognition, classification, medical diagnosis, target detection and other fields. The structure of a typical convolutional neural network is similar to that of an ordinary neural network, consisting of layers of neurons connected in a certain structure: an input layer, convolution layers, subsampling (pooling) layers, a fully connected layer and an output layer. In the convolution layer, a learnable convolution kernel is convolved with the feature maps of the previous layer to obtain the output feature maps. In the subsampling layer, the input feature map is divided into multiple non-overlapping n × n blocks, and the sum, average or maximum of the pixels in each block is calculated according to actual requirements, so that the output map is reduced n times in both dimensions. The convolution and subsampling layers embody the ideas of local connection and weight sharing. The fully connected layer pulls all the feature maps into a one-dimensional feature vector as its input, and the output layer outputs the classification result. The typical convolutional neural network structure is shown in Fig. 2.
Fig. 2. Convolution neural network structure
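A minimal sketch of the layer types just described (convolution, subsampling/pooling, fully connected), written in PyTorch as an assumed framework; the input shape, channel counts and the idea of reshaping a 32-dimensional student record into an 8 × 4 matrix are illustrative choices, not specified by the paper.

```python
import torch
import torch.nn as nn

class SimpleCNN(nn.Module):
    """Input -> convolution -> subsampling (pooling) -> fully connected -> output,
    mirroring the layer types described in Sect. 3.4.1."""
    def __init__(self, n_classes: int = 2):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(1, 8, kernel_size=3, padding=1),  # convolution layer
            nn.ReLU(),
            nn.MaxPool2d(2),                            # subsampling (pooling) layer
        )
        self.classifier = nn.Linear(8 * 4 * 2, n_classes)  # fully connected layer

    def forward(self, x):
        x = self.features(x)
        x = torch.flatten(x, 1)  # pull feature maps into a 1-D vector
        return self.classifier(x)

# A batch of 32-dimensional student records reshaped into 1-channel 8x4 "images".
x = torch.randn(16, 1, 8, 4)
print(SimpleCNN()(x).shape)  # -> torch.Size([16, 2])
```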
Compared with a general neural network, the convolutional neural network has the following advantages: (1) its unique network structure allows it to process the original input image directly, avoiding complex preprocessing; (2) implicit feature learning avoids explicit feature extraction; (3) it has fewer parameters, a simple structure, and can be trained in parallel. However, the convolutional neural network also has the following problems: (1) its convergence speed is slow; (2) it needs huge computing resources; (3) a large number of samples is required, and the classification effect is not ideal with small samples.
3.4.2 CNN-LSSVM Model
To address some of the above problems and achieve more accurate classification, this paper proposes a classification method based on CNN-LSSVM, which combines the advantages of the convolutional neural network and the least squares support vector machine. The convolutional neural network as a feature extractor avoids explicit manual feature extraction by learning implicitly from the training samples; the original input can be fed directly into the network, avoiding the complicated preprocessing steps of traditional feature extraction algorithms. Compared with the traditional support vector machine, the least squares support vector machine introduces the concept of margin when constructing the optimal decision function and uses the kernel function in the original space to replace the complex operations in the high-dimensional feature space. In the optimization, the loss function is chosen to tolerate erroneous data to some extent; in this case the optimization problem is

$$ \min J(w,\xi) = \frac{1}{2} w^{T} w + c \sum_{i=1}^{l} \xi_i^{2} \quad \text{s.t.}\quad y_i\left(\varphi(x_i)\cdot w + b\right) \ge 1-\xi_i,\ \ i=1,\ldots,l \qquad (2) $$
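For intuition about how an LSSVM is actually fitted, here is a small NumPy sketch of the standard Suykens formulation, in which training reduces to solving one linear (KKT) system rather than a quadratic program. It assumes equality constraints and an RBF kernel, so it may differ in detail from Eq. (2) and from the authors' implementation.

```python
import numpy as np

def lssvm_train(X, y, gamma=10.0, sigma=1.0):
    """Least squares SVM classifier (standard Suykens formulation):
    training amounts to solving one KKT linear system."""
    n = X.shape[0]
    d2 = ((X[:, None, :] - X[None, :, :]) ** 2).sum(-1)
    K = np.exp(-d2 / (2 * sigma ** 2))          # RBF kernel matrix
    omega = np.outer(y, y) * K
    # KKT system:  [0   y^T       ] [b]     [0]
    #              [y   Omega+I/g ] [alpha] = [1]
    A = np.zeros((n + 1, n + 1))
    A[0, 1:] = y
    A[1:, 0] = y
    A[1:, 1:] = omega + np.eye(n) / gamma
    rhs = np.concatenate(([0.0], np.ones(n)))
    sol = np.linalg.solve(A, rhs)
    return sol[0], sol[1:]                      # b, alpha

def lssvm_predict(X_train, y_train, b, alpha, X_new, sigma=1.0):
    d2 = ((X_new[:, None, :] - X_train[None, :, :]) ** 2).sum(-1)
    K = np.exp(-d2 / (2 * sigma ** 2))
    return np.sign(K @ (alpha * y_train) + b)

# Tiny demo with labels in {-1, +1}.
rng = np.random.default_rng(0)
X = rng.normal(size=(40, 2))
y = np.where(X[:, 0] + X[:, 1] > 0, 1.0, -1.0)
b, alpha = lssvm_train(X, y)
print("training accuracy:", (lssvm_predict(X, y, b, alpha, X) == y).mean())
```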
The CNN-LSSVM model combines the convolutional neural network and the least squares support vector machine into a new network structure for classification. The structure of the CNN-LSSVM model is shown in Fig. 3. The collected student data are arranged into a matrix, which is used as the input of the convolutional neural network for feature extraction; the extracted features are then input into the LSSVM to perform classification. The algorithm flow is shown in Fig. 4, and the classification results are given in Table 2.
(Figure: input → convolution 1 → subsampling 1 → convolution 2 → subsampling 2 → LSSVM1 … LSSVMn → sigmoid → output)
Fig. 3. Structure of CNN-LSSVM model
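The sketch below mirrors this pipeline in code: a small CNN produces a feature vector for each student record, and a kernel SVM is then fitted on those features. PyTorch, scikit-learn's SVC (used here instead of an LSSVM solver for brevity), the 8 × 4 reshaping and the synthetic labels are all assumptions for illustration; in the actual method the CNN would first be trained before its features are reused.

```python
import numpy as np
import torch
import torch.nn as nn
from sklearn.svm import SVC

class FeatureExtractor(nn.Module):
    """CNN front end: convolution + subsampling + fully connected feature layer."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(1, 8, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Flatten(),
            nn.Linear(8 * 4 * 2, 16), nn.ReLU(),
        )

    def forward(self, x):
        return self.net(x)

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 32)).astype(np.float32)   # 32-dim student records
y = (X[:, 0] + X[:, 3] > 0).astype(int)             # synthetic labels

extractor = FeatureExtractor().eval()                # would normally be trained first
with torch.no_grad():
    feats = extractor(torch.from_numpy(X).reshape(-1, 1, 8, 4)).numpy()

clf = SVC(kernel="rbf").fit(feats, y)                # SVM classifier on CNN features
print("training accuracy on CNN features:", clf.score(feats, y))
```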
Inputting the collected data into the model gives the results in Table 2. From these results it can be concluded that this classification method is a considerable improvement over the ordinary linear SVM method; in particular, the classification accuracy for senior students is significantly improved. After classifying students with the two methods above, we found that the second method, based on the least squares SVM, has the better classification effect and can make a preliminary judgment on students, but the accuracy for lower-grade students still needs to be improved. The reason for the lower accuracy may be that the amount of learning and behavioral data for lower-grade students in the test data is not rich enough, and that with little test data the accuracy is overly affected by individual students. The next stage of work is to find or design a suitable kernel function and to consider whether a nonlinear SVM classification algorithm can classify more accurately.
(Flow chart: start → design and establish the CNN → input training data → extract feature vectors → input feature vectors into the LSSVM → input test data → output test results → end)
Fig. 4. Algorithm flow chart
Table 2. Recognition rate after improved algorithm

                                               Freshman   Sophomore   Junior   Senior
Students have problems and predict problems    74.8%      69.2%       77.3%    58.7%
Normal students and predict normal             81.5%      75.5%       80.5%    84.8%
4 Conclusion
Early judgment and early warning for university students are very important for reducing accidental injury caused by psychological problems. Traditional indicators for judging problem students are not objective and often lag; intervention usually comes too late, leading to greater costs and losses and seriously weakening its effect. With the development and application of big data technology, accurate, real, timely and effective big data samples will bring new changes to early warning and intervention. In the big data environment, colleges should take poor students, freshmen, graduates, students experiencing a breakup and other groups or individuals with poor interpersonal relationships and abnormal behavior as key early warning objects, and establish an early warning index system with monitoring functions, covering individual development status, social environment, interpersonal communication, negative emotion and so on according to the situation of each warning object. With the school information platform as the carrier, an early warning information subsystem, an early warning analysis subsystem and an early warning signal subsystem should then be set up. Combined with the campus ID card system, the dormitory access system, the library lending system, the network access information system and other data related to college students' mental health, the evaluation model can dynamically grasp students' psychological condition, behavioral activity and psychological awareness, and accurately predict the possibility of a crisis so as to eliminate it. This paper designs a student early warning mechanism from the three aspects of learning, behavior and psychology, carries out comprehensive diagnostic warning for problem students, and fills a gap in early warning for student work, with strong practicability and operability. There are, however, some shortcomings, mainly in two aspects. First, the data collection part is deficient: identifying students requires collecting a large amount of specific data, and the data collection standards set in this study may miss some data items, because the records of the current information systems are imperfect and some items considered in the design of the mechanism may be omitted. Second, the design and implementation of a student early warning mechanism is a systematic project involving many aspects. To improve the prediction effect and quality on the basis of this research, it is important to build a big data analysis system of student behavior, analyze and summarize students' behavior and characteristics, and provide constructive references for the relevant departments so as to guide the healthy growth of students.
References 1. Qi, W.: Research on the innovative mechanism of mental health education for college students. Sci. Educ. Cult. Collect. (Late Period) (05), 155–158 (2021) 2. Zhu, W.: Analysis of the causes of mental health problems of impoverished college students in tourism universities. Contemp. Tour. 19(14), 78–79 (2021) 3. Li, L.: Problems and solutions of mental health education for college students in the new era. Psychol. Mon. 16(09), 213–214 (2021) 4. Su, S., Jiang, L.: The causes of college students’ psychological problems and campus prevention. Family Sci. Technol. (05), 63–64 (2021) 5. Zhao, J.: Based on the characteristics of college students’ psychological problems and the analysis of mental health education countermeasures. Chin. J. Multimed. Netw. Teach. (First Issue) (05), 245–248 (2021) 6. Yin, F.: Exploration of mental health education for college counselors. J. Xinzhou Normal Univ. 37(02), 108–110+123 (2021) 7. Gao, Y.: Research on college students’ employment psychological problems and countermeasures under the background of employment pressure. Public Stand. (08), 93–95 (2021) 8. Sun, J., Lu, X.: The application of “problem-leading” teaching in college students’ mental health education curriculum. For. Teach. (04), 95–99 (2021) 9. Wang, T.: Research on the mental health of college students based on the construction of mental safety—comment on “mental health of college students.” Chin. J. Saf. Sci. 31(04), 192 (2021) 10. Guo, P.: Problems and countermeasures in mental health education for college students. Educ. Inf. Forum (02), 97–98 (2021)
Analysis of the Effect of Self-repairing Concrete Under Sulfate Erosion Considering the Rectangular Simulation Algorithm Lijun Zhang(B) and Shanshan Deng Chongqing Telecommunication Polytechnic College, Chongqing 402247, China
Abstract. Cement concrete is an inhomogeneous and brittle material, and under the influence of the surrounding environment irregular or incoherent tiny internal cracks easily form. Relying on the rectangular simulation algorithm, this paper selects the initial crack width and the external environmental condition (temperature) as variables and uses the flexural strength recovery rate to characterize the repair effect under sulfate erosion. The results show that the smaller the initial crack width, the better the repair effect, and that the repair effect increases with the external environmental temperature, demonstrating the effectiveness of this method for evaluating the repair effect of self-healing concrete. Keywords: Self-healing effect · Rectangular simulation algorithm · Sulfate erosion · Environmental conditions
1 Introduction
With the continuous development of China's social economy, the demand for concrete in industrial development has kept rising [1–3]. However, because cement concrete is a non-homogeneous and brittle material, irregular or incoherent tiny internal cracks easily form under the influence of the surrounding environment [4]. Sulfate erosion is the main form of concrete damage; it often causes expansion and cracking with a high degree of damage and is therefore one of the important factors affecting the durability of concrete structures. How to prevent and repair cracks in cement concrete pavement is thus very important [5, 6]. Traditional repairs generally use concrete replacement, surface repair, caulking and sealing, but most of these are passive repairs, whereas self-repairing methods can repair concrete cracks in time and effectively restore their mechanical properties. In order to explore the influence of load, temperature and many other factors on the self-repair effect of concrete, this paper relies on the rectangular simulation algorithm, selects the initial crack width and the external environmental condition (temperature) as variables, and uses the flexural strength recovery rate to characterize the repair effect under sulfate erosion, aiming to explore the application of self-healing concrete.
2 Rectangular Simulation Algorithm
From the nature of a rectangle, if three vertices of the rectangle are known, a unique rectangle is determined. Following this idea, three adjacent corner points are read from the corner point set of the target area, denoted point1, point2 and point3, and it is judged whether they can determine a rectangle inside the region. When point2 is a convex point, a rectangle exists in the region; when point2 is a concave point, the rectangle formed by the three points must lie outside the target area, so only the former case needs to be considered. When point2 is a convex point, it can be taken as one vertex of the rectangular block to be simulated. If point1 and point3 are also convex points, they can be directly taken as the other two vertices of the rectangular block; if they are concave points, some calculation is needed to find the hidden vertices of the rectangular block inside the region. The search for a hidden vertex can proceed in one of four directions (up, down, left or right) according to the position of the concave point; the specific direction is determined by the relationship between the concave point and its adjacent convex point, that is, the direction in which the convex point points to the concave point. If point1 is a concave point, the hidden vertex corresponding to point1 is searched for starting from the direction from point2 to point1, i.e. to the right; for the type shown in the lower left corner, the hidden vertex corresponding to point1 is searched for from the direction from point2 to point1, i.e. upwards. Other types of search direction are obtained similarly. After the search direction is determined, the next step is to determine the position of the hidden vertex. When the concave point corresponding to the hidden vertex has a matching concave point, let point1 be the concave point, point4 the concave point matched with it, pt the desired hidden vertex, x the abscissa and y the ordinate; then: (1) when the search direction is left or right, x_pt = x_point4 and y_pt = y_point1; (2) when the search direction is up or down, x_pt = x_point1 and y_pt = y_point4. When the concave point has no matching concave point, a standard rectangle can be set according to the expected parameters of the corrosion manufacturing process, and the search proceeds along the search direction corresponding to the concave point. If the distance between the concave point and its adjacent convex point is greater than the side length of the standard rectangle, the concave point itself is taken as the hidden vertex of the rectangular block; conversely, if that distance is less than the standard rectangle's side length, the point along the search direction whose distance from the convex point equals the side length of the standard rectangle is taken as the hidden vertex.
Since the simulated rectangle may not be completely contained in the area, it is necessary to make necessary judgments and merge and delete the generated rectangles to ensure the rationality of the results.
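A minimal sketch of the hidden-vertex rule for the matched-concave-point case, as stated above; the point representation and function name are illustrative choices, not from the paper.

```python
def hidden_vertex(concave, matched_concave, direction):
    """Locate the hidden rectangle vertex for a concave corner point, following
    the rule in Sect. 2: the vertex takes one coordinate from the concave point
    and the other from its matching concave point, depending on the search
    direction. Points are (x, y) tuples; direction is 'left'/'right'/'up'/'down'."""
    x1, y1 = concave
    x4, y4 = matched_concave
    if direction in ("left", "right"):
        return (x4, y1)  # x from the matching point, y from the concave point
    else:                # 'up' or 'down'
        return (x1, y4)  # x from the concave point, y from the matching point

# Example: concave point1 with matching concave point4, searching to the right.
print(hidden_vertex((3, 5), (9, 2), "right"))  # -> (9, 5)
```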
3 Influencing Factors and Evaluation Indicators of Repair Ability
3.1 Influencing Factors
(1) Initial crack width. In the indoor test there are two ways to prepare cracks: one is to place an iron sheet of a specific width in the specimen in advance and slowly extract it after initial setting of the concrete but before final setting, so that a crack is formed; the other is to pre-compress the test piece to form pre-cracks of different widths. To make the cracks as close as possible to natural irregular shapes, this test prepares the cracks by pre-compression. The indoor test shows that when the pre-compression load is 50%, 55%, 60%, 65% and 70% of the failure load, the corresponding main crack widths are about 0.4, 0.6, 1.0, 1.5 and 2.0 mm (crack widths observed with a PTS-C10 intelligent crack width observation instrument). To prevent brittle failure of the specimen during the test, two φ8 steel bars are placed in the tension zone.
(2) Placement of the glass fiber tubes. Adding hollow glass fiber tubes inside the test piece inevitably reduces its flexural strength. The tests show that the flexural strength reduction caused by placing 4 glass fiber tubes is significantly lower than that caused by placing 6; the flexural strength reduction for the diamond and trapezoid placements is 5.7% and 6.1%, respectively, smaller than for other placements, so their side effect on flexural strength is minimal. In addition, the glass fiber tubes cannot be set too low or too high, otherwise the adhesive cannot repair cracks above the set position or cannot flow to the bottom of the crack.

Table 1. Environmental conditions for fracture self-healing

Number   Average temperature for corresponding month/°C   Environmental condition
1        30                                                Constant temperature at 30 °C indoor
2        20                                                Outdoor environmental conditions in May and June
3        10                                                Outdoor environmental conditions in April
4        0                                                 Outdoor environmental conditions in March
5        −15                                               Outdoor environmental conditions in February
The cracks are mainly generated in the longitudinal middle area of the test piece and develop from bottom to top; therefore, in the test, 4 glass fiber tubes are placed in the middle and lower parts of the test piece in a diamond or trapezoid pattern.
(3) External environmental conditions. China has a vast territory with widely varying environmental conditions; in addition, cement concrete pavement is a structure of a specific thickness, and the temperature at different depths of the slab changes continuously with the air temperature. To reflect as far as possible the environmental conditions at different slab depths in different seasons (with temperature as the main environmental indicator), this paper collected temperature and humidity data for each month of a certain area in recent years (temperature rises and humidity falls from February to June), and the five environmental conditions shown in Table 1 were selected. These five conditions can describe the average temperature conditions in various regions of China as well as the temperature conditions in different seasons and at different slab depths.
3.2 Evaluation Index
The repair effect is characterized by the ratio η of the flexural strength of the repaired specimen to the initial flexural strength f of the control specimen, that is, the flexural strength recovery rate. The flexural strength test uses a 10 t hydraulic universal testing machine with three-point loading by two concentrated loads and a span of 300 mm between the supports; the calculation formula is given in formula (1). The flexural strength is calculated to an accuracy of 0.1 MPa; because the specimen is non-standard, the result must be multiplied by a size conversion factor of 0.85:
$$ f = \frac{P\,l}{b h^{2}} \qquad (1) $$

where P is the failure load of the specimen (N), l is the span between the supports (mm), b is the width of the specimen section (mm), and h is the height of the specimen section (mm). The flexural strength recovery rate η (%) is calculated as

$$ \eta = \frac{f_{1}}{f} \times 100\% \qquad (2) $$

where f₁ is the flexural strength of the repaired specimen.
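A small helper, in Python purely for illustration, that evaluates Eqs. (1) and (2) with the 0.85 size conversion factor applied; the numerical values in the example are made up and are not test data from this study.

```python
def flexural_strength(P, l, b, h, size_factor=0.85):
    """Eq. (1): f = P*l / (b*h^2), multiplied by the 0.85 size conversion
    factor for the non-standard specimens used here. Units: N and mm -> MPa."""
    return size_factor * P * l / (b * h ** 2)

def recovery_rate(f_repaired, f_control):
    """Eq. (2): flexural strength recovery rate in percent."""
    return f_repaired / f_control * 100.0

# Illustrative numbers only: span l = 300 mm, 100 x 100 mm section.
f0 = flexural_strength(P=30_000, l=300, b=100, h=100)
f1 = flexural_strength(P=27_000, l=300, b=100, h=100)
print(round(f0, 2), "MPa", round(f1, 2), "MPa", round(recovery_rate(f1, f0), 1), "%")
```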
4 Test Plan 4.1 Sample Preparation The infusion and sealing of the adhesive in the glass fiber tube is carried out in 3 steps: (1) Use a hot melt glue gun to inject about 3 mm of hot melt glue into the glass fiber tube to seal one end of the glass fiber tube, and then wait until the hot melt glue has solidified.
Apply epoxy resin AB glue to the port sealed with hot melt adhesive to achieve a double-layer seal. (2) Use a medical infusion tube cut to size and a medical syringe to inject the adhesive into the glass fiber tube, lifting the infusion tube while injecting to speed up the injection and avoid air columns as far as possible. (3) Seal the other end of the adhesive-filled glass fiber tube in the same way as in step (1).
4.2 Test Technical Route
The prepared specimens were cured under standard curing conditions for 28 days and then pre-compressed. After pre-compression, the specimens were left to self-repair under the external environmental conditions, and their flexural strength was measured after 7 days. To make the crack repair conditions consistent with actual environmental conditions, the test was carried out in 5 batches of 12 specimens each: 10 specimens were pre-compressed and the other 2 were subjected to the standard flexural test. Taking the indoor 30 °C constant-temperature condition as an example, among the 12 specimens there are 6 each with diamond-shaped and trapezoidal placement of the glass fiber tubes, with failure loads PA1 and PB1, respectively. The preloading loads of the 10 pre-compressed specimens are 50% PA1, 55% PA1, 60% PA1, 65% PA1, 70% PA1 and 50% PB1, 55% PB1, 60% PB1, 65% PB1, 70% PB1; the same applies to the other 4 environmental conditions.
5 Analysis of Test Results
5.1 Time-Varying Law of Concrete Compressive Strength After Sulfate Attack
After the concrete test blocks are taken out of the sulfate solution, their compressive strength shows a consistent change with standing time regardless of the length of erosion: 7 days after removal from the solution, the compressive strength is higher than that of concrete just taken out of the sulfate solution; then, as the standing time increases, the compressive strength gradually decreases. For the specimens coated with the repairing agent, the compressive strength after standing for 7 days is significantly higher than that just after removal from the sulfate solution, and the compressive strength after standing for 56 days is higher than that after 28 days. For the specimens without repairing agent, the strength of concrete eroded by sulfate for 30 days is 34.2 MPa upon removal from the solution, and the compressive strength is 39.6, 37.9 and 36.2 MPa after standing for 7, 28 and 56 days, respectively; with longer standing time the compressive strength decreases significantly. The behavior of concrete eroded for 60 days is similar to that eroded for 30 days. After 90 days of sulfate erosion, the strength of the concrete is 33.0 MPa, and the compressive strength is 34.2, 32.9 and 32.6 MPa after standing for 7, 28 and 56 days, respectively. It can be seen that the compressive strength of concrete that has been eroded by sulfate continues to decrease even after it leaves the sulfate environment; therefore, in actual projects, even after a concrete structure is isolated from the sulfate environment, corresponding protective measures are still required to prevent further loss of strength. After the repairing agent is applied to the sulfate-eroded concrete specimens, the compressive strength is obviously improved, and the strength after 56 days of standing is greater than that after 28 days. For the specimens eroded for 30 days and coated with the repairing agent, the compressive strengths after standing for 7, 28 and 56 days are 41.3, 39.0 and 39.4 MPa, respectively; for those eroded for 60 days, they are 39.3, 38.6 and 38.4 MPa; and for those eroded for 90 days, they are 36.6, 35.2 and 35.3 MPa.
Fig. 1. The change of flexural strength recovery rate with external environmental conditions
It can be seen from Fig. 1 that, whether the glass fiber tubes are placed in a diamond or a trapezoid pattern, under a given preload condition (preload degree of 50% to 70% of the failure load) the flexural strength recovery rate of the test piece increases with the environmental temperature, and the growth rate of the recovery rate generally accelerates as the temperature rises. When the glass fiber tubes are placed in a diamond pattern and the preload degree is 55% of the failure load, the flexural strength recovery rate at 30 °C is 13.3% higher than that at −15 °C; when the tubes are placed in a trapezoid pattern and the preload degree is 55% of the failure load, the recovery rate at 30 °C is 12.9% higher than that at −15 °C. In addition, when the tubes are placed in a trapezoid pattern and the preload degree is 50% of the failure load, the recovery rate at 30 °C reaches the highest value of 93%. This shows that changes in external environmental conditions have a great impact on the self-healing effect of concrete: within a certain temperature range, the higher the environmental temperature, the better the self-healing effect. The reason is that the curing of the Great Wall 717 adhesive is affected by factors such as environmental conditions and the shape of the bonding surface. When the temperature rises, the adhesive filled in the cracks cures earlier and re-forms a strong whole with the reinforced concrete, improving the interface bonding strength, so the flexural strength of the concrete is restored to a certain extent. Moreover, because the curing of the Great Wall 717 adhesive has a certain time limit, the adhesive in the cracks cures earlier within that limit when the temperature is high; therefore, as the temperature increases, the growth rate of the flexural strength recovery rate generally accelerates.
5.3 Influence of Initial Crack Width
The results show that, whether the glass fiber tubes are placed in a diamond or a trapezoid pattern, the self-repairing ability of the concrete is strongest when the preload degree is 50% of the failure load, indicating that the smaller the initial crack width, the stronger the self-repairing ability. When the preload degree changes from 60% to 65% of the failure load, the flexural strength recovery rate drops suddenly, indicating that self-healing concrete with built-in glass fiber tubes repairs cracks of 1.0 mm width (the crack width corresponding to a preload of 60% of the failure load) and finer micro-cracks better. The reason is that when the width of the precast crack is less than 1.0 mm, the interface between the adhesive and the concrete is good, and the density between the aggregate and the cement stone at the interface transition zone is almost the same as that of concrete without surface cracks; when the crack width exceeds 1.0 mm, the interface between the adhesive and the concrete is poorly bonded, which greatly reduces the repair ability.
6 Conclusions Sulfate erosion damage is the main damage form of concrete, which often causes concrete expansion and cracking damage, with a large degree of damage. Relying on the rectangular simulation algorithm, this paper attempts to select the initial width of the crack and the external environmental conditions (temperature) as variables, and uses the flexural strength recovery rate to characterize the repair effect under sulfate erosion,
aiming to explore the application of self-healing concrete. Practice shows: The smaller the initial width of the crack, the better the repair effect; the repair effect increases with the increase of external environmental conditions (temperature), which shows the effectiveness of this method in the evaluation of self-healing concrete repair effect. Acknowledgements. The study was supported by “Research and Application of Damping and Noise Reducing Road Concrete, China (Grant No. KJQN201805501)” and “Research on application of self-compacting concrete mixed with industrial waste residue in structural engineering” (KJQN202005502).
References 1. Al-Ansari, M., Abu Taqa, A.G., Senouci, A., et al.: Effect of calcium nitrate healing microcapsules on concrete strength and air permeability. Mag. Concr. Res. 71(3), 195–206 (2019) 2. Yang, Z., Hollar, J., He, X., Shi, X.: Laboratory assessment of a self-healing cementitious composite. Transp. Res. Rec. 2142(1), 9–17 (2018) 3. Wu, X., et al.: A new self-healing agent for accelerating the healing kinetics while simultaneously binding seawater ions in cracked cement paste. Mater. Lett. 283(5), 1–8 (2020) 4. Li, W., Dong, B., Yang, Z., et al.: Recent advances in intrinsic self-healing cementitious materials. Adv. Mater. 30(17), 170–176 (2018) 5. Commins, P., Al-Handawi, M.B., Karothu, D.P., et al.: Efficiently self-healing boronic ester crystals. Chem. Sci. 11(10), 89–97 (2020) 6. Jakhrani, S.H., Ryou, J.S., Kim, H.G., et al.: Review on the self-healing concrete-approach and evaluation techniques. J. Ceram. Process. Res. 20(7), 1–18 (2019)
Design and Implementation of Tourism Management System Based on SSH Ping Yang(B) Department of Jewelry and Tourism Management, Yunnan Land and Resources Vocational College, Kunming, China
Abstract. With the increasing pressure of life, more and more people want time to get away, relax and enjoy themselves, so tourism has gradually become an important industry. Traditional tourism management systems mainly record, count and integrate the generated data; they lack awareness of tourists' needs and cannot meet their growing expectations. This system therefore adds personalized functions to the traditional tourism management system in a scientific and reasonable way. This paper uses the mature J2EE platform, together with the widely used MVC design pattern and the SSH architecture, to realize the management of tourism information, elaborates each functional module, and focuses on the research and analysis of the scenic spot recommendation module. Keywords: SSH · Apriori algorithm · Data mining · Tourism management
1 Introduction to Key Technologies
1.1 B/S Architecture
The B/S (browser/server) architecture is mainly composed of the browser, the web server and the database server. The browser only displays data; the main business logic is processed on the server side. A system built on the B/S architecture can be operated with just a web browser, without installing any other software. Its advantages are as follows: (1) the operation interface is simple to develop and easy to modify, which saves a lot of manpower, material and financial resources; (2) the B/S architecture places no special requirements on users, and one server can serve even thousands of users; (3) the B/S architecture is convenient for business expansion, and the maintenance burden is relatively small; (4) the operation interface is simple and intuitive, and data storage is safe and efficient.
Its disadvantages are as follows: (1) the B/S architecture has poor compatibility across browsers; (2) once the number of users increases, the response time grows and the response speed slows down; (3) users need to refresh the page constantly during use, which is troublesome; (4) because the server undertakes the main business logic, it has to load a great deal and its load is heavy.
1.2 J2EE Technology Architecture
J2EE is a specification and standard that differs from traditional application development technology architectures. J2EE is characterized by "compile once, run anywhere"; through a unified development platform it reduces the complexity and cost of applications, and it provides comprehensive support for technologies such as EJB, the Java Servlet API, JSP and XML [1]. J2EE provides an independent layer for different servers and divides the traditional model into separate layers. The basic architecture of J2EE is shown in Fig. 1.
Fig. 1. Basic framework of J2EE
1.3 Research on Hibernate Technology
Hibernate is a lightweight ORM framework. In addition to retrieving and querying data, Hibernate is mainly used to map Java objects to database tables, so that developers can spend less time on SQL and JDBC data processing. Hibernate is designed to relieve developers of some repetitive programming tasks, and it can also implement business logic using database stored procedures. Hibernate is a popular solution in information technology. A Hibernate object can be in one of the following states. (1) Transient state: a newly created object that has no persistent identifier and is not yet managed by Hibernate is transient. Transient objects are not stored in the database and have no persistent identifiers; if a transient object is no longer referenced in the program, it will be garbage collected. It can be made persistent by saving it, and Hibernate will automatically execute the corresponding SQL statements. (2) Persistent state: a persistent object has a corresponding record in the database and a persistent identity. A persistent object can be saved or loaded, but in any case it exists, by definition, within the scope of the session associated with it. Hibernate checks the changes made to objects in the persistent state and, within the current unit of work, synchronizes the object data with the database, so developers do not need to perform updates manually; nor is a manual update statement needed when an object changes from the persistent to the transient state. (3) Detached (offline) state: when the session associated with a persistent object is closed, the object becomes detached. Detached objects can still be referenced and used; if a detached object is re-associated with a new session, it becomes persistent again, and changes made while detached will then be persisted to the database.
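Hibernate itself is a Java framework; as a rough, purely illustrative analogue in Python, SQLAlchemy uses the same transient/persistent/detached vocabulary for object states, so the sketch below may help make the life cycle above concrete. The entity and field names are borrowed from Table 1 later in this paper; nothing here is the Hibernate API.

```python
from sqlalchemy import Column, Integer, String, create_engine
from sqlalchemy.orm import declarative_base, Session

Base = declarative_base()

class Customer(Base):
    __tablename__ = "customer"
    consumerID = Column(Integer, primary_key=True)
    consumerName = Column(String(50))

engine = create_engine("sqlite:///:memory:")
Base.metadata.create_all(engine)

with Session(engine) as session:
    c = Customer(consumerName="Alice")  # transient: no identifier, not tracked yet
    session.add(c)                      # pending: scheduled for INSERT
    session.commit()                    # persistent: row exists, identifier assigned
    session.expunge(c)                  # detached ("offline"): no longer tied to the session
```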
2 Overall System Architecture
2.1 Selection of System Architecture
Today there are two main core architectures: the B/S architecture and the C/S architecture, each with its own advantages and disadvantages. The C/S architecture responds quickly and submits results to the server after the client finishes processing, but it is limited to the LAN, requires dedicated client software to be installed, and is restricted on some operating systems. The B/S architecture can be used anywhere and needs no special software, which is more convenient. Considering the overall requirements of an enterprise tourism management system, comparing the advantages and disadvantages of the two architectures, and in line with the expectation that system development should be simple, easy to use, extensible and easy to update, after comprehensive consideration the system adopts the B/S architecture [2]. As we all know, tourism is a broad, interdisciplinary field. Our professional courses include introduction to tourism, tourism culture, Chinese tourism geography, tourism planning and tourism economics. The professional courses are largely descriptive, so reciting them before the exam is enough to pass, but to get high marks you still need a lot of method.
to understand the process changes, If high school mathematical analysis is used in the process derivation of tourism affairs, it is also very helpful. 2.2 Application of MVC Mode MVC pattern is a common design pattern for B/architecture. M is the model layer, which is mainly used to process the data logic part of the application. It is the core part and the layer that the program needs to operate. The view layer is used to display the data information, that is, the most external part. It can provide a friendly interaction. C is the control layer, It’s the middle tier. It is mainly responsible for reading the data information in the view layer, and then sending them to the model layer to control the user’s input and control the control flow of the whole system. In order to make our tourism management system more flexible and extensible, we decided to design the application C, which can make the system more powerful. Recently, the most widely used MVC frameworks are struts1 and Struts2. But both tuts1 and Struts2 have their own defects. The feature of $tuts1 is single, and the performance of support layer and servlet API are highly coupled, but the test is difficult, so it belongs to intrusive design four. Although stuts2 is much better than struts1, it also has some disadvantages. For example, the data in Struts2 needs to be stored in the “value stack”. In this case, the memory consumption is high and the efficiency is low. Ring MVC can be said to be integrated with the development of struts 1 and tuts2, with tuts1 and Struts2 incomparable advantages [3]. Basically, our major is to learn where there is fun, and then where there is delicious food, that is, all aspects of tourism will be involved. In our sophomore’s professional courses, we often play all kinds of beautiful scenic spots publicity in class, and some excellent scenic spots commentators explain the process, so you don’t want to doze off. (The more you look, the more energetic you are. You can see the beautiful scenery. With the teacher’s professional explanation and guidance, you have worked out the holiday travel routes.) Teachers in order to exercise our ability, each class will have students go up to the news broadcast, and some courses need to explain the scenic spot (This kind of task is usually to prepare ppt one week in advance, to learn about the corresponding scenic spots, and then to speak to students and teachers as a guide.) Because this is a personalized service major dealing with people, we need to understand psychology, understand the psychological changes of consumers, and make timely adjustments and emergency response methods. I think it’s very obvious that this major is in line with the society. It’s very helpful for me to exercise and integrate into the society in the future. For the second misunderstanding, tourism management major is not a pure liberal arts, EQ and IQ have been improved, and mathematical requirements are also obvious.
3 Analysis of System Module Structure The purpose of the detailed design of the system is to further integrate and optimize the logical objects, data entities and interface logic of some systems. The detailed design of the system mainly includes two aspects of model design: static and dynamic. We usually use class diagram to represent the design aspect of static model, and use entity relation
diagram to illustrate the design of the database. The dynamic model can be represented by an interaction diagram [4]. Database design is to construct the database model, design the database structure, optimize the application system, and then store data information according to the user's needs under a specific database management system. A reasonable database structure is very important for a well-performing database application system. According to the previous introduction, this system uses the MySQL database. The database provides a good and efficient operating environment for the system, including access efficiency, storage space utilization, and operation and management efficiency. The customer information table is shown in Table 1:

Table 1. Customer information table

| Field name | Type | Primary/foreign key | Description |
|---|---|---|---|
| consumerID | int | Primary key | Customer number |
| consumerName | varchar | No | Customer name |
| contactName | varchar | No | Contacts |
| contactAddress | int | No | Contact address |
| Fax | varchar | No | Contact number |
| staffCode | varchar | Foreign key | Fax |
| preID | int | Foreign key | Employee number |
| groupID | varchar | Foreign key | League number |
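As a rough illustration of how a table such as Table 1 could be declared and used, the sketch below writes the schema as SQL DDL from a small Python script. The paper's system targets MySQL; sqlite3 is used here only so that the example is self-contained and runnable, and the comments on referenced tables are assumptions not specified above.

```python
import sqlite3

# Schema mirroring Table 1 (MySQL is the target database in the paper;
# sqlite3 is used here only so the sketch is self-contained and runnable).
DDL = """
CREATE TABLE IF NOT EXISTS customer (
    consumerID     INTEGER PRIMARY KEY,   -- customer number
    consumerName   VARCHAR(64),           -- customer name
    contactName    VARCHAR(64),           -- contact person
    contactAddress INTEGER,               -- contact address (typed as int in Table 1)
    Fax            VARCHAR(32),
    staffCode      VARCHAR(32),           -- foreign key (referenced table not specified above)
    preID          INTEGER,               -- foreign key (referenced table not specified above)
    groupID        VARCHAR(32)            -- foreign key (referenced table not specified above)
);
"""

conn = sqlite3.connect(":memory:")
conn.execute(DDL)
conn.execute("INSERT INTO customer (consumerID, consumerName) VALUES (?, ?)", (1, "Li Hua"))
print(conn.execute("SELECT consumerID, consumerName FROM customer").fetchall())
```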
4 Conclusion

In this paper, against the current background, the domestic and foreign research on the tourism industry is reviewed, and the information needs are analyzed in a reasonable, scientific and practical way so as to explore the implementation method and process of the system. The main work is as follows. First, to understand current tourism companies, various literature was collected to study the current domestic and foreign research status of the tourism industry, and the shortcomings of current tourism companies were identified. On this basis, combined with the tourism company's own and external business needs, this paper carries out a feasibility analysis of these business requirements for the tourism management system; then, through collection, sorting and integration, the demand analysis is transformed into software-engineering requirements and the system functions are studied. Basically, the relevant regulations are the Consumer Rights Protection Law and the regulations on the administration of tour guides. For the basic rules on point deduction and rights protection, there are the latest policies to understand. The content involved is very broad, including the Consumer Rights Protection Law, contract law and economic law. (It can be seen that the state vigorously promotes the basic quality of tour
guide practitioners. Some of the tour guides you come into contact with now basically have not obtained the certificate, so there is some negative public opinion. However, a tour guide who obtains the certificate often does not want to remain an ordinary tour guide and wants to continue taking further exams; we understand the necessity of making a living.)
References

1. Liu, Q., Duan, Z.: On the impact of computer technology on social development. Comput. Knowl. Technol. 02, 444–445, 447 (2013)
2. Zhang, Q.: Discussion on the application of communication technology in tourism industry. Shanxi University (2012)
3. Lu, X.: Research on demand oriented tourism destination information system. Chongqing Normal University (2009)
4. Campbell, C.K.: An Approach to Research in Recreational Geography. Department of Geography, University of British Columbia, British Columbia (1967)
Design and Implementation of Trajectory Planning Algorithm for SCARA 4-DOF Manipulator Hongbo Zhu(B) Henan College of Industry and Information Technology, Jiaozuo 454000, Henan, China
Abstract. In recent years, with the rapid development of the automobile, steel and construction industries, production efficiency has become the key to their development, and the industrial manipulator, with its high speed, high precision and long working hours, plays a huge role. In view of the important role of trajectory planning in the efficient and stable operation of the SCARA manipulator, this paper studies trajectory planning algorithms with a four-degree-of-freedom manipulator as the control object, and uses trajectory planning to effectively improve the control accuracy and work efficiency of the manipulator. Keywords: SCARA · DOF · Manipulator · Trajectory planning
1 Introduction

In today's life, with the continuous improvement of science and technology, the manipulator is increasingly active in production and daily life. The industrial manipulator (robot arm) is an automatically controlled device that imitates the function of the human arm and can replace human beings in various industrial operations. Compared with human workers in industry, the biggest advantage of the manipulator is that it can continuously reproduce the required action for a long time while maintaining high efficiency and high accuracy. The manipulator belongs to robotics, an achievement of the rapid development of industrial automation in human society since the 20th century [1]. As the application fields of the manipulator grow, more and more attention has been paid to its control technology, and basic problems such as trajectory planning have also been put on the agenda. At present, the main application field of the manipulator is the automobile industry, including welding, painting, and loading and unloading, where the working accuracy and motion smoothness of the manipulator are required to be higher and higher. To improve the working efficiency of the manipulator, reducing the error is an important index for measuring its performance. Trajectory planning describes the task, motion path and trajectory of the manipulator. Making the manipulator follow the expected trajectory makes its motion smoother and more stable, and effectively improves the stability, reliability and work efficiency of the manipulator [2].

© The Author(s), under exclusive license to Springer Nature Singapore Pte Ltd. 2022 B. J. Jansen et al. (Eds.): International Conference on Cognitive based Information Processing and Applications (CIPA 2021), LNDECT 84, pp. 232–238, 2022. https://doi.org/10.1007/978-981-16-5857-0_28
2 SCARA Manipulator

2.1 Overview of the SCARA Manipulator

The SCARA (Selective Compliance Assembly Robot Arm) robot is a kind of planar-joint industrial robot. It has four joints: three rotating joints, whose axes are parallel to each other and which realize in-plane positioning and orientation, and one mobile joint, which realizes the lifting movement of the end piece. The SCARA robot is not only required to have the advantages of high rigidity, high precision, high speed, small installation space and large design freedom, but can also be configured as a welding robot, dispensing robot, optical inspection robot, handling robot or plug-in robot, so it can be used for high-efficiency assembly, welding, sealing and handling [3].

2.2 Composition of the SCARA Manipulator

The control system of the Gugao 4-DOF manipulator is composed of the SCARA 4-DOF industrial manipulator, the driving electrical box, and a computer (including a DSP motion control card and upper-computer control software). A multi-axis control system needs a chip with powerful data-processing ability, so the main control chip of the core motion control card is a DSP2181 high-speed processor. The controller needs to deal with complex matrix operations, so the main control chip must be high-speed and high-precision. For communication, standard ISA bus and PCI bus are provided. The real object can be seen in the figure below, in which joints 1, 2 and 3 are rotary joints and joint 4 is a vertical lifting joint. This manipulator adopts a design mode that combines a computer with a development-type motion control card: the developer does not spend too much energy on the drive design, but focuses on the design and implementation of the control strategy. Figure 1 is the outline drawing of the 4-DOF manipulator body and the electrical control box.

2.3 Control of the 4-DOF Manipulator

The robot base class is the core of the program design of the manipulator control system; in addition there are the controller class and the planner class. The data structures of manipulator attributes such as degrees of freedom, joint state, joint motion speed, joint motion acceleration and joint position are defined in the base class. Besides the data structure definitions, the various motion functions of the manipulator are also defined, such as single-step motion and S-mode motion. The four-degree-of-freedom manipulator class derived from the robot base class is used to create specific objects [4]. The controller class is equivalent to a set of interface functions, mainly for the needs of future maintenance. Virtual functions declare the public properties and interfaces of the motion control card, and the SG controller class that matches the actual control card is derived from it. Control-command response functions are defined in the SG controller class, covering operations such as servo power on, switching the control card on/off, S/T joint motion modes, and origin/limit point capture. The planner class mainly implements the trajectory planning of the manipulator. The class design diagram of the 4-DOF manipulator control system is shown in Fig. 2.
Fig. 1. Outline drawing of 4-DOF manipulator body and electrical control box
Fig. 2. Class design diagram of manipulator control system
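The class organization described in Sect. 2.3 (robot base class, controller interface with a derived SG controller, and planner) can be sketched as follows; all names, attributes and the placeholder control logic are illustrative assumptions rather than the actual control-card software.

```python
from abc import ABC, abstractmethod
from dataclasses import dataclass, field
from typing import List

@dataclass
class RobotBase:
    """Base class holding the manipulator attributes described in Sect. 2.3."""
    dof: int
    joint_positions: List[float] = field(default_factory=list)
    joint_velocities: List[float] = field(default_factory=list)
    joint_accelerations: List[float] = field(default_factory=list)

    def single_step(self, joint: int, delta: float) -> None:
        """Single-step motion: move one joint by a small increment."""
        self.joint_positions[joint] += delta


class Controller(ABC):
    """Interface class: public properties and commands of a motion control card."""
    @abstractmethod
    def servo_on(self) -> None: ...
    @abstractmethod
    def move_joint(self, joint: int, target: float) -> None: ...


class SGController(Controller):
    """Concrete controller matching a specific control card (placeholder logic)."""
    def servo_on(self) -> None:
        print("servo power on")
    def move_joint(self, joint: int, target: float) -> None:
        print(f"joint {joint} -> {target:.3f} rad")


class Planner:
    """Planner class: produces joint set-points for the controller to execute."""
    def plan(self, start: float, goal: float, steps: int) -> List[float]:
        return [start + (goal - start) * k / steps for k in range(steps + 1)]


if __name__ == "__main__":
    robot = RobotBase(dof=4, joint_positions=[0.0] * 4)
    ctrl, planner = SGController(), Planner()
    ctrl.servo_on()
    for q in planner.plan(0.0, 1.0, 5):
        ctrl.move_joint(0, q)
```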
3 Kinematics Analysis of the Manipulator

A robot arm can be regarded as a kinematic chain composed of a series of rigid bodies connected by joints. These rigid bodies are usually called links. A link has four basic attribute parameters, namely the link's own parameters and the parameters describing the connection between two links. Specifically, the common normal distance ai and the twist angle αi (the angle between the z axes of two adjacent coordinate systems) in the plane where ai lies describe the link itself; the position relationship between {i − 1} and {i} can be described by the relative offset di of the two links and the angle θi between the normals of the two links, as shown in Fig. 3.
Fig. 3. Schematic diagram of simplified model of SCARA manipulator
For the forward kinematics solution, the matrix transformation relation between the link coordinate systems is obtained. For coordinate system {i}, the transformation from the adjacent link coordinate system {i − 1} can be deduced:

$$
{}^{i-1}_{\,i}T=
\begin{bmatrix}
\cos\theta_i & -\sin\theta_i\cos\alpha_i & \sin\theta_i\sin\alpha_i & L_i\cos\theta_i\\
\sin\theta_i & \cos\theta_i\cos\alpha_i & -\cos\theta_i\sin\alpha_i & L_i\sin\theta_i\\
0 & \sin\alpha_i & \cos\alpha_i & d_i\\
0 & 0 & 0 & 1
\end{bmatrix}
\tag{1}
$$
By multiplying the transformation matrices of each link, the pose equation of the robot end effector can be obtained:

$$
{}^{0}_{\,4}T={}^{0}_{\,1}T(\theta_1)\,{}^{1}_{\,2}T(\theta_2)\,{}^{2}_{\,3}T(d_3)\,{}^{3}_{\,4}T(\theta_4)=
\begin{bmatrix}
n_x & o_x & a_x & p_x\\
n_y & o_y & a_y & p_y\\
n_z & o_z & a_z & p_z\\
0 & 0 & 0 & 1
\end{bmatrix}
\tag{2}
$$

The pose equation ${}^{0}_{\,4}T$ is a function of the four joint variables. Substituting the arm lengths and the joint variables of the manipulator into the pose equation gives the position and attitude of the end effector. When the manipulator needs to move according to a specified position and attitude, the inverse solution of the pose equation, that is, the inverse of the forward kinematics solution, is of great significance in the real-time control system of the manipulator.
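A minimal numerical rendering of Eqs. (1) and (2) is sketched below for a SCARA-type arm in which joints 1, 2 and 4 are revolute and joint 3 is the vertical prismatic joint; the link lengths and twist angles are placeholder values, not parameters taken from the paper.

```python
import numpy as np

def dh_transform(theta, d, L, alpha):
    """Homogeneous link transform of Eq. (1): rotation theta about z, offset d,
    link length L, twist alpha."""
    ct, st, ca, sa = np.cos(theta), np.sin(theta), np.cos(alpha), np.sin(alpha)
    return np.array([
        [ct, -st * ca,  st * sa, L * ct],
        [st,  ct * ca, -ct * sa, L * st],
        [0.0,      sa,       ca,      d],
        [0.0,     0.0,      0.0,    1.0],
    ])

def scara_forward_kinematics(theta1, theta2, d3, theta4, L1=0.25, L2=0.20):
    """End-effector pose of Eq. (2) for a SCARA-type arm.
    Joints 1, 2, 4 are revolute, joint 3 is the vertical prismatic joint.
    Link lengths L1, L2 and the twist of link 2 are placeholder assumptions."""
    T01 = dh_transform(theta1, 0.0, L1, 0.0)
    T12 = dh_transform(theta2, 0.0, L2, np.pi)   # assumed twist flips the z axis
    T23 = dh_transform(0.0,     d3, 0.0, 0.0)    # prismatic lift joint
    T34 = dh_transform(theta4, 0.0, 0.0, 0.0)
    return T01 @ T12 @ T23 @ T34                 # 0_4T: position in the last column

if __name__ == "__main__":
    T = scara_forward_kinematics(np.deg2rad(30), np.deg2rad(45), -0.05, 0.0)
    print(np.round(T, 3))
```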
4 Trajectory Planning Method and Interpolation Algorithm of the Manipulator

4.1 Trajectory Planning

The most commonly used trajectory planning methods for a manipulator can be divided into two types: trajectory planning in joint space and trajectory planning in Cartesian space. The joint-space method gives the desired pose of the intermediate points of the trajectory and can directly use the controlled variables of the motion to plan the trajectory; the Cartesian-space method is more intuitive and directly corresponds to the environment model. In this paper, the Cartesian-space trajectory planning method is used. The SCARA manipulator system has three rotation joints. The path points are calculated by the trajectory planner, and the motion control functions are called to drive the servo motors to control the position and pose of the manipulator end effector. The control process is shown in Fig. 4. The specific planning steps are as follows: (1) following the sequence shown in Fig. 4, the position and attitude values of the manipulator in Cartesian space are solved from the end-effector pose equation; (2) according to the operation constraints of the manipulator and the key path points on the trajectory, the trajectory planner is established to calculate the position and pose of the key path points in space; (3) through the inverse kinematics solution, the joint-space quantities are obtained and can be monitored by the manipulator position sensors, and the servo motors move to the specified positions according to the joint quantities of the inverse solution. The interpolation algorithm is the essence of the trajectory planning process, and optimizing it can greatly improve the path accuracy of the end effector, which has far-reaching research significance for manipulator motion.
Fig. 4. The control process of trajectory planning of manipulator
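As a sketch of steps (1)–(3), the following code generates straight-line Cartesian waypoints and converts their planar components to the two arm joint angles with a closed-form two-link inverse kinematics; the link lengths, the elbow-down branch and the direct mapping of z to the prismatic joint d3 are assumptions for illustration only.

```python
import numpy as np

def straight_line_waypoints(p_start, p_goal, n_points):
    """Evenly spaced Cartesian waypoints on the straight segment p_start -> p_goal."""
    s = np.linspace(0.0, 1.0, n_points)[:, None]
    return (1.0 - s) * np.asarray(p_start) + s * np.asarray(p_goal)

def planar_2r_ik(x, y, L1=0.25, L2=0.20):
    """Closed-form inverse kinematics of the two rotary arm joints of a SCARA
    (elbow-down branch). L1, L2 are placeholder link lengths."""
    c2 = (x * x + y * y - L1 * L1 - L2 * L2) / (2.0 * L1 * L2)
    c2 = np.clip(c2, -1.0, 1.0)                      # guard against rounding
    theta2 = np.arccos(c2)
    k1, k2 = L1 + L2 * np.cos(theta2), L2 * np.sin(theta2)
    theta1 = np.arctan2(y, x) - np.arctan2(k2, k1)
    return theta1, theta2

if __name__ == "__main__":
    for x, y, z in straight_line_waypoints([0.35, 0.05, 0.00], [0.25, 0.20, -0.03], 5):
        t1, t2 = planar_2r_ik(x, y)
        print(f"x={x:.3f} y={y:.3f} z={z:.3f} -> theta1={np.degrees(t1):6.2f} deg, "
              f"theta2={np.degrees(t2):6.2f} deg, d3={z:.3f}")
```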
4.2 Interpolation Algorithm of Trajectory Interpolation algorithm is the core of the planner. Generally, interpolation can be divided into joint space interpolation and Cartesian space interpolation. The interpolation algorithm directly determines the programming method and execution efficiency of the software design of the manipulator control system [5]. The local control program of the manipulator has involved the scheduling between threads. If the algorithm of the planner is too redundant and complex, the execution efficiency of the software code will be reduced. Therefore, the interpolation algorithm of the planner is not only designed from
the aspects of accuracy, speed and acceleration of the actuator; we should also consider whether it is easy to program and implement. Some design principles and choices for the planner's interpolation algorithm are given below. (1) The interpolation algorithm needs to be simple and efficient, and the calculation process should not be complex; it is better to avoid iterative algorithms, which greatly increase the software workload and reduce the efficiency of code execution. (2) The interpolation algorithm should be convenient for designers to develop code and should reduce the redundancy of the system. (3) It must meet the precision and speed requirements of the manipulator.

4.3 Interpolation of the Trajectory in Joint Space

Trajectory planning in joint space is the realization process of the inverse kinematics solution of the manipulator. The joint-variable angle values required by the motors are the coordinates of the interpolation points on the inverse trajectory. The trajectory in joint space can be understood as follows: the base coordinate system {A} is rotated about each axis and then translated to obtain the tool coordinate system {B}, which can be expressed as a function with the joint angles as variables. Each joint is allotted the same motion time along a given end path, but the joint functions are independent of each other. Joint-space planning does not need to describe the path shape between the start and end points in the Cartesian coordinate system, and because the joint functions are independent, the problem of a singular matrix does not occur. When planning in joint space, we only need to give the attitude at the beginning and the end, and plan the joint-variable functions of the intermediate nodes so that the whole trajectory is continuous over the whole time; a cubic-polynomial sketch of this idea is given after Sect. 4.4.

4.4 Interpolation of Cartesian Space Trajectory Planning

The biggest difference from joint-space interpolation is the representation of the position of the end effector. The point sequence on the Cartesian-space track is described by matrices, including the position and attitude of the actuator, which are represented by direction vectors. The tool coordinate system is transformed homogeneously relative to the base coordinate system, which can describe the attitude of the tool; that is, a series of nodes Pi (i = 0, 1, 2, 3, …) can be used to approximately represent the trajectory. The segment from one node to the next can be approached by a straight line or an arc, so that this series of intermediate nodes can be connected to form a track. The planning of node end speed and acceleration also needs to be considered: if the connection is not smooth, the mechanical arm will exhibit work delay.
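The cubic-polynomial sketch promised in Sect. 4.3 is given below: a single joint is interpolated from q0 to qf in time tf with zero start and end velocity, which keeps the joint trajectory continuous and smooth; the sampling density and time values are arbitrary illustrative choices.

```python
import numpy as np

def cubic_joint_trajectory(q0, qf, tf, n_samples=50):
    """Cubic polynomial joint trajectory with zero start/end velocity:
    q(t) = a0 + a1*t + a2*t^2 + a3*t^3 with q(0)=q0, q(tf)=qf, q'(0)=q'(tf)=0."""
    a0, a1 = q0, 0.0
    a2 = 3.0 * (qf - q0) / tf**2
    a3 = -2.0 * (qf - q0) / tf**3
    t = np.linspace(0.0, tf, n_samples)
    q = a0 + a1 * t + a2 * t**2 + a3 * t**3
    dq = a1 + 2 * a2 * t + 3 * a3 * t**2
    return t, q, dq

if __name__ == "__main__":
    t, q, dq = cubic_joint_trajectory(q0=0.0, qf=np.deg2rad(90), tf=2.0, n_samples=5)
    for ti, qi, dqi in zip(t, q, dq):
        print(f"t={ti:.2f}s  q={np.degrees(qi):6.2f} deg  dq={np.degrees(dqi):6.2f} deg/s")
```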
5 Conclusion

Through the analysis and simulation of the interpolation algorithms, it is verified that the linear and circular interpolation algorithms can optimize and improve the trajectory of the manipulator. In theory, they can make the interpolation points fall on the required
curve, and improve the control accuracy and work efficiency of the manipulator. This has been realized in the control of the four-degree-of-freedom manipulator.

Acknowledgements. Application and research of FP-growth algorithm in data mining, Natural Science Research Projects in Anhui Universities in 2020 (Project No. KJ2020A0806).
References

1. Cai, Z.: Robot Guide. Tsinghua University Press, Beijing (2009)
2. Niku, S.B.: Introduction to Robotics. Electronic Industry Press, Beijing (2004)
3. Zhuoyanva, Bai, X., Chen, Y.: Three regular curve interpolation algorithms for robots. Equip. Manuf. Technol. (11), 27–29 (2009)
4. Wang, W., Zhao, J., Gao, Y., et al.: The method of plane curve trajectory planning of robots. J. Harbin Univ. Technol. 40(3), 390–392 (2008)
5. Angeles, J.: Theory, Method and Algorithm of Robot Mechanical System. Machinery Industry Press, Beijing (2004)
Advances in Text Classification Based on Machine Learning Desheng Huang(B) Guangzhou Health Science College, Guangzhou 510925, China
Abstract. This paper will introduce the machine learning system based on text classification technology in detail. Through professional research and investigation, we can find out three advanced studies of text classification technology in the current era, such as spatial dimension reduction, classification methods and evaluation methods. Through various exploration and discovery, we can improve the learning experience and practice of relevant personnel. Keywords: Machine learning · Text classification · Key technologies
1 Introduction Under the development and drive of information technology, machine learning in China has made great progress. The mining and retrieval of information data has become a field of concern. Text classification technology emerges as the times require. With the change and passage of time, the research on this technology has made a breakthrough.
2 A Machine Learning System Based on Text Classification Technology

(1) Key technologies. The machine learning system derived from text classification technology involves several key technologies. First, word segmentation: text must be segmented into words before features can be extracted. For example, for the text "Big Brother's watch, you don't have to buy it," the quality of word segmentation directly affects feature extraction and therefore the accuracy of the model. Second, vocabulary extraction in specific fields: domain-specific vocabularies expand the function and coverage of the feature lexicon and improve classification. For example, researchers can specify a word such as "Omega," and with suitable algorithms and key technologies terms such as "Swallow" and "Rolex" can be found according to their relevance to that word; in other words, the algorithm can discover new words in the domain. Third, researchers can build an information feature base that turns practical experience into a feature vocabulary; this is essentially the feature extraction process. The text is thereby transformed into a machine-processable form to facilitate the subsequent training and prediction of the model.

© The Author(s), under exclusive license to Springer Nature Singapore Pte Ltd. 2022 B. J. Jansen et al. (Eds.): International Conference on Cognitive based Information Processing and Applications (CIPA 2021), LNDECT 84, pp. 239–244, 2022. https://doi.org/10.1007/978-981-16-5857-0_29

(2) Module design. Before carrying out module design, researchers should understand the main machine learning research institutions in China and worldwide; their data are shown in Table 1.
Table 1. Research institutions related to machine learning

| Research institutions | Number of documents | Country | Institutional centrality | National centrality |
|---|---|---|---|---|
| Chinese Academy of Sciences | 320 | China | 0.03 | 0.35 |
| Carnegie Mellon University | 119 | United States | 0.01 | 0.58 |
| National University of Singapore | 103 | Singapore | 0.01 | 0.03 |
Table 1 shows that China is in the leading position in the research of text classification technology and machine learning, and the research in this field has made great progress. At present, the related researchers have constructed the system module of machine learning, which includes sample tagging, feature library, word segmentation, model training, single text analysis, batch text analysis and download and save of results. After mastering the functions of the system, the researchers can use each section to analyze the influence of text classification technology on machine learning.
3 Advances in Text Classification in the Context of Machine Learning (1) Space dimension reduction Under the background of machine learning, text classification technology can be studied by spatial dimensionality reduction. For spatial dimensionality reduction algorithms and models, the research still follows the traditional idea, that is, using probability statistics method to compare the significance of its classification. With the development of time, new ideas have emerged in the study of text classification technology. On the one hand, the selection method adopts combination or multistep form, and the initial feature collection is confirmed by the selection of basic features. Then some criteria are used to supplement the feature, and other comprehensive factors are used to add and delete the remaining features. On the other hand, related researchers can use linguistic techniques to study related features, find out
the feature information entered by hand, and improve the development of text classification technology by means of feature extraction and so on [1]. In dimensionality-reduction classification, researchers should also consider the relationship between the two, that is, find the changing trend of the classifier's performance indices. A common phenomenon is that an appropriate dimensionality-reduction method can increase the number of useful features available to the classifier. Different researchers have made scientific comparisons of feature selection methods from the angles of reasonable effect, discriminating ability and effectiveness, and different combination methods and statistics show different advantages in the results. Different types of classifiers call for different dimensionality-reduction methods; feature selection algorithms, feature extraction and other common forms should be used in different situations. Although method selection is not difficult, this class of methods is widely used, and this kind of text classification technology has strong research value.

(2) Classification. From the point of view of classification mode, the main purpose of a classification method under text classification technology is to improve the classification effect, strengthen the practicability of the system's internal computing and storage capabilities, and control the throughput of the classification process and the extensibility of the learning process. With the rapid development of network information technology, the ensemble learning of multiple classifiers has become a common classification method, and the support vector machine (SVM) represents the state of the art among single classifiers. In practical application, however, the training speed of SVM on big data sets is slow, which places higher requirements on computing power and storage resources. The separating-surface model it produces can effectively reduce the influence of overfitting, redundant features and sample distribution, showing good generalization ability; compared with other methods, SVM has a great advantage in classification stability and effect. With the introduction of term probability-distribution similarity or fuzzy theory, text classification technology has made further progress [2]. Ensemble learning is also called classifier combination or multiple learning; the main means used are coverage optimization or decision optimization. In testing, the decisions of different kinds of classifiers are evaluated or voted on, and the system output for different categories is examined using the targeted indication of each classifier. When using the SVM algorithm, researchers can adopt the idea of serial integration to process the text vectors and perform error correction on the parts already recognized accurately, that is, obtain accuracy for the relevant category at a small computational cost. Such combined classifiers, for example Boosting or Bagging, must confirm the types of the test samples; in Boosting, each iteration pays attention to the previous classification errors, which makes it perform better than a single SVM [3].
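As a concrete, hedged example of the SVM-based classification discussed above, the sketch below builds a TF-IDF representation and trains a linear SVM with scikit-learn on a tiny invented corpus; the texts and labels are illustrative only, and an ensemble variant could wrap the same estimator with sklearn.ensemble.BaggingClassifier.

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.pipeline import make_pipeline
from sklearn.svm import LinearSVC

# Toy corpus (illustrative sentences, not a real benchmark).
texts = ["the hotel room was clean and quiet",
         "terrible food and rude staff",
         "lovely scenery and friendly guide",
         "the tour was delayed and poorly organised"]
labels = ["positive", "negative", "positive", "negative"]

# TF-IDF representation followed by a linear SVM, the classifier family
# discussed above for its generalisation ability on text.
model = make_pipeline(TfidfVectorizer(), LinearSVC())
model.fit(texts, labels)
print(model.predict(["friendly staff and clean room"]))   # prediction on unseen text
```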
(3) Means of assessment. The ROC curve can be used to evaluate and optimize the performance of each classifier in time. By varying the internal threshold parameters of the classifier, the ROC curve effectively shows its overall performance; at the same time, the ROC curve is not sensitive to the sample distribution. From evaluation theory, the complexity penalty and the training-data error of a classifier clearly present its performance gap. For common classification methods, formal analysis can reveal the best achievable behaviour of the classifier, and the loss function can be divided into model complexity and training loss; by comparing the two, the appropriate classifier evaluation method can be accurately identified. Researchers can also use different experimental comparisons when developing benchmark corpora, which are important for evaluation. RCV1 is the latest official corpus and has been tested many times; compared with previous versions, it is better adapted to multi-level classification and overcomes the shortcomings of traditional corpora. To meet the needs of current research, researchers have also extended the classification methods, addressed the problem of data skew, and enhanced the scientific nature of the evaluation means [4].
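The threshold-sweeping behaviour of the ROC curve mentioned above can be reproduced with scikit-learn as follows; the labels and scores are invented values used only to show the mechanics of the evaluation.

```python
import numpy as np
from sklearn.metrics import roc_auc_score, roc_curve

# True labels and classifier scores (illustrative values only).
y_true  = np.array([1, 0, 1, 1, 0, 0, 1, 0])
y_score = np.array([0.92, 0.40, 0.75, 0.61, 0.35, 0.55, 0.80, 0.20])

# Sweeping the decision threshold traces the ROC curve; the area under it
# summarises the classifier independently of the class distribution.
fpr, tpr, thresholds = roc_curve(y_true, y_score)
print("AUC =", roc_auc_score(y_true, y_score))
for f, t, th in zip(fpr, tpr, thresholds):
    print(f"threshold={th:.2f}  FPR={f:.2f}  TPR={t:.2f}")
```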
4 Representation Features of Text

The classical text representation model is the vector space model, which raises the problem of space dimension reduction. Methods based on an evaluation function usually compute an index value for each feature from statistics on the training data set and, according to this value, decide whether to retain the corresponding word or how to weight the corresponding feature, thereby achieving feature selection. Such indices include mutual information, information gain, the word-frequency method, chi-square statistics, expected cross entropy, probability ratio and text evidence weight. Latent semantic indexing uses concept indexing instead of keyword indexing, selecting index terms for the text from the perspective of semantic relevance regardless of whether the terms appear in the text; the word-frequency matrix is decomposed by singular value decomposition, and the transformed text vectors are used for text mining. Principal component analysis searches for the orthogonal vectors that best represent the original data, creating a smaller replacement variable set that captures the essence of the attributes, onto which the original data can be projected. There are also other common text dimensionality-reduction algorithms, including the document-frequency method, the TF × IDF method, the simulated annealing algorithm, and so on. The two basic tasks of text mining are classification and clustering, which are indispensable in almost all application fields of text mining. Text classification is an important part of text mining: it assigns a category to each document in the collection according to predefined topic categories. By classifying documents through a text classification system, people can better find the information and knowledge they need.
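A compact sketch of the representation and dimensionality-reduction steps named above (vector space model via TF-IDF, chi-square feature selection, and SVD-based latent semantic analysis) is shown below using scikit-learn; the toy documents and the chosen dimensions are illustrative assumptions.

```python
from sklearn.decomposition import TruncatedSVD
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.feature_selection import SelectKBest, chi2

docs = ["cheap flights and hotel deals",
        "machine learning for text mining",
        "holiday package with guided tours",
        "feature selection reduces dimensionality"]
labels = [0, 1, 0, 1]

tfidf = TfidfVectorizer()
X = tfidf.fit_transform(docs)                    # term-weight matrix (vector space model)

X_chi = SelectKBest(chi2, k=5).fit_transform(X, labels)    # chi-square feature selection
X_lsa = TruncatedSVD(n_components=2).fit_transform(X)      # SVD-based latent semantic indexing

print("original dims:", X.shape[1], "| after chi2:", X_chi.shape[1], "| after LSA:", X_lsa.shape[1])
```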
In people's view, classification is the most basic cognitive form of information. Traditional literature classification research has accumulated rich results and a considerable practical level. However, with the rapid growth of text information, especially the proliferation of online text on the Internet, automatic text classification has become a key technology for processing and organizing large amounts of document data. Text classification is now widely used in various fields. However, as the amount of information becomes richer, people's requirements for the accuracy and recall of content search become higher and higher, so the demand for text classification technology increases day by day; for example, constructing efficient text classification indexes appears to be a main research direction of text mining. Text classification methods mainly include decision trees, k-nearest neighbors (KNN), association rules, support vector machines (SVM), database-based algorithms, Bayesian classification algorithms and neural networks, as well as rough sets, fuzzy logic and genetic algorithms based on soft computing. Among them, methods based on soft computing provide flexible data-processing ability through cooperative work; their goal is to handle imprecise, uncertain and partial information and to achieve approximate reasoning, so as to approach human analysis and judgment conveniently, robustly and at low cost. Fuzzy logic provides algorithms to deal with the imprecision and uncertainty caused by fuzziness rather than randomness; rough sets deal with the uncertainty caused by indiscernibility; neural networks are used for pattern classification and clustering, while genetic algorithms are used for optimization and search. The development of administrative information systems in enterprises and institutions is closely related to the development of information technology. As early as the 1990s, Bill Gates put forward the view that information technology would lead to a sharp decline in the cost of communication, just as the cost of computer use falls; when the price of communication is low enough and combined with other technologies, the "information superhighway" will be as real and far-reaching as electricity. Now the Internet has gradually become a bridge that brings people from all over the world together to work and live, and has built a new platform for human information cooperation and exchange. In the information-age trilogy on economy, society and culture, the author argues that the Internet has become a virtual society with comprehensive functions. With the extensive use of Internet technology, the architecture of administrative information systems has changed significantly, from the earliest two-tier architecture based on client/server to three-tier or multi-tier architectures based on browser/server.
5 Conclusion

To sum up, with the rapid development of text classification technology, researchers have applied it in many fields, and the practicability of the technology has been greatly improved. By understanding the research progress of the technology, the changes brought by text classification technology can be better grasped. After the problems of performance, category size and data model are effectively solved, the development prospects of machine-learning-based text classification technology will be further improved.
References

1. Zheng, L., Shen, J.: Analysis of the current situation and development trend of text classification technology. Inf. Comput. (Theor. Ed.) 32(24), 17–20 (2020)
2. Lu, H., Zheng, Z., Huang, Y.: A preliminary study of air traffic control security information processing based on machine learning. J. Civil Aviat. 4(05), 40–44 (2020)
3. Liu, H., Liu, J., Li, J.: Comprehensive utilization technology of pruning branches in orchard. Agric. Mech. Res. (2), 218–221 (2011)
4. Gao, R.: Research and development of cutting residue pulverizer in southern hilly area. Fujian Agric. Mach. (02), 25–31 (2018)
Research on Interior Space Design Based on Ant Colony Algorithm Yi Lu(B) Liaoning Jianzhu Vocational College, Liaoning 111000, China
Abstract. In this paper, based on ant colony algorithm, the space design of a new Chinese restaurant in Kunming, Yunnan Province is taken as an example, the design concept is proposed, and the specific design of the interior space is discussed. Keywords: Ant colony algorithm · Dining space · Chinese restaurant · Space design
1 Horizontal Clustering of the Ant Colony Algorithm

The ant colony algorithm is a random search algorithm based on the cooperative behavior of ant colonies. It seeks the optimal solution through population evolution of candidate solutions: according to the accumulated information, a candidate solution adjusts its structure and communicates with other candidate solutions to produce better solutions. In the traditional ant colony clustering algorithm, ants leave pheromone on the path, and ants follow the path with the largest amount of pheromone. In nature, pheromone disappears with the passage of time; moreover, when the pheromone residues on two paths are similar, the ants cannot accurately find the path with the largest pheromone. These are two important reasons for the premature convergence and stagnation of the traditional ant colony clustering algorithm. In this paper, a dynamic pheromone volatilization strategy is proposed to ensure the diversity of ant search paths so as to obtain the overall optimal solution. It is known that the characteristic vector of each fragment is an n-dimensional column vector, the total number of fragments is N, and the number of classes is M. Taking the sum of the distances of the elements of each class to the class center as the objective function, the following mathematical model is established:

$$\min J(w,c)=\sum_{j=1}^{M}\sum_{i=1}^{N_j} w_{ij}\sum_{p=1}^{n}\left|x_{ip}-c_{jp}\right| \tag{1}$$

$$c_{jp}=\operatorname{round}\!\left(\sum_{i=1}^{N_j} w_{ij}x_{ip}\Big/\sum_{i=1}^{N_j} w_{ij}\right) \tag{2}$$

$$\sum_{i=1}^{N_j} w_{ij}=N/M \tag{3}$$

© The Author(s), under exclusive license to Springer Nature Singapore Pte Ltd. 2022 B. J. Jansen et al. (Eds.): International Conference on Cognitive based Information Processing and Applications (CIPA 2021), LNDECT 84, pp. 245–249, 2022. https://doi.org/10.1007/978-981-16-5857-0_30
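A small numerical sketch of the clustering objective in Eqs. (1)–(2) and of a dynamic pheromone-evaporation update in the spirit described above is given below; the evaporation schedule, deposit rule and data are illustrative assumptions rather than the authors' exact algorithm.

```python
import numpy as np

def class_centers(X, assign, M):
    """Per-class centers c_j as in Eq. (2) (without the rounding step)."""
    return np.array([X[assign == j].mean(axis=0) for j in range(M)])

def objective(X, assign, M):
    """Clustering objective of Eq. (1): summed distance of samples to their class center."""
    centers = class_centers(X, assign, M)
    return sum(np.abs(X[assign == j] - centers[j]).sum() for j in range(M))

def update_pheromone(tau, assign, J, rho_min=0.05, rho_max=0.5, iteration=1, max_iter=100):
    """Dynamic evaporation: rho decays over iterations so early search stays diverse."""
    rho = rho_max - (rho_max - rho_min) * iteration / max_iter
    tau *= (1.0 - rho)                       # evaporation
    for i, j in enumerate(assign):           # deposit on the chosen sample->class edges
        tau[i, j] += 1.0 / (1.0 + J)
    return tau

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    X = rng.normal(size=(12, 3))             # 12 fragments, 3-dimensional features
    M = 3
    assign = np.arange(12) % M               # initial assignment: every class non-empty
    tau = np.ones((12, M))
    J = objective(X, assign, M)
    tau = update_pheromone(tau, assign, J)
    print("J =", round(J, 3))
```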
2 Basic Information of the Dining Space

The design object is a newly opened restaurant in the northern urban area of Kunming City, Yunnan Province. The total area of the dining space is about 600 m2, of which the wing rooms account for 25% of the indoor space, about 150 m2. There are three wing rooms in the dining room, each designed for 8–12 people. The booth seats account for 50% of the indoor space, with an area of about 300 m2; there are about 30 booth seats, each designed for 2–6 people. The meaning of the construction network system reliability of a construction engineering project is the ability, in the process of project construction, to safely and effectively reach the predetermined project quality within the required completion period and within the limited cost [1]. Therefore, we can understand the construction reliability of a construction engineering project from the four aspects of cost, construction period, quality and safety, that is, the construction-period reliability, cost reliability, quality reliability and safety reliability of the project. Reliability has both qualitative and quantitative meanings and is generally described quantitatively; it is specially pointed out that in this paper the qualitative and quantitative senses of reliability are used interchangeably and both represent the degree of reliability. The reliability of a construction work unit of an engineering project refers to the ability of each work unit in the construction project to complete the construction task safely and effectively and achieve the predetermined project quality under the required planned cost during construction. The reliability of the construction network system is based on the reliability of each work unit; only when the reliability of each work unit is determined can the reliability of the construction system of the whole project be determined.
3 Design Concept of the Project

The dining room is divided into four areas according to the design concept of the "square". Vertical walls and partitions clearly divide the space into four areas, including the boxes, the toilet and the hall, and make each area independent of the others. The obvious advantage of square functional zoning is that its sense of order conveys the respect of all restaurant staff toward consumers. Most of the restaurant is an open design with little visual hindrance, which gives people a sense of open horizons; thus, people's perception of the dining space is improved psychologically. The opening and closing of the dining space is prominent in this design, and there are no overly heavy partitions; this is similar to the pragmatic attitude and style of the Chinese people and is more acceptable. Figure 1 shows the layout of the dining room space. Each "square" area can be regarded as an independent point, and each "square" dining room can be regarded as a point; the points are arranged in an orderly way to form lines and surfaces in a virtual sense. The "square" dining hall is a place where most people communicate with each other in ordinary social activities, so more "square" designs are used to neutralize it. Moreover, Chinese people like liveliness, so the main hall is designed in an open style with a large capacity and square tables. This type of design can standardize the space and make the space
appear orderly. The floor area is less than that of a round-shape design, and it also helps circulation. The purpose of the "circle" design is the dining environment of the boxes [2]. Consumers generally need to eat in a private environment. The division of the area in the box is still based on the overall "square"; the porch, screens, dining tables and so on use many round shapes as embellishment, so that consumers psychologically soften the square division of the space, while the space division remains simple and clear. Round embellishment keeps the space square without appearing stiff.

Fig. 1. The layout effect of the restaurant
4 Space Design of the Project

4.1 Restaurant Entrance Design

The first things visible on the right side of the door are the welcome area and the cashier area, which are simple, direct and humane. The entrance hall combines regular and irregular arrangements of bamboo and a spray pool, forming an open transition between the hall and the restaurant with a unique interest. The hall is based on the "square" shape design, including a cuboid line-surface ceiling and most of the dining tables. The tables and chairs are placed in an orderly way, which makes entering and leaving convenient, avoids disorder, saves area, and can shorten the distance for customers' dining communication [3].

4.2 Box Design

The overall shape of the box fitting platform is more decorative (decoration is discussed later in the paper and is not detailed here). There are entertainment facilities in the box space, which is divided into several small area units by independent walls or hollow screens, making the overall division of the area more detailed. The hollow screens also make the box appear transparent and practical. Sound-insulation boards are set on the wall surface, so that entertainment such as karaoke or playing mahjong is neither affected by the external environment nor disturbs the outside; this is the design effect of the dining-space boxes.
4.3 Space Moving Line Design

The internal pattern of the dining space should not only reasonably arrange the dining places but also have a complete arrangement of moving lines, which is what we often call the circulation pattern. As a public space, a restaurant design must have clear routes for the flow of people, and the passages should be reasonably arranged; we must make reasonable planning to maximize the use of each function [3, 4]. An interesting moving line is built on a reasonable basis. The moving line also plays a connecting role in the division of space, effectively connecting each functional area so that they are no longer isolated from each other. The moving-line design of a catering space should pay attention to both change and order. In planning the space moving lines, we should analyze the overall function of the dining space in detail and design reasonable and distinctive moving lines from the perspective of ordinary consumers. According to the theme of this design, the space should meet the needs of a large number of consumers, and the layout has been arranged accordingly. On the customer flow route, we should not only make it convenient for consumers to reach each destination, but also make it convenient for waiters to deliver meals to each dining point; most important is the escape safety route.

4.4 The Use of Color

The function of a dining space differs according to its type, but all catering spaces share the same basic functional requirement, that is, to appropriately encourage people to eat. There are two points to pay attention to. One is the color of the space (including the color of the furniture and tableware): warm colors such as yellow, red and green are suitable, because these colors can stimulate the stomach and intestines and easily arouse people's memories of food they have eaten, thus increasing appetite [4]. The second is the color temperature of the light: suitable colored light can make the food look more delicious. Chinese restaurants give people a lively feeling. Of "food and clothing", the first thing to solve is warmth and satiety, so the theme restaurant uses warm colors as its design palette. A large number of warm-color decorations are used in the space, such as light-yellow wallpaper, a solid wood ceiling, and earth-yellow floor tiles, which establish the main color of this theme dining space. Warm colors with high brightness and saturation are expansive colors, which broaden the space psychologically, so that the bright space reduces the feeling of crowding when the customer flow is large. The bright space and furniture give customers a clean and efficient feeling. As a result, the theme restaurant makes appropriate use of simple lights and spotlights with high brightness and high saturation for warm-color lighting.

4.5 Space Decoration Design

Every part of the dining space should consider the customer's feeling after entering the space, so that customers feel cared for in a humane way; the space decoration and the customers interact. The dining space can be vertically divided into more private areas by walls, glass partitions, etc., while translucent partitions, such as lattice screens and
flower-window partitions, can create an environment in which the different dining areas are both separated and shared, and the small purpose-built spaces can form a regional atmosphere in the psychology of consumers.
5 Conclusion

Chinese restaurant design plays an important role in China's hotel construction and catering industry. For example, the Pudong Shangri-La Hotel, located in Pudong New Area of Shanghai, is wrapped in the cement and concrete of modern industrial buildings, while the interior uses many colors rich in Chinese elements, such as red and yellow; the Chinese dining hall is very festive, and solid-wood elements that Chinese people like, such as sandalwood decorations, are everywhere. Chinese food and beverage occupies an important place in Chinese people's minds, so in choosing a theme, restaurant builders often choose a Chinese design as the restaurant's positioning. The theme of a restaurant is closely related to the form of dining. In interior design, designs like the Church of the Light are still relatively rare, but in special space design there are many flat-floor house designs, and using special effects of light in this kind of space can increase the sense of space, distinguish spaces and so on. Although the space seems to be changing all the time, there are also some special designs within it; in fact this is called contrast in design, which will be explained in a separate design discussion later. A small cabinet arranged in order within a disordered space is also an array relationship. After all the public spaces are opened up, not only does the view become wide and the room spacious, but the living habits of each family member gradually become interconnected: the daughter does her homework at the desk, the mother cooks in the kitchen, and the father is on the sofa. Although they do different things, they are connected in the open space. Unlike an ordinary living room that is simply bright, this case uses light and shadow to divide the space. Just as in sculpture, where the bright side and the dark side create a sense of three-dimensionality and art, so it is with space: after the carving of light and shadow, the space becomes more three-dimensional and unique.
References

1. Wu, A.: Exploration on interior space design of apartments in third tier cities. Jiangxi Build. Mater. (10), 85, 87 (2020)
2. Hu, J., Zou, X.: Exploration and application of elastic design in interior space. Sichuan Build. Mater. 46(09), 49–51 (2020)
3. Chen, L.: Exploration and research on contemporary "smooth space" architecture and interior space design. Anhui Archit. 27(08), 3–4 (2020)
4. Xu, L.: Interior design style and core of interior space design. Hosshe (20), 27–28 (2020)
Application of Embedded Real-Time Software in Computer Software Design Bin Yang(B) Gansu Radio and TV University, Lanzhou, Gansu, China
Abstract. Nowadays, with the rapid development of China’s economy, computers have been widely used, so the efficiency of computer software design has been highly valued. In the computer software design, the embedded real-time software is applied to improve the product quality and reduce the defects in the software design. Because the embedded real-time software requires high professionalism and strong practicability, this paper analyzes the characteristics of embedded realtime software in computer software design, and studies its application in computer software design, so as to continuously improve the design and development of computer software in China. Keywords: Computer software · Design · Embedded real-time software · Application analysis
1 Introduction

The connotation of embedded real-time software covers embedded software and the embedded real-time operating system. Embedded software refers to the operating system and development tool software embedded in hardware, which is used to perform the independent functions of a special-purpose computer system. In a broad sense, an embedded real-time operating system is application-centered and based on computer technology, microelectronics technology, hardware technology, control technology and communication technology; it is a special-purpose computer system software with strict requirements on the tailoring, functionality, reliability, cost, volume and power consumption of the hardware and software, and it emphasizes the cooperation and integration of hardware and software. Embedded software and the embedded operating system are inseparable. Because of its strong ability of integration and optimization, embedded real-time software has a wide range of applications, including national defense, industrial control, business office, medical care and other fields; mobile phones, computers, MP3/MP4 players, digital cameras and digital TVs are all products of the intelligent transformation of traditional products using embedded real-time software. Embedded real-time software is widely used in computer software design. It can not only effectively improve the design of computer software, but also improve the quality of the designed products. Therefore, embedded real-time software has a certain superiority and necessity in the field of computers [1]. Therefore, by analyzing the characteristics of embedded real-time software in computer software design, it is very important to further explore its application.

© The Author(s), under exclusive license to Springer Nature Singapore Pte Ltd. 2022 B. J. Jansen et al. (Eds.): International Conference on Cognitive based Information Processing and Applications (CIPA 2021), LNDECT 84, pp. 250–255, 2022. https://doi.org/10.1007/978-981-16-5857-0_31
2 Characteristic Analysis of Computer Embedded Real Time Software Computer system refers to the computer hardware and network system used for database management, which is mainly composed of hardware and hardware subsystem and software and software subsystem. Hardware system is an organic combination of various physical components composed by optical, electrical, magnetic and mechanical operation principles, which is the operation entity of computer system; Software system is the use of a variety of programs and documents to control and command the whole computer system in accordance with the prescribed path. The design of computer system actually refers to the design of computer software, which includes system software, supporting software and application software. System software refers to the control and coordination of computer and its external equipment, and computer software supporting the development and operation of application software, generally including operating system, language processing program, data system and network management system. Application software refers to the computer software developed for a specific field and serving a specific purpose, so in a narrow sense, the design of computer software is to optimize the collection, analysis and processing functions of the operation program, data and related documents of each component of the software, so as to realize the efficient and high-quality operation of the computer system. Embedded real-time software is a running platform which needs the assistance of other hardware and software. In fact, it can be seen everywhere in our daily life. The phones, TVs and digital cameras we use are developed and designed with embedded realtime software. The function of the computer composed of embedded real-time software is much stronger than that of other computers. For example, the quality of computer keyboard, hard disk, screen, even mouse and earphone is much better than ordinary computer. Therefore, the advantages of embedded real-time software bring a wide range of prospects [2]. Embedded real-time software in computer software design can design cache mechanism, dynamic distribution, prediction instruction execution and so on. The design process is to use the most important part of embedded real-time software for operation, that is, its embedded microprocessor. Through the processing of this processor, the processing speed and processing technology can be significantly improved, which can not only improve the quality of the designed products, but also bring reliability to the software design. Software and hardware are two components of embedded realtime software used to control computer software system. Embedded microprocessor can design them, so that embedded real-time software can operate multiple tasks at the same time in the process of computer software design, and complete computer tasks in a short time, without damage to the computer, and has a good protection effect. Therefore, the reliability of embedded real-time software makes its application prospects become more extensive.
3 Reliability Analysis of the ERTS Model

With the continuous development of software technology and the continuous enhancement of hardware capabilities, the scale and complexity of real-time systems have increased dramatically, and how to ensure that the important properties of such systems meet design requirements has become a hot issue. In the development of ERTS it has been found that analyzing and verifying the important attributes of the system model as early as possible in the software design stage can effectively shorten the software development cycle, reduce the occurrence of errors and improve software reliability. In addition, the reliability requirements of real-time systems are very strict: functional failure caused by an unreliable system may cause huge accident losses. Therefore, it is necessary to analyze and evaluate the reliability of ERTS in the software design stage. Although a lot of research has been done on different real-time system software models since the 1960s and 1970s, most of it separates real-time performance from reliability analysis. However, the fault-tolerance mechanism of software often takes part of the real-time performance as its cost, so the real-time factor must be considered when analyzing the reliability of ERTS. The goal here is therefore to address reliability analysis in the ERTS software design stage while considering the interaction between real-time performance and reliability.

3.1 Task Model

Assuming that the worst-case execution time of each task and its replacement task are equal, the real-time behaviour of the system is equivalent to the following: if an error occurs during the execution of task Ti, the task is executed again immediately. Therefore, in the rest of this chapter, unless otherwise specified, the fault-tolerance strategy is assumed to be re-executing a failed task immediately after the error occurs. A task whose execution contains no error is called a perfect task. In order to describe whether the real-time requirement of the task set is satisfied, a function is defined:

$$f(\Gamma, k_1, k_2, \ldots, k_n)=\begin{cases}1, & \text{schedulable}\\ 0, & \text{not schedulable}\end{cases} \tag{1}$$

f(Γ, k1, k2, ..., kn) = 1 means that even if each instance of each task Ti ∈ Γ suffers ki errors during execution [3], in the worst case every task in the task set can still be completed within its deadline.

3.2 Reliability Calculation

Because the static-priority scheduling strategy has good stability when the system is instantaneously overloaded, a large number of ERTS in safety-critical real-time systems use this kind of scheduling strategy. Considering task Tj in task set Γ and all higher-priority tasks Ti (i < j) on the same processor, and assuming that the first mi instances of each higher-priority task Ti can preempt
Application of Embedded Real-Time Software
253
the execution of the first instance of Tj , then: rj =
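The closed-form expression for rj is cut off at this point in the source. Purely for orientation, the following is a minimal sketch of the standard fixed-priority response-time analysis that such a calculation builds on, assuming each task Ti is immediately re-executed ki times after errors; it is not the paper's exact formulation, and the task parameters and function names are illustrative only.

```python
# Hedged sketch only: classical response-time analysis under static priorities,
# with k[i] immediate re-executions of task i per instance as the fault-tolerance
# strategy described in Sect. 3.1. Not the paper's exact r_j derivation.
def schedulable(tasks, k):
    """tasks: list of (C, T, D) = (worst-case execution time, period, deadline),
    sorted from highest to lowest priority; k[i]: assumed error count of task i.
    Returns 1 if every task still meets its deadline in the worst case, else 0,
    i.e. the value of f(Gamma, k_1, ..., k_n) in Eq. (1)."""
    for j, (C_j, T_j, D_j) in enumerate(tasks):
        R = (1 + k[j]) * C_j                       # own execution plus k_j re-executions
        while True:
            # Interference from higher-priority tasks, each also re-executed k[i] times.
            interference = sum(
                -(-R // T_i) * (1 + k[i]) * C_i    # ceil(R / T_i) preemptions by T_i
                for i, (C_i, T_i, _) in enumerate(tasks[:j])
            )
            R_next = (1 + k[j]) * C_j + interference
            if R_next == R:
                break                              # fixed point: worst-case response time
            if R_next > D_j:
                return 0                           # Not Schedulable
            R = R_next
    return 1                                       # Schedulable
```

For example, schedulable([(2, 10, 10), (3, 20, 20)], [1, 0]) checks a two-task set in which only the highest-priority task suffers one error per instance.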
Table 2.

Method   Q1       Classification Q2   Final correct rate Q3
BP       0.9911   0.8503              0.4926
KNN      0.8103   0.9157              0.8655

The accuracy of the best model obtained when the original data are classified directly by BP and KNN after parameter tuning is denoted Q1. When the regression results and the inherent data are combined as the prediction input, the classification accuracy is denoted Q2. The best model is then selected and tested with the test set, and the resulting accuracy is denoted Q3. The experimental results are shown in Table 2. According to the Q1 values in Table 2, the direct classification model on the original data works well, indicating that the hotel occupancy rate prediction and classification model studied in this paper is effective; the accuracy of KNN is slightly lower. When the regression results are fed into the classification input and the hotel occupancy rate classification algorithm is then applied, the resulting accuracy Q2 is also quite high: BP reaches 0.8503 and KNN reaches 0.9157. The model is saved and tested with the test set, and the final accuracy Q3 is shown in the table. It can be seen that this accuracy is lower than Q1 and Q2; KNN still performs acceptably at 0.8655, while BP drops to 0.4926. The analysis shows that classifying after regression prediction reduces the accuracy, which is a reasonable outcome. Nevertheless, it can be seen that the two-level hotel occupancy rate prediction model in this article is effective and valuable.
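To make the two-level structure concrete, a regression stage whose output is appended to the inherent features before classification, here is a minimal sketch. The paper processes its data with Java and trains with Matlab; the Python/scikit-learn code below is only an illustration under assumed estimator choices, and the feature handling and parameters are hypothetical rather than the authors' configuration.

```python
# Illustrative sketch of the two-level occupancy-rate model (regression -> classification).
# scikit-learn stands in for the paper's Java/Matlab tooling; all choices are assumptions.
import numpy as np
from sklearn.neural_network import MLPRegressor      # stand-in for the BP network
from sklearn.neighbors import KNeighborsClassifier   # the KNN classifier

def two_level_predict(X_train, y_reg_train, y_cls_train, X_test):
    # Level 1: regress an intermediate target (e.g. a continuous occupancy estimate).
    reg = MLPRegressor(hidden_layer_sizes=(32,), max_iter=2000, random_state=0)
    reg.fit(X_train, y_reg_train)
    # Level 2: integrate the regression output with the inherent features,
    # then classify the occupancy-rate band (this is what Q2/Q3 in Table 2 measure).
    Z_train = np.column_stack([X_train, reg.predict(X_train)])
    Z_test = np.column_stack([X_test, reg.predict(X_test)])
    clf = KNeighborsClassifier(n_neighbors=5)
    clf.fit(Z_train, y_cls_train)
    return clf.predict(Z_test)
```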
5 Conclusions

With the rapid development of the tourism industry, tourism data centers across the country are gradually being established and the amount of tourism data continues to grow, which provides broad space for the development of the hotel industry. Hotel occupancy rate is an important criterion for measuring the profitability and management ability of a hotel. This paper proposes a hotel occupancy rate prediction and analysis model based on tourism data under big data technology, implements it in an environment where the data are processed with Java and the models are trained with Matlab, and verifies the model's effectiveness.
References 1. Rashidi, T.H., Abbasi, A., Maghrebi, M., et al.: Exploring the capacity of social media data for modelling travel behaviour: opportunities and challenges. Transp. Res. Part C Emerg. Technol. 75(FEB.), 197–211 (2017) 2. Salamanis, A., Kehagias, D.D., Filelis-Papadopoulos, C.K., et al.: Managing spatial graph dependencies in large volumes of traffic data for travel-time prediction. IEEE Trans. Intell. Transp. Syst. 17(6), 1678–1687 (2016) 3. Cherchi, E., Cirillo, C., Juan, D.D.O.: Modelling correlation patterns in mode choice models estimated on multiday travel data. Transp. Res. Part A Policy Pract. 96(FEB.), 146–153 (2017) 4. Joh, C.-H., Timmermans, H., et al.: Applying sequence alignment methods to large activitytravel data sets: heuristic approach. Transp. Res. Rec. 2231(1), 10–17 (2018) 5. Mackay, K., Ampt, E., Richardson, J., et al.: Collecting transport and travel data in the Pacific Islands – Fiji’s first national household travel survey. Road Transp. Res. 26(1), 73–83 (2017) 6. Mcarthur, D.P., Hong, J.: Visualising where commuting cyclists travel using crowdsourced data. J. Transp. Geogr. 74(JAN.), 233–241 (2019) 7. Fairnie, G.A., Wilby, D., Saunders, L.E.: Active travel in London: the role of travel survey data in describing population physical activity. J. Transp. Health 3(2), 161–172 (2016) 8. Sodenkamp, M., Wenig, J., Thiesse, F., et al.: Who can drive electric? Segmentation of car drivers based on longitudinal GPS travel data. Energy Policy 130(JUL.), 111–129 (2019) 9. Kontou, E., Liu, C., Xie, F., et al.: Understanding the linkage between electric vehicle charging network coverage and charging opportunity using GPS travel data. Transp. Res. Part C Emerg. Technol. 98(JAN.), 1–13 (2019) 10. Viglia, G., Minazzi, R., Buhalis, D.: The influence of e-word-of-mouth on hotel occupancy rate. Int. J. Contemp. Hosp. Manag. 28(9), 2035–2051 (2016)
11. Oses, N., Gerrikagoitia, J.K., Alzua, A.: Modelling and prediction of a destination’s monthly average daily rate and occupancy rate based on hotel room prices offered online. Tour. Econ. Bus. Financ. Tour. Recreat. 22(6), 1380–1403 (2016) 12. Assaf, A.G., Tsionas, M.G.: Forecasting occupancy rate with Bayesian compression methods. Ann. Tour. Res. 75(MAR.), 439–449 (2019)
Retention Strategy for Existing Users of Mobile Communications

Ying Ding(B)

Lanzhou Jiaotong University, Lanzhou 730070, Gansu, China
Abstract. At present, the market of mobile communications industry tends to be saturated, and the main source of users is realized through the conversion between communications carriers. The competition among the three domestic carriers for existing users is becoming increasingly fierce. By using precision marketing theory and big data analysis methods, this paper takes China Mobile in G Province as an example to study the retention strategy for existing users of communications operators in the context of facilitating faster and more affordable Internet connection and developing 4G services, analyze the existing problems, and put forward corresponding optimization measures and suggestions. Keywords: Mobile communications carriers · Retention of existing users · Precision marketing · Big data analysis
1 Introduction After more than 20 years of development in mobile communications industry, the market has been basically saturated, and the proportion of pure new users in the users development share has continued to decrease. The focus of users development among three major carriers has been tilted to the existing users of competitors. On May 13, 2015, the Executive Meeting of the State Council reviewed the issue of speeding up the development of information infrastructure and facilitating faster and more affordable Internet connections, so as to help entrepreneurship and innovation and improve people’s livelihood. The Executive Meeting formally put forward the requirements of facilitating faster and more affordable Internet connection for the mobile communications industry. The three domestic carriers responded quickly and announced their respective schemes of increasing the network speed and reducing network tariff [1]. From the implementation of these programs in recent years, it can be seen that the large-scale reduction of network tariff for the entire communications business that the public is looking forward to does not appear obviously. The price reduction is reflected by binding packages or launching new tariff reduction package services [2]. According to the analysis, the new tariff reduction packages are generally promoted among new users, but they have not covered much among existing users. Most of existing users are still using the old tariffs in the early 4G era, or even the 2 and 3G era. Affected by factors such as the network technology level and social development at that time, historical tariffs are generally higher © The Author(s), under exclusive license to Springer Nature Singapore Pte Ltd. 2022 B. J. Jansen et al. (Eds.): International Conference on Cognitive based Information Processing and Applications (CIPA 2021), LNDECT 84, pp. 310–317, 2022. https://doi.org/10.1007/978-981-16-5857-0_39
than current tariffs. This phenomenon reflects that the measures of facilitating faster and more affordable Internet connection have not yet benefited all users in general. At the same time, communications carriers have used this phenomenon to market their own new tariff reduction packages to existing users of competitors and poach users from competitors.
2 Status Quo of the Communications Industry

2.1 Analysis of the External Environment

With the development of 4G services, mobile phone, high-speed data and broadband services have become the most important sources of mobile communications revenue. The once homogeneous services of the three carriers have gradually evolved into differentiated strategies. China Telecom has successfully attracted new users through group customer bundling, broadband bundling and pricing, and its increase in new users has surpassed that of China Mobile. China Unicom ranks first among the three major carriers in the number of new 4G users [3]; it has used Internet cooperation cards to open up new ideas for user development, quickly attracting a large number of young consumers and seizing the first opportunity in the mobile Internet service competition. China Mobile competes with the others through the global sharing of call duration and high-speed data services. However, the above measures quickly became common to all three carriers [4] and do not sustain a long-term tariff advantage. Facing the strong catch-up of the other two carriers, China Mobile in G Province, which has a huge base of existing users (about 60% of the total number of users), faces a severe test in maintaining its leading position in the industry, retaining existing users and keeping its advantage in user share.

2.2 Analysis of the Internal Conditions

2.2.1 User Psychology

The tariff reduction measures launched by China Mobile have been well promoted among new users and among users who follow mobile tariff updates. However, for the absolute majority of users, usage habits are already formed and communications tariffs are basically stable, so they will not take the initiative to learn about the new tariffs. At the same time, users are curious about competing products and are interested in the new, lower tariffs promoted by competitors; after comparing them with their own higher-priced old tariffs, they tend to switch to the tariffs promoted by competitors.

2.2.2 Extensive Management

China Mobile in G Province has a fixed mindset: for a long time the focus has been on developing new users, with insufficient attention to existing users.
2.2.3 Weak Market Sensitivity

Facing the differentiated competition strategies launched by the other two carriers, China Mobile in G Province did not take timely measures, and did not launch corresponding services until the lost users had reached a certain scale. By then, the scale effect of the lost users had triggered the herd psychology of associated users, causing the number of lost users to keep expanding.
3 Precision Marketing Strategy

3.1 Precision Marketing Strategy for Existing Users

To make up for the tariff gap faced by old users, services are launched for each segmented user group, such as a constant package price, free monthly traffic and call duration, free broadband, and resource-sharing rights and interests. In addition, to ensure stable income, free content subscriptions can be provided through tariff transfer, solidifying the users' original income contribution.

3.1.1 Product Segmentation

With reference to the new tariff reduction standards introduced in the current market for facilitating faster and more affordable Internet connection, twelve product tiers for existing users are launched, with prices ranging from 18 RMB to 288 RMB, the same bonus content and different bonus resources; the price of each product is composed of the value of its voice and high-speed data services.

3.1.2 User Segmentation

The segmentation of target customers is the basis of precision marketing. According to the principle of the model and the segmented products, the total cost of voice and high-speed data services generated by each user is taken as the segmentation standard, and big data technology is used to extract the sum of the voice and high-speed data costs generated by the user under the currently ordered product. When this cost falls between two adjacent tiers, the customer is classified into the target user group of the higher-tier product, the aim being to transfer users to new products with more resources at the same price and to compensate for the difference between new and old products.

3.1.3 Implementation of Precision Marketing

For the segmented existing target users, the new tariff content is mainly introduced through outbound calls, the user's intention is inquired about, and the change is handled asynchronously after the call ends. The new tariffs are also publicized and handled for users on site, supplemented by recommendations in the business halls.

3.2 Analysis of Precision Marketing Strategy

The retention products adopt precision marketing theory and were expected to achieve a good marketing effect after being put on the market. But in the first month of implementation,
the success rate of daily marketing was less than 2%. By sorting out the marketing feedback and using big data customer portrait data, this paper analyzes the main reasons for the poor marketing effect of the retention products as follows.

(1) The definition of user requirements is not comprehensive. To ensure that revenue is not affected and to migrate users to new products at low cost, the precision marketing strategy only extracts the user's spending on voice and high-speed data services over a period of time; however, users' needs and the composition of their communications tariffs are complex and changeable, so such a definition cannot fully reflect the user's needs.

(2) No complete channel for the precision marketing strategy has been established. Precision marketing requires a two-way, effective communications channel between customers and the company, with the marketing strategy adjusted according to user feedback [5]. Here the precision marketing channel is mainly outbound calls, which are only used when recommending services to users. Without a user feedback channel, it is impossible to know whether customer needs are accurately grasped or how customers evaluate the related marketing services [6].

(3) A complete, dynamic precision marketing system has not been established. User needs and the market environment are constantly changing, but this precision marketing strategy does not track, analyze or effectively manage the users who have completed the tariff transfer. It therefore fails to actively adjust the products to users' needs and fails to fully realize the circular improvement mode of precision marketing [7].
4 Optimization Measures for Precision Marketing Strategies

4.1 To Comprehensively Analyze Users' Needs and Optimize Product Adaptation Rules of Users

Because tariff standards differ across periods, the historical costs of existing users cannot reflect customers' actual resource usage needs under the current tariffs, and it is somewhat one-sided to use only the cost of voice and traffic services as the user segmentation standard [8, 9]. It is therefore necessary to build a user tariff experience model with big data technology, segment users by consumer characteristics and consumer behavior, and adjust the matching rules between users and segmented products (Table 1). This paper takes Rule 2 as an example to illustrate these matching rules. The original product ordered by a user is a 75 RMB package, and the user's total cost of voice and high-speed data services is 104.34 RMB. Under the original target-product adaptation rules, the 128 RMB product would be the recommended product for this user. A comprehensive analysis of the user's needs through the tariff experience model shows that the original product contains 150 min of voice call time while the actual voice usage is 294 min, and that it contains 150 MB of high-speed data while the actual usage is 62.7 MB; that is, only the voice usage exceeds the package content. Under Rule 2, the 98 RMB product (including 300 min of voice time)
Table 1. Matching rules for users and product segments

Rule 1. Voice resource usage: Super set; Usage of traffic resources: Super set; Total cost of voice and traffic: real_fee.
Rule description: First, the target product is the lowest-grade product that can meet the resource demand, based on the actual voice and traffic resource usage. This is then weighed against the actual consumption: if the difference between the actual cost and the target product's price is within the threshold, that target product is kept; otherwise, the higher-grade product of the two adjacent products whose price range contains the actual cost becomes the final target product.

Rule 2. Voice resource usage: Super set; Usage of traffic resources: Not over set; Total cost of voice and traffic: real_fee.
Rule description: First, based on the actual voice resource usage, the lowest-grade product that can meet the voice resource demand is selected as the target product, and the actual consumption is then weighed. If the price difference between the actual cost and the target product is within the threshold, the target product is kept; otherwise, the higher-grade product of the two adjacent products whose price range contains the actual cost becomes the final target product.

Rule 3. Voice resource usage: Not over set; Usage of traffic resources: Super set; Total cost of voice and traffic: real_fee.
Rule description: First, the lowest-grade product that can meet the traffic resource demand is matched based on the actual traffic resource usage, and the new product is then weighed against the actual consumption. If the difference between the actual cost and the target product's price is within the threshold, that target product is kept; otherwise, the higher-grade product of the two adjacent products whose price range contains the actual cost becomes the final target product.

Rule 4. Voice resource usage: Not over set; Usage of traffic resources: Not over set; Total cost of voice and traffic: real_fee.
Rule description: Based on the user's actual cost, the higher-grade product of the two adjacent products whose price range contains the actual cost is matched as the target product.
can meet the user's needs, and its price difference from the actual consumption is also within the threshold. At the same time, in terms of user psychology, a product priced below the current consumption amount is easier to accept, which improves the success rate of marketing.

4.2 To Establish a Complete Channel for Precision Marketing

Marketing channels such as business halls, outbound calls and text messages should be built up and coordinated with new channels such as apps, official accounts and Internet enterprise channels to expand channel promotion capability and strength. The characteristics of the various channels should also be combined to open effective channels for user feedback, with special departments set up for collection and analysis, so that changes in users' needs are understood actively, the precision marketing scheme is optimized in time, and a complete two-way circular information communications channel is established [10].

4.3 To Establish a Dynamic Precision Marketing System

According to the evaluation of the existing precision marketing strategy and its effect, unstable users account for 9.8% of the successfully converted users, of whom the proportion of customers who terminated activity or went off-line is as high as 75%. This further shows that the original precision marketing strategy is not accurate enough in analyzing users' needs and does not effectively manage the successfully converted users, which increases the hidden danger of user loss. Combined with the tariff experience model, a user stability model is introduced to establish a complete, dynamic precision marketing system (Fig. 1).
Fig. 1. Stock retention precision marketing system
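To make the product adaptation logic of Table 1 (Sect. 4.1) concrete, the following is a minimal sketch of how a target product might be matched from a user's voice usage, traffic usage and actual fee (real_fee). The product tiers, allowances and acceptance threshold below are illustrative placeholders, not the operator's real 12-tier price list, and the code is an assumption-laden reading of the rules rather than the deployed system.

```python
# Hypothetical sketch of the Table 1 matching rules; every number here is illustrative.
PRODUCTS = [  # (price in RMB, voice minutes included, traffic in MB included)
    (18, 60, 100), (38, 100, 300), (58, 150, 500), (78, 200, 700),
    (98, 300, 1024), (128, 500, 2048), (158, 700, 3072), (188, 1000, 4096),
]

def bracket_high(real_fee):
    """Higher-grade product of the two adjacent tiers whose prices bracket real_fee."""
    for low, high in zip(PRODUCTS, PRODUCTS[1:]):
        if low[0] <= real_fee <= high[0]:
            return high
    return PRODUCTS[-1] if real_fee > PRODUCTS[-1][0] else PRODUCTS[0]

def target_product(voice_used, traffic_used, voice_set, traffic_set, real_fee, threshold=30):
    voice_over = voice_used > voice_set        # voice usage exceeds the current package
    traffic_over = traffic_used > traffic_set  # traffic usage exceeds the current package
    if not voice_over and not traffic_over:
        return bracket_high(real_fee)          # Rule 4: match on the actual fee alone
    # Rules 1-3: lowest tier that covers whichever resources are exceeded ...
    covering = [p for p in PRODUCTS
                if (not voice_over or p[1] >= voice_used)
                and (not traffic_over or p[2] >= traffic_used)]
    base = covering[0] if covering else PRODUCTS[-1]
    # ... kept only if its price is close enough to the actual consumption (the "threshold"),
    # otherwise fall back to the tier bracketed by the actual fee.
    return base if abs(real_fee - base[0]) <= threshold else bracket_high(real_fee)
```

With these placeholder tiers, the worked example in Sect. 4.1 (294 min of voice, 62.7 MB of traffic and an actual fee of 104.34 RMB against a 75 RMB package) would resolve under Rule 2 to the 98 RMB tier, mirroring the reasoning in the paper.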
5 Conclusion

This paper studies the situation and retention strategies for existing users in the 4G era against the background of facilitating faster and more affordable Internet connection, and comprehensively analyzes the internal and external market environments for retaining existing users of China Mobile. Taking China Mobile in G Province as an example, it analyzes the problems in its precision marketing strategy and proposes corresponding optimization measures that greatly improve the marketing success rate. The products and services of the mobile communications industry are restricted and controlled by national policies, and product flexibility is weak. To better realize precision marketing, it is necessary to improve precision marketing modeling and big data analysis capabilities and gradually find the marketing point among income, products and user needs through accurate acquisition of user needs, complete marketing channel support, and dynamic, continuous improvement of the precision marketing system. Carriers should also grasp the initiative in users' needs, propose the most accurate and suitable services, and finally achieve effective retention of existing users while continuing to maintain the advantage in user share.
References 1. Facilitating Faster and More Affordable Internet Connection _ Interactive Encyclopedia. http://www.baike.com/wiki/%E6%8F%90%E9%80%9F%E9%99%8D%E8%B4%B9. Accessed 13 Apr 2020
2. Hao, X.: An article to interpret the three major operators’ measures of facilitating faster and more affordable internet connection. http://www.zjknews.com/news/guonei/201505/16/106 958.html. Accessed 16 May 2015 3. Huang, Z.: Research on precision marketing strategy of china unicom in city of Putian in the era of 4G. MBA Education Center of Nanhua University, Hunan (2018) 4. Chen, T., Wang, Z., Chen, K.: Constructing marketing vision to help business operations. Inform. Commun. (10), 257–259 (2016) 5. Li, J.: Design and implementation of stock user management subsystem. Hunan University (2017) 6. Li, X.: Research on the marketing strategy of SZ telecom company’s stock customers. Soochow University (2017) 7. Wu, F.J.: Policy-based “Internet +” under the stock Guangxi Telecom Marketing. TWICE (05), 1–2 + 8 (2020) 8. Chen, J.: Precise marketing of telecom stock management based on big data. Inform. Commun. (10), 229–230 (2019) 9. Zhou, C.: Carriers’ mobile inventory user retention method based on subnet model. Telecommun. Eng. Technol. Standa. 32(07), 71–75 (2019) 10. Yang, Z.: Research on the marketing strategy of BZ telecom company’s stock customers. Yanshan University (2019)
Design and Research of Heterogeneous Data Source Integration Platform Based on Web Services

Yaodong Li(B) and Kai Hou

Guangdong Power Grid Co., Ltd. Power Grid Planning Research Center, Guangzhou 51080, Guangdong, China
Abstract. With the rapid development of computer technology, many scientific research institutions and high-tech companies have established scientific research management systems. But these systems are often not coordinated or planned together: the technical platforms, data structures and data storage technologies they use differ. These differences produce a large number of heterogeneous data sources, so that data cannot be shared between the systems, the efficiency of data exchange drops, and ultimately the overall efficiency of each industry is lowered. In this paper, two high-tech companies are selected as the research objects of the experiment, and the effect of designing a heterogeneous data source integration platform based on Web services is discussed. The results show that the design efficiency and the number of successful designs achieved by Company W, which uses Web services, are higher than those of Company Y, which does not. The highest design efficiency and highest success rate of Company W are 97.81% and 100% respectively, while the corresponding figures for Company Y are 82.93% and 62.5%. Keywords: Web service · Heterogeneous data source · Integrated platform · Design research
1 Introduction The rapid development of computer technology also promotes the rapid development of scientific research management information construction. Many scientific research institutions and high-tech companies have established their own scientific research management system in order to better carry out scientific research work, and they have produced and accumulated a large number of scientific research data in the process of scientific research. Due to the nature and responsibility of these scientific research management institutions, the scientific research data collected and managed are different [1, 2]. However, for the needs of scientific research and sometimes when managing data, other scientific research data will be referred to. In this case, these scientific research management institutions will need to exchange scientific research data, so these scientific research data are related to each other and not exist alone. However, due to the large © The Author(s), under exclusive license to Springer Nature Singapore Pte Ltd. 2022 B. J. Jansen et al. (Eds.): International Conference on Cognitive based Information Processing and Applications (CIPA 2021), LNDECT 84, pp. 318–324, 2022. https://doi.org/10.1007/978-981-16-5857-0_40
number of scientific research management systems and the large volume of scientific research data, it is difficult to exchange data safely and effectively between scientific research institutions, and data sharing among them remains a difficult problem. The emergence of Web Services provides a powerful means of solving this problem [3, 4]. A heterogeneous data source integration platform can realize data and information sharing among major scientific research institutions because it provides a unified data access portal for each of them. Redundant data and the repeated development of similar functions among the various scientific research management institutions greatly hinder the development of scientific research systems, make it difficult for many researchers to make full use of the value of existing data, and also lead to inefficient management of scientific research. A Web service is a remote invocation technology that works across languages and platforms. Researchers can use Web technology to establish a connection between a data source system and a heterogeneous data source integration platform located in different places, so that the two can interoperate [5, 6]. Web services can also be used to overcome the communication shortcomings of middleware methods, so their advantages and effects are considerable [7, 8]. The emergence of Web services has provided great help for data exchange among major scientific research institutions: it can provide most of the functions required by information services for each institution, solve the data sharing problem between different heterogeneous data source integration platforms, and help the data management systems of the various scientific research institutions to exchange data safely and effectively [9, 10].
2 Technical Research on Heterogeneous Data Source Integration Based on Web Services

2.1 Web Services

A Web service is a distributed, service-oriented computing architecture. Compared with traditional integration technology and distributed object technology, Web services can realize data sharing among various data systems, across regions and across source languages. They offer good encapsulation, high integration, loose coupling and high extensibility, so they can provide standard program interfaces for computer systems, cross firewalls, and re-integrate software and data. Web services can therefore be widely used in many fields, and the data involved can be heterogeneous. Using Web service technology, the problem of data sharing between heterogeneous data source platforms can be solved.

2.2 Heterogeneous Database

A heterogeneous database is composed of a large number of database systems. It can not only access data efficiently but also share data thanks to the correlations between the systems. In addition, because the individual databases are highly autonomous,
each independently manages its own database module while remaining related to the others, which ensures the integrity and security of the heterogeneous database. It also realizes transparent access to data: users can regard it as an ordinary database and access it in their own way, caring only about how to access the data and not about how the data are managed. In other words, as in object-oriented design, one only needs to pay attention to how it is used without paying attention to the details. Transparent access is realized by means of many middleware technologies.

2.3 Data Integration Technology

(1) Federated Database Method. A federated database is the product of combining multiple databases and is a special form of multi-database system. The individual databases are usually highly autonomous and cooperate within a set. The principle of cooperation is to establish connections between the databases and realize mutual access through the interconnecting network, so as to achieve data sharing. From a mathematical point of view, sharing data among A databases requires roughly A(A-1) pairwise mapping rules, so when there are many data sources the implementation becomes complicated and tedious, and this method is not recommended.

(2) Middleware Method. Different from the federated method, here the data remain stored in the various data sources and the system only provides a query mechanism based on a virtual integrated view. Users send requests through a unified integrated processing mechanism, and the system automatically responds to the requests and transforms them into queries against the heterogeneous data sources. The biggest difference between the two methods lies in the structure: the middleware method is mainly realized through middleware and wrappers. The middleware is equivalent to a forwarding station: after processing the user's query request, it processes the data returned by the wrappers, resolves conflicting data and returns the result to the user. As the name suggests, a wrapper encapsulates the underlying data into a pre-defined data model. In this method the data found by users are always current, because remote data do not need to be stored locally, so it is suitable for integrating heterogeneous data sources that are numerous, highly autonomous and frequently updated. However, this method is only suitable for structured data and has difficulty integrating semi-structured data. Moreover, for distributed data sources that must be accessed through the Internet, interoperable communication components are lacking and it is difficult to ensure data security.

(3) Data Warehouse Method. Through the database query mechanism provided by the system, users query a data center that gathers the required data sources to obtain the data they want; this is the data warehouse method. The system needs to process and maintain the data in the data warehouse. Because filtering, transformation and other preprocessing take a long time before the data are stored in the warehouse, the data queried by users lack timeliness, so this method is not suitable when the heterogeneous data sources change frequently.
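As an illustration of the middleware method described above, the sketch below shows a toy mediator (the "forwarding station") sitting over two wrappers that map differently structured sources onto one pre-defined record model. The source classes, schemas and field names are hypothetical, not part of the platform described in this paper.

```python
# Toy mediator/wrapper sketch of the middleware integration method; all schemas hypothetical.
from abc import ABC, abstractmethod

class Wrapper(ABC):
    """Encapsulates one data source behind a common, pre-defined record model."""
    @abstractmethod
    def query(self, criteria: dict) -> list:
        ...

class SqlProjectWrapper(Wrapper):
    """Wraps a relational source; translates the unified criteria into its SQL dialect."""
    def __init__(self, connection):
        self.conn = connection
    def query(self, criteria):
        rows = self.conn.execute(
            "SELECT id, title, year FROM project WHERE year >= ?",
            (criteria.get("year_from", 0),),
        ).fetchall()
        return [{"id": r[0], "title": r[1], "year": r[2]} for r in rows]

class RestProjectWrapper(Wrapper):
    """Wraps a Web-service source; 'fetch' is any callable that performs the HTTP call."""
    def __init__(self, fetch):
        self.fetch = fetch
    def query(self, criteria):
        payload = self.fetch("/projects", params=criteria)
        return [{"id": p["projectId"], "title": p["name"], "year": p["year"]}
                for p in payload]

class Mediator:
    """The 'forwarding station': fans one virtual-view query out to every wrapper and merges."""
    def __init__(self, wrappers):
        self.wrappers = wrappers
    def query(self, criteria):
        merged = {}
        for wrapper in self.wrappers:
            for record in wrapper.query(criteria):
                merged.setdefault(record["id"], record)  # naive conflict resolution: first wins
        return list(merged.values())
```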
2.4 Operation Methods Involved in This Experiment

We will use some algorithm formulas in the statistical calculation of experimental data, as well as some operation formulas in the collection and analysis of experimental data. These formulas provide great help for our collection, statistics and calculation of experimental data. The following are the formulas applied in this paper:

\[ x = \frac{-b \pm \sqrt{b^2 - 4ac}}{2a} \qquad (1) \]

\[ e^x = 1 + \frac{x}{1!} + \frac{x^2}{2!} + \frac{x^3}{3!} + \cdots, \quad -\infty < x < \infty \qquad (2) \]

\[ (x + a)^n = \sum_{k=0}^{n} \binom{n}{k} x^k a^{n-k} \qquad (3) \]
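As a quick sanity check of formulas (1)-(3), the short snippet below evaluates each of them numerically; it is illustrative only and not part of the platform implementation described in this paper.

```python
# Numeric check of formulas (1)-(3); illustrative only.
import math

def quadratic_roots(a, b, c):                            # formula (1)
    d = math.sqrt(b * b - 4 * a * c)
    return (-b + d) / (2 * a), (-b - d) / (2 * a)

def exp_series(x, terms=20):                             # formula (2), truncated series
    return sum(x ** k / math.factorial(k) for k in range(terms))

def binomial_expand(x, a, n):                            # formula (3)
    return sum(math.comb(n, k) * x ** k * a ** (n - k) for k in range(n + 1))

assert quadratic_roots(1, -3, 2) == (2.0, 1.0)           # roots of x^2 - 3x + 2
assert abs(exp_series(1.0) - math.e) < 1e-9              # e^1
assert binomial_expand(2, 3, 3) == (2 + 3) ** 3          # 125
```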
3 Experimental Research on Heterogeneous Data Source Integration Platform Design Based on Web Services

3.1 Selection of Experimental Subjects

To study the effect of Web services on the design of a heterogeneous data source integration platform, we selected two Internet companies, Company W and Company Y, as the research objects of this experiment. The two companies adopted different approaches when designing their heterogeneous data source integration platforms: Company W used Web services for the design, while Company Y still adopted the conventional design method.

3.2 Implementation Steps of the Experimental Research

To study whether and how Web services affect the design of a heterogeneous data source platform, we selected relatively easy-to-obtain and relatively scientific experimental data as the judgment standard of this experiment. We therefore took the platform design efficiency and the number of successful platform designs of the two companies as the measurement indexes of this study. At the same time, to ensure the reliability of the experimental data, we carried out a number of experimental tests to obtain multiple sets of results, which were then recorded, counted and analyzed.
4 Experimental Analysis of Heterogeneous Data Source Integration Platform Design Based on Web Services

4.1 Platform Design Efficiency Comparison of the Two Internet Companies

We conducted five groups of experimental surveys on the design efficiency of the heterogeneous data source integration platforms of the two companies, with the following results:

Table 1. Comparison of platform design efficiency between the two internet companies

Group         Company W   Company Y
Group one     83.76%      77.88%
Group two     87.55%      72.36%
Group three   88.93%      79.80%
Group four    93.64%      82.93%
Group five    97.81%      80.71%
Fig. 1. Comparison of two platform design efficiency between the internet companies
From Table 1 and Fig. 1 we can clearly see the design efficiency of the two companies. For Company W, the design efficiency is 83.76% in the first group, 87.55% in the second group, 88.93% in the third group, 93.64% in the fourth group and 97.81% in the fifth group; for Company Y, it is 77.88% in the first group, 72.36% in the second group, 79.80% in the third group, 82.93% in the fourth group and 80.71% in the fifth group. Although neither company's design efficiency curve is a straight line, their trends differ. Company W's curve rises throughout, while Company Y's is unstable: it falls from the first group to the second group, rises from the second group to the fourth group, and falls again after the fourth group. This shows that Company Y's design efficiency is in an unstable state, and from the data in Table 1 and the positions of the two curves in Fig. 1 we can see that Company Y's design efficiency is lower than that of Company W.

4.2 The Number of Successful Platform Designs of the Two Companies

In this experiment, we conducted eight tracking tests every month on the heterogeneous data source integration platform design work of the two companies and counted the number of successful designs. The investigation lasted four months, and each month formed one group. The detailed results are shown in Table 2 and Fig. 2.

Table 2. The number of successful platform designs of the two companies

Group         Company W   Company Y
Group one     6           3
Group two     7           2
Group three   6           5
Group four    8           4
Fig. 2. The number of successful platform designs of the two companies
We can see from the chart above that the numbers of successful designs of the two companies are not the same. For Company W, the number of successful designs is 6 in the first group, 7 in the second group, 6 in the third group and 8 in the fourth group, giving a success rate of 100% in that group. For Company Y, the number of successful designs is 3 in the first group, 2 in the second group, 5 in the third group and 4 in the fourth group; among these four groups, the highest success rate is 62.5%. Therefore, judging from the numbers of successful designs of the two companies, Company W succeeded far more often than Company Y, which also shows that Web services can bring positive effects to the design of heterogeneous data source integration platforms and can improve the design success rate.
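Since each group consists of eight tracked design attempts, the quoted success rates follow directly: Company W's best group succeeds in 8 of 8 attempts (8/8 = 100%), while Company Y's best group succeeds in 5 of 8 (5/8 = 62.5%).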
5 Conclusions

In summary, this paper investigates the design of a heterogeneous data source integration platform based on Web services. The experimental results show that Web services can greatly improve the design efficiency and the number of successful designs of such a platform, so it is advisable to apply Web services to its design. With Web services, mutual access between heterogeneous data sources is greatly simplified, which facilitates data exchange between them, and heterogeneous data sources can be extended through Web services to form new heterogeneous data sources. The resulting service system solves the difficulties encountered in data exchange between multiple databases.
References 1. Du, Y., Xing, W.: Research on online examination system integrating heterogeneous data sources. Softw. Eng. 20(002), 47–49 (2017) 2. Wan, X., Yao, Q.: Construction of data and application integration platform based on heterogeneous system. Med. Health Equip. 037(002), 61–63 (2016) 3. Wang, X., Zhang, Y., Xu, M., et al.: Development of agricultural information integrated network management platform based on heterogeneous data integration technology. J. Agric. Eng. 33(23), 211–218 (2017) 4. Xie, X., Zhang, B., Zhao, G.: Analysis and integration of heterogeneous data source integration technology of digitization oilfield. China Manage. Inf. 21(380 (14)), 40–41 (2018) 5. Zhang, D.: Research on the design of hospital information system integration platform. Med. Inform. 40(9), 21–25 (2019) 6. Lu, X.: Research on heterogeneous data source integration technology based on online examination system. Fujian Comput. 34(06), 88–89+143 (2018) 7. Qian, Y., Shi, Q.: Research on University decision support service platform based on multisource heterogeneous data sources. China Educ. Inform. (005), 50–53 (2020) 8. Xiao, Y., Xie, G., Zhen, L.: Framework design of ecological technology platform and integrated evaluation system. J. Resour. Ecol. 008(004), 325–331 (2017) 9. Xu, C.: Many research on enterprise data integration technology of logistics chain for industrial chain collaboration platform. Heilongjiang Sci. Technol. Inform. (002), 155–156 (2017) 10. Wang, C., Guo, Y., Tan, C., et al.: Research on construction and application of data warehouse for infrastructure projects. China Manage. Inform. 19(021), 163–168 (2016)
Ethics of Robotics Applications

Kai Li1 and Zhen Meng2(B)

1 Department of Marxism, Sichuan University, Chengdu, Sichuan, China
2 Party School of the Central Committee of CPC of Ili Kazak Autonomous Prefecture
(Academy of Governance of Ili Kazak Autonomous Prefecture), Yining, Xinjiang, China
Abstract. Although artificial intelligence (AI), especially robotics technology, has grown rapidly and been applied in many areas, bringing numerous positive outcomes, it has also resulted in many ethical concerns, most notably in the development of AI and robot interaction technology. To better realize the "benign interaction between man and machine" and open a new era of intelligence in which man and machine coexist harmoniously, it is necessary to coordinate efforts in strengthening legislative research, formulating ethical standards, improving safety standards, establishing a regulatory system, and promoting global governance in order to effectively prevent and respond to the multiple ethical issues caused by robots in the process of design, R&D, production, and use. Keywords: Internet · Artificial intelligence · Robotics · Ethical issues
At the Conference of Academicians of the Chinese Academy of Sciences and Chinese Academy of Engineering in 2014, General Secretary Xi Jinping said that robots could be the entry point and growth point of the ‘third industrial revolution’ [1]. The immense wealth generated by the “robot revolution” and the vast market it opens up have validated that prediction. At present, with the rapid advancement of artificial intelligence (AI) technology, the application and development of robotics has also been accelerated, resulting in both positive and negative outcomes. On the one hand, robots play a constructive role in fostering people’s emancipation to some extent as extensions of human labor tools and body organs. On the other hand, robots also cause a series of ethical issues that must be addressed, which has aroused widespread concern and encourages in-depth research in the international community.1 Therefore, successfully addressing the ethical concerns, rising up to the challenges, and making robotics better support the growth of human society, have enormous and far-reaching theoretical and practical implications for ushering in the harmonious age of human-computer interaction. 1 For example, at the first International Symposium on Robo-ethics in Sanremo, Italy in January
2004, the word “Robo-ethics” was formally proposed. The Report of COMEST on Robotics Ethics jointly released by the United Nations Educational, Scientific and Cultural Organization (UNESCO) and the World Commission on the Ethics of Scientific Knowledge and Technology (COMEST) in 2017, the Development Plan on the New Generation of Artificial Intelligence issued by China in 2018, the National Artificial Intelligence Research and Development Strategic Plan updated by the United States in 2019 have all explained the ethical issues of robots. © The Author(s), under exclusive license to Springer Nature Singapore Pte Ltd. 2022 B. J. Jansen et al. (Eds.): International Conference on Cognitive based Information Processing and Applications (CIPA 2021), LNDECT 84, pp. 325–330, 2022. https://doi.org/10.1007/978-981-16-5857-0_41
1 Overview of the Application of Robotics

The Czech writer Karel Čapek coined the word "Robot" in Rossum's Universal Robots; in Czech, it refers to slaves, servants, or those who are compelled to serve others. The practical applications of robotics can be traced back to the invention and use of automated devices, such as the Babylonian clepsydra as a chronometer, the compass, the Wooden Ox and Gliding Horse (a fleet of wheelbarrows used by Zhuge Liang's army to carry materials and food), and the ancient Greek automata. With the continuous optimization of robotics, robots have gone through several phases, from single-domain bodies to multi-domain complexes, and their types have become increasingly specialized. At the macro level there are two types of robots, industrial robots and service robots; at the micro level, five different types can be distinguished. First, there are industrial robots. The term "industrial robot" was first proposed by the United States to describe "an automatic machine of modern manufacturing which is integrated with mechanical, electronic, manipulation, computer, sensor and AI technology." They are capable of doing dangerous, difficult work and working in harsh environments, which is beneficial not only to increasing labor productivity but also to speeding up modern industrialization. The second kind is military robots: mechatronic automatic devices specifically used in the military field that have certain functions identical to those of the human body, allowing them to perform certain military tasks in people's place [2]. They have various uses and huge military potential in achieving military targets. Third, service robots. Service robots, products of the modern rapid pace of life and an aging population, are "semi-automatic or fully automatic robots" often used to replace or assist humans in cleaning, nursing and other activities; they can also be used in equipment maintenance and in fields such as firefighting, tour guiding, security, housekeeping, nuclear processing and companionship. The fourth is medical robots, which are used in hospitals and clinics to support physicians in performing certain advanced and complex medical tasks. "Assistant robots for brain neurosurgery operations, aided endoscopic surgical robots, disability robots, rescue robots, prosthodontic robots, remote-controlled robots, and hospital intelligent errand robots" [3] are the most common divisions of labor. Fifth, there are space robots, such as drones, spacecraft application robots and planet detection robots, "that perform operational tasks in space, primarily used for space construction and assembly" [4]. Looking at the basic stages of robot creation, the classification of robots has clearly progressed from simple to complex. Currently, as science and technology advance, robots are becoming more prevalent in many areas of human life and production. As a master of technology, the "intelligent robot" can now adapt to a variety of environments and perform various tasks, leading to the growth of productivity and the realization of a better life. However, the empirical fact that certain accidents and ethical problems occur in robot applications should be carefully considered. In 1989, for example, intelligent robots injured humans, and the IBM "Deep Blue" killed people by releasing electric current after being humiliated by chess masters.
Much of this has served to remind people of the risks of numerous ethical problems arising from the use of robotics. As a result, robots can only better support the future growth of human society if a thorough review is conducted and appropriate solutions are proposed.
2 Specific Manifestations of Ethical Issues in the Application of Robotics Robotics is a kind of “high-tech that includes computer science, artificial intelligence, bionics, cybernetics, mechanism, information sensing technology and other disciplines”. [5] While various types of robots built with this as the center can play a positive role in the real situation, the limitations of AI and robot interaction technology often produce some ethical issues and negative effects that should not be overlooked. 2.1 Ethical Issues in the Development of Artificial Intelligence Technology First, smart manufacturing technology may raise ethical concerns. To begin with, some conventional jobs and labor are replaced by intelligent robots, increasing people’s anxiety. Factories must take industrial robots work continuously for 24 h to pursue high profits and increase work productivity, causing many on-the-job employees to feel overwhelmed and concern that they will be replaced because of their lower abilities as compared to robots. To some extent, this will have an effect on the factory’s overall performance. Furthermore, in smart factories, there is reciprocal exclusion in humancomputer interaction or human-computer collaboration. Some smart factories still rely on touch panels for human-computer interaction. When workers see the advantages of robots that are “efficient and full of beans,” they will eventually become concerned that their jobs will be replaced one day in the future, and they will feel dissatisfied or hostile toward the robots. When such frustration builds up to a certain point, it explodes, potentially resulting in human-machine animosity. Second, intelligent service technology, also known as intelligent service machines, which refers to robot applications in the service sectors, may raise the following three ethical concerns. Robot toys, for starters, trigger ethical issues. Robot toys that are becoming increasingly intelligent will help children grow up happy, stimulate imagination, and improve their intellect. Such toys, in particular, will assist the only child in developing a positive character. However, the ethical questions that it raises cannot be overlooked. For example, a closer friendship with a robot dog can obstruct the socialization process by separating a child from peers in the real world. In addition, the loss or damage of the beloved robot dogs or toys may cause mental harm to this child, resulting in some bad or inappropriate behaviors. The second concern is caused by nursing robots. Despite the fact that the nursing robot will care for, assist, and protect the elderly, the elderly may develop an emotional dependency on the machine as time passes. When the nursing robot has abnormal conditions, they may feel bereft, anxious, and possibly depressed. Another cause of ethical issues is the use of drones. “Black flight” of drones, for example, will disrupt regular aircraft operations, triggering delays or cancellations. Criminals can also use drones to monitor the privacy of others, especially of female groups. This not only jeopardizes personal interests, but it also creates social panic and has a direct impact on social security and stability. 2.2 Ethical Issues Caused by Robot Interactive Technologies First, there are robot interactions. It primarily represents the desires or demands of designers or creators, which are driven by the utilitarian concept. Swarm robotics, for
example, allows robots to interact and sense one another. When a robot experiences an emergency, such as a traffic jam or a deviation from the path, nearby robots may detect and fix it automatically. If the developers put these swarm robots in a pentagonal shape, then the interaction willingness between robots or individual robots is a manifestation of utilitarianism. However, the issue is whether robot interactions pose a danger to designers or consumers, or whether robot interactions in smart factories would result in potential worker strikes or robot killings. While it is unclear if such ethical and moral concerns will arise as intelligent robot technology advances and its applications expand, more active attempts should be made to address these issues in depth in order to prevent such issues to the greatest extent possible. The second issue is caused by human-computer interactions. Not only utilitarian philosophies, but also questions of morality and power, should be considered when developing human-computer interactions. Consider the case of “nursing robots”. First, the use of medical robots will decrease the circle of activities of the elderly to a certain extent, which could lead some elderly people to develop relatively withdrawn personalities, or cause the elderly to spend all their days interacting with robots. Second, since a nursing robot tracks and cares for the elderly in real time, it has Internet and big data functions, which could expose the elderly to privacy and security concerns about personal information leakage. Studies have shown that information leakage events in the era of Internet and big data have skyrocketed from “2, 323 in 2012 to 5, 183 in 2019”. [6] In addition, when doctors use robots to perform surgical procedures, it is very likely that malfunctions in the robot system will result in the patient’s death or permanent disability. Who should be held accountable for these accidents? Is it true that the medical robot raises or lowers medical costs? Will people be deprived of human treatment as a result of the promotion of hospital mechanization using assistant robot technology? Are nurses in charge of nursing or are nurse assistant robots in charge? Will the latter’s use in the medical field lead to new psychological and physical dependence? With the continued advancement of robotics, certain issues must be addressed and solved.
3 Strategies to Deal with the Ethical Issues of Robotics Applications Any ethical questions arising from the use of robotics are objectively present. Indeed, “people have started to think about social ethical concerns such as the social positioning of robots, issues of free will, ethical actors, and issues in human-robot interactions” [7], and some forward-looking and effective avoidance schemes and clear directions have been proposed. In order to properly address the ethical challenges raised by robotics applications and make them safer and more controllable, a number of concerted steps must be taken, including enhancing legislative research, developing ethical codes, improving safety standards, creating a regulatory framework, and fostering global governance. First, strengthen legislative research to achieve a positive relationship between law and technology. The use of robotics in many fields has certainly promoted the productivity and the liberation of people to some degree, but it has also caused some social problems that cannot be ignored. This necessitates using the full force of the law to effectively govern, direct, and facilitate the safe applications of robotics, and realizing
the “benign relationship between law and technological advancement”. In particular, sound legal frameworks and detailed laws and regulations are needed to fairly restrict the reach of robotics usage and application “borders” with clear provisions. The most important thing is to ensure that people are prioritized and that their rights and wellbeing are prioritized. It is also important that the applicable laws are not static, but must be dynamic as such changes are required to adapt to the emergence of new circumstances as robotics technology matures. Second, formulate ethical principles and apply them throughout the entire process of the robotics applications. The establishment of reasonable ethical standards is an important prerequisite for the harmonious coexistence of humans and robots. The public’s acceptance of robots will only be increased if the issue of how robots “make decisions” is solved. In reality, it has been the subject of in-depth discussions in all sectors of society, and certain results have been achieved. For example, Chinese Prospects for the Standardization of Robot Ethics, published by Peking University Press, examines the issue of robot ethical norms and offers some ideas and reference value for the human-robot symbiosis. When formulating related ethical guidelines, it’s important to have a clear objective in mind that all robot activities should adhere to human basic value standards and acceptance patterns within the constraints of ethical and moral standards. At the same time, in order to prevent the existence of robot ethical issues to the greatest extent possible, it is important to reinforce the normative position of ethical codes and have them pervade all aspects of robot research and development, production, and application. Third, improve safety standards and promote safety certification of artificial intelligence products. One of the most important concepts to remember in the development and use of AI products is safety. The “only way to ensure artificial intelligence safety” is to effectively improve artificial intelligence safety standardization. [8–10] There are at least two ways to boost its safety performance while avoiding ethical problems. To begin, a greater emphasis must be placed on improving AI and related robot research and development standards, such as chip design, fingerprint security, biometric information security safety, and others, as well as continuously improving the system’s security from a technical standpoint. Second, to effectively prevent any controllable issues from being out of reach and the occurrence of moral and ethical problems, efforts must be made to actively improve the safety certification of intelligent systems, strictly control research and development efficiency, and inspect when goods leave the factory. Fourth, establish a supervisory system and improve AI technology and product oversight. The use of intelligent robots has a high degree of safety after strict checks at the R&D stage and when goods leave factories. However, because AI technology is still in the early stages of development, and because the intelligent system is continually learning and experimenting in its implementation, certain issues or possible hazards may not be immediately apparent. As a result, increasing monitoring and establishing a supervision framework are critical steps toward successfully improving safety and resolving ethical concerns. 
To realize the entire process and all-around supervision of AI and related intelligent products, it is first important to create an open and transparent artificial intelligence supervision framework. Second, to better control the robot industry’s actions and encourage AI companies to uphold a high level of self-discipline, penalties
for breaches should be increased, and abuses of artificial intelligence technology for the sake of profit should be seriously penalized. Fifth, promote global governance and work together to address the threats and challenges posed by AI. The invention of AI is a significant step forward for human society, and it has emerged as a pioneer and accelerator of economic and social progress. However, the multifaceted threats and multi-field challenges it presents require countries to confront them jointly. First, countries should uphold the principle of a “scientific community” and actively engage in research on and governance of robot ethics, safety, and related issues. Second, it is important to continue building an international platform for cooperation and exchange on which countries can make greater joint efforts, through closer interaction and deeper cooperation, to promote the growth of human society and help people in all countries live better and happier lives.
References
1. Xi, J.: Speech at the Seventeenth Academician Conference of the Chinese Academy of Sciences and the Twelfth Academician Conference of the Chinese Academy of Engineering, pp. 7–8. People's Publishing House (2014). (in Chinese)
2. Intellectual Property Office of Guangdong Province: Patents Leads Industrial Innovation, Strategic Emerging Industries Patent Navigation Series Set of Guangdong Province (1), p. 204. Intellectual Property Publishing House (2018). (in Chinese)
3. He, X.: College Military Theory Course, p. 163. Beijing Institute of Technology Press, Beijing (2018). (in Chinese)
4. Yang, Z.: Made in China 2025, High-end CNC Machine Tools and Robots, p. 109. Shandong Science and Technology Press (2018). (in Chinese)
5. Dong, K., Liu, M.: Imitating Human Intelligence: The Development of Robot and Artificial Intelligence, pp. 49–68. Shanghai Jiao Tong University Press, Shanghai (2004). (in Chinese)
6. Gu, M., Zhao, H., Dong, T. (eds.): Technology and Application of Service Robot, p. 137. Southwest Jiao Tong University Press, Chengdu (2019). (in Chinese)
7. Wang, L., Zhang, Y., et al.: Introduction to Electronic Information Science and Engineering, p. 270. Tsinghua University Press, Beijing (2014). (in Chinese)
8. Chen, C.: Ethical conflicts and countermeasures of intelligent manufacturing. J. Hubei Polytech. Univ. (Hum. Soc. Sci.) (2), 65 (2021). (in Chinese)
9. Shaoyuan, W.: A new field of applied ethics: a review of robot ethics abroad. J. Dialect. Nat. 4, 147–151 (2016). (in Chinese)
10. Big Data Security Standards Task Force of National Information Security Standardization Technical Committee: White Paper on Security Standardization of Artificial Intelligence (2019). http://www.cesi.cn/201911/5733.html. (in Chinese)
The Effectiveness of Technical Analysis in the Era of Big Data
Zhilei Jia(B)
School of Economics, Shanghai University, Jiading District, Shanghai, China
Abstract. Some studies have pointed out that China's capital market has reached a weak-form efficient state. Under the weak-form efficient market hypothesis, technical analysis should then be useless. However, with the growing prevalence of big data, technical analysis is still favored by institutional investors, and more and more retail investors have begun to learn it. Given that the efficient market hypothesis has been accepted by the majority of scholars, this article reflects on this phenomenon from the perspective of behavioral finance and argues that technical analysis methods remain applicable to some stocks in China. However, as China's securities market develops and matures, investors will gradually become more rational, and technical analysis methods will eventually be abandoned even as they help push the market toward efficiency. Keywords: Efficient market hypothesis · Technical analysis · Behavioral finance · Big Data
1 Introduction Since Eugene Fama put forward the efficient market hypothesis, scholars have conducted a large number of empirical tests of the theory. For example, in China's capital market, Song Songxing and Jin Weigen [1] conclude that the Shanghai stock market has reached a weak-form efficient state. However, some theories question the hypothesis. Among them, the biggest challenge comes from behavioral finance, for instance the existence of the momentum effect. Many scholars have made progress in research on the momentum effect in securities markets. For example, research by Cheng Bing [2] and others has shown that there is an obvious profit momentum phenomenon in China's stock market, and that the momentum effect is more significant when the market is in a bull stage. Liu Xiaolei [3] grouped stocks by cumulative abnormal return, and the empirical results show that a momentum trading strategy still performs well in China. Nevertheless, with the continuous advancement of empirical research at home and abroad and the gradual development of theory, scholars have increasingly recognized the efficient market hypothesis and regard behavioral finance as an extension of and supplement to Fama's theory of market efficiency. Since the market is efficient,
technical analysis should also withdraw from the stage of history, but in reality this is not the case. One possible explanation is that when the market is in an efficient state technical analysis is indeed useless, but real-world empirical studies cannot cover all the stocks in the market, so technical analysis methods still apply to the stocks excluded from these studies as well as to newly listed stocks. Besides, with the advent of the big data era, the wave of technological innovation represented by cloud computing has made it easier to use data that previously seemed difficult to collect and exploit. For example, Xi Wenshuai [4] discussed how to invest properly in stocks in the context of the rise of big data; Wang Chunfeng [5] and others used a financial data system they developed to mine media information and discussed its influence on stock returns. Through big data processing, technical analysis can be applied more widely to stock information that previously could not be processed, and the prices of these stocks will gradually converge to their intrinsic values. At that point, technical analysis may gradually fall out of use.
2 Efficient Market Hypothesis and Technical Analysis Eugene Fama proposed the efficient market hypothesis in 1970. The hypothesis holds that if all stock-related information is quickly captured by the market, the market will reach an efficient state. It rests on several preconditions: the market is frictionless and fully competitive, the cost of information is zero, and investors are rational, homogeneous, and pursue utility maximization. Specifically, there are three degrees of market efficiency: a weak-form efficient market whose prices already reflect historical information, a semi-strong-form efficient market whose prices reflect all public information, and a strong-form efficient market whose prices reflect all information (Fig. 1).
Fig. 1. Information set in three states
When the capital market is in a weak-form efficient state, all historical information is fully reflected in market prices, and technical analysis of stock prices should therefore be useless. Since technical analysis is useless in a weak-form efficient market, why do so many investors still favor it? To understand this, the article first introduces technical analysis. Technical analysis uses chart analysis to study market behavior in order to predict future price trends [6]. The earliest method of technical analysis was founded by the American Charles Henry Dow, and his followers Nelson, Hamilton, and others carried it forward and summarized it as Dow Theory.
Technical analysis rests on three premises: (1) market behavior absorbs and reflects all information; (2) prices move in trends; (3) history repeats itself. Although technical analysis has been widely debated, it is undeniable that its methods are still enthusiastically pursued by many investors, which may be related to investor behavior.
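To make the flavor of such rule-based chart analysis concrete, the short sketch below computes a simple moving-average crossover signal from a daily closing-price series. This is only an illustrative example of a common technical indicator, not a method examined in this paper, and the window lengths are arbitrary assumptions.

```python
# Minimal sketch of a moving-average crossover indicator.
# Window lengths (5 and 20 days) are illustrative assumptions only.

def moving_average(prices, window):
    """Trailing simple moving average; None until enough history exists."""
    out = []
    for i in range(len(prices)):
        if i + 1 < window:
            out.append(None)
        else:
            out.append(sum(prices[i + 1 - window:i + 1]) / window)
    return out

def crossover_signals(prices, short=5, long=20):
    """'buy' when the short MA crosses above the long MA,
    'sell' when it crosses below, 'hold' otherwise."""
    short_ma = moving_average(prices, short)
    long_ma = moving_average(prices, long)
    signals = []
    for i in range(1, len(prices)):
        if short_ma[i - 1] is None or long_ma[i - 1] is None:
            signals.append("hold")
        elif short_ma[i - 1] <= long_ma[i - 1] and short_ma[i] > long_ma[i]:
            signals.append("buy")
        elif short_ma[i - 1] >= long_ma[i - 1] and short_ma[i] < long_ma[i]:
            signals.append("sell")
        else:
            signals.append("hold")
    return signals

# Usage with a hypothetical price list:
# prices = [10.0, 10.2, 10.1, ...]
# print(crossover_signals(prices))
```

A "buy" appears when the short average crosses above the long one, which trend-following traders read as the start of an upward move; this is exactly the kind of purely historical-price rule that the efficient market hypothesis says should carry no exploitable information.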
3 Behavioral Finance Theory and Big Data The intrinsic value of a security is an important factor in determining its market price, but it is not the only factor. The psychology and behavior of investors also have a significant impact on how prices are set and how they change. This is a central point of behavioral finance theory. Unlike the traditional assumption that people are rational, behavioral finance holds that people tend to trust their own subjective judgments when making investment decisions, which are irrational or at least not completely rational. Behavioral finance poses three challenges to efficient market theory: investors are not all rational, their deviations from rationality are not independent, and arbitrage is limited. It also challenges the empirical tests of weak-form efficient markets. For example, Rajnish Mehra and Prescott [7] first documented the equity premium in 1985 and called the phenomenon the equity premium puzzle. After studying roughly 100 years of historical data for the USA, they found that the return on stocks was 7.9%, while the return on the corresponding risk-free securities was only 1%, a premium of 6.9%; the stock return far exceeds the return on Treasury bills. They also studied data from other developed countries from 1947 to 1998 and found premiums of varying degrees between the two kinds of securities. Another well-known anomaly is the small-company effect proposed by Banz, who found that stock returns tend to decline as a company's market value grows. Of course, there are many anomalies that the efficient market hypothesis cannot explain, such as the momentum effect in the securities market. In the capital market, the momentum effect is also called the inertia effect [8]: stock returns tend to continue their previous trend. For example, stocks with high past returns will continue to earn higher returns in the future, while stocks with poor past returns will continue to lag. If institutional investors use technical analysis to chart historical prices and believe that momentum works, they will buy stocks that have performed well in the past and sell stocks that have performed poorly; this approach is called a momentum strategy. Since most retail investors lack the corresponding theoretical knowledge and cannot make rational judgments, they follow the strategies of institutional investors. In this case, institutional investors can obtain excess returns by using technical analysis to seize the opportunity and exploiting the herd behavior of stockholders. However, Zhang Qiang and other scholars have concluded that the momentum effect is not significant during bull markets and is more obvious during bear markets. One possible explanation here is the
prospect theory, that is, this difference comes from investors’ different attitudes towards losses and gains [9]. Investors have obvious aversion to losses, that is, people are more sensitive to the decline in welfare level compared to the rise in welfare level. That is to say, the negative effect of a certain amount of loss is greater than the positive effect of the same amount of income.
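To make this asymmetry concrete, the sketch below evaluates a prospect-theory-style value function in the standard Kahneman–Tversky form; the curvature exponents and the loss-aversion coefficient are illustrative assumptions rather than estimates from this study, and Fig. 2 below depicts the same qualitative shape.

```python
# Sketch of a prospect-theory value function v(x) with reference point 0:
# concave for gains, convex for losses, steeper for losses (loss aversion).
# Parameter values (alpha, beta, lam) are illustrative assumptions.

def value(x, alpha=0.88, beta=0.88, lam=2.25):
    if x >= 0:
        return x ** alpha            # diminishing sensitivity to gains
    return -lam * ((-x) ** beta)     # losses loom larger than gains

for outcome in (100, -100):
    print(outcome, round(value(outcome), 1))
# value(100) is about 57.5 while value(-100) is about -129.5: a loss of the
# same size reduces perceived value by more than an equal gain increases it.
```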
Fig. 2. Diagram of prospect theory
It can be seen from Fig. 2 that, with the current situation as the origin, the area to the right of the ordinate axis represents gains and the area to the left represents losses. The inflection point of the value function lies at the origin. Like a traditional utility function, the value function is concave above the reference point, while the part below the reference point is convex: people are risk averse over gains but risk seeking over losses. The slope of the value function below the reference point is much steeper than the slope above it, which indicates that the utility added by a gain is smaller than the utility destroyed by a loss of the same size. The momentum effect is more obvious during bear markets, perhaps because people facing losses are more inclined to gamble in the hope of recovering them. Corresponding to the momentum effect is the reversal effect, in which the future trend of a stock reverses relative to its past: stocks that performed well in the past may decline in the future, while stocks with poor past returns see their yields improve. If institutional investors use technical analysis to chart the historical trend of stocks and implement a reversal strategy, that is, sell stocks with good past yields and buy stocks with poor ones, and thereby trigger stockholders' herd behavior, they can still obtain excess returns. Thus, whether implementing a momentum strategy or a reversal strategy, it is first necessary to use technical analysis to understand the historical trend of stock returns and then trigger the herd behavior of stockholders to obtain excess returns. This raises an interesting question: did institutional investors originally believe in these two strategies (that is, that stocks really do continue their inertia or suddenly reverse), or are they merely using the strategies to induce herd behavior among investors? For example, in the “January effect” of the US stock market, fund managers collectively sell stocks in December, stockholders follow one after another, and stock prices fall. When prices are low, the managers buy back the stocks; as stockholders join in again, prices generally rise in January. Here the managers use a reversal strategy twice to obtain excess returns, which is food for thought. What is certain is that with the widespread use of technical analysis, the market will become more and
more efficient. Consider the weakening of the January effect: if only a few people know that stock prices will rise in January, they will buy the stocks and wait for the rise. However, as more and more people learn of the effect, some become unwilling to sell in December, preferring to wait and sell into the January rise; as a result, prices no longer fall as much in December or rise as much in January, and the January effect is weakened. A question worth pondering starts from the assumption that institutional investors are rational while ordinary stockholders are initially irrational. Because institutional investors have more information, they can understand the historical return trend of stocks through technical analysis and make decisions based on market conditions (though they may, of course, misjudge). Because stockholders lack the corresponding knowledge, their best strategy is to follow institutional investors, which triggers herding behavior. However, as time passes and stockholders learn, their rationality increases (perhaps they too learn technical analysis). When institutional investors use technical analysis again, stockholders now have their own judgments, which are almost the same as those of the institutions, so herd behavior is no longer triggered, stock prices come to reflect intrinsic value, and the market becomes more and more efficient. It follows that under a weak-form efficient market, technical analysis will indeed lose its effect, but it should be noted that human rationality is not innate; it is acquired through slow learning, which takes time to accumulate. In addition, when faced with new stocks, even institutional investors may lack a comprehensive understanding of them. In other words, when facing new stocks, most investors may again lack rational judgment, and technical analysis remains useful at that point. Scholars' judgments that China's capital market has reached an efficient state are mostly based on samples of stocks: for those stocks, investors have reached a relatively rational level, but they still lack a corresponding understanding of the many smaller stocks. This is also an important reason why technical analysis can enable institutional investors to obtain excess returns. Although they are not completely rational when facing new stocks, they are still much more rational than ordinary stockholders, and they have richer theoretical and technical knowledge, which allows them to seize the opportunity. While institutions are rational and retail investors are only partially rational, this gap becomes a source of institutional income; once stockholders are also fully rational, the opportunity for excess returns disappears. The unique advantages of big data make it possible to dig deeper into previously unusable information, which may be another reason why technical analysis methods remain popular. Like artificial intelligence and cloud computing, big data is a hot spot in the IT industry, and the enthusiasm for it is inseparable from the commercial value found in data security, data analysis, and data warehousing.
With the advent of the era of big data, big data thinking has increasingly changed the research paradigm of modern finance and expanded new research fields. Big data frees finance from the shackles of data samples. Large sample or even full sample research based on massive data has increasingly become the consensus of Western financial scholars,
and the diversification of data sources and types has also promoted the diversification of modern finance research. Compared with traditional data, big data has four distinguishing characteristics: massive scale, fast circulation, diverse types, and low value density. The most striking difference is the massive scale: big data refers to “a data collection that greatly exceeds the capabilities of traditional database software tools in terms of acquisition, storage, management, and analysis” [10]. These advantages allow investors to obtain a far wider range of data with which to predict the stock market.
4 Conclusion According to the literature, China's market has reached a weak-form efficient state but not yet a semi-strong-form efficient state, so investors should not be able to use the historical information of stocks to obtain excess profits. However, in view of the late start of China's capital market, its relatively underdeveloped regulations, and the rise of big data technology, there are still some stocks for which technical analysis methods remain suitable. In fact, a large number of institutions and investors take a strong interest in technical analysis. They use traditional indicators, or newly developed proprietary indicators, as the main basis for investment decisions, and they have applied big data analysis techniques in the capital market, experiencing both success and failure. It is believed that as investors gradually become more rational, as their theoretical knowledge grows, and as big data analysis develops, technical analysis will gradually withdraw from the stage of history while promoting market efficiency.
References
1. Songxing, S.: Empirical test of Shanghai stock market effectiveness. Economist (4) (1995)
2. Cheng, B., Liang, H., Xiao, Y.: An empirical study of momentum and reversal investment strategies in China's stock market. Res. Financ. Issues (7) (2004)
3. Xiaolei, L.: An empirical analysis of the applicability of momentum trading strategy in domestic stock market. China Secur. Futures 000(007), 47–48 (2011)
4. Wenshuai, X., Shen Ruixin, L., Wenying, J.N.: Discussion on stock investment strategies in the era of big data. Mod. Econ. Inf. 11, 295 (2016)
5. Chunfeng, W., Jiayi, L., Zhenming, F.: Research on the relationship between media attention and stock returns under big data. J. Tianjin Univ. (Soc. Sci. Edit.) 18(02), 103–108 (2016)
6. Li, W.: Discussion on the wave theory and its application in the Chinese stock market. Hunan University
7. Mehra, R., Prescott, E.C.: The equity premium: a puzzle. J. Monet. Econ. 15(2), 145–161 (1985)
8. Wang, C.: Momentum effect or reversal effect? Dongbei University of Finance and Economics (2018)
9. Shuogang, Z.: An empirical study on herding behavior in China's stock market. Econ. Forum 13, 32–35 (2009)
10. Shu, L., Yuying, L.: Management accounting and enterprise core competitiveness in the big data era. Financ. Account. 000(013), 66–67 (2016)
Innovation of E-commerce Business Model Based on Big Data
Wenjie Chen(B)
Anhui Institute of International Business, Hefei 231131, Anhui, China
Abstract. With the rise and development of big data (BD) technology, a variety of new business models have emerged in the e-commerce environment. Only when companies have a clear business model can they effectively formulate development strategies. At the same time, business model innovation has become a research hotspot in both theory and practice in the era of the network economy. The purpose of this article is to study the innovation of e-commerce business (EB) models based on BD, and it verifies the feasibility of realizing business model innovation through data analysis. The article selects cross-border e-commerce for in-depth study, analyzes the components of the cross-border EB model supported by BD technology, and then further analyzes the forces that drive continuous innovation of the business model. With rapid changes in the external environment of enterprises, a customer orientation that meets individual needs through customized services has become the core business focus. Building on predecessors' ideas and theories on BD and enterprise business model innovation, the article conducts a systematic discussion of the driving forces of business model innovation and finally proposes a new path for it. The experimental results show that user experience is the largest consumer demand, with a weight of 0.93, followed by affordable price, with a weight of 0.88. In short, business model innovation still needs to cater to consumer needs in order to be realized. Keywords: Big Data · E-commerce · Business model · Innovation
1 Introduction With the rise and development of BD technology, various new business models have emerged one after another in the e-commerce environment. Only when companies have a clear business model can they effectively formulate development strategies [1, 2]. At the same time, business model innovation has become a research hotspot in both theory and practice in the era of the network economy [3, 4]. The emergence of information technology represented by cloud computing, the Internet of Things, and the Internet has had a profound impact on traditional business models [5, 6]. Meanwhile, more and more scholars have begun to pay attention to how to better apply BD to the business model innovation process [7, 8].
Many scholars at home and abroad have studied the innovation of EB models based on BD and achieved good results. Foss and Hacklin regard the business model as a system composed of products, information flows, and services that describes the sources of profit and the benefits to stakeholders [9, 10]. Taeuscher and Barth regard a business model as a structure that relates revenue streams to costs and creates revenue that enables an enterprise to survive indefinitely [11, 12]. This paper verifies the feasibility of realizing business model innovation through data analysis. It selects cross-border e-commerce for in-depth study, analyzes the components of the cross-border EB model supported by BD technology, and then further analyzes the forces that drive continuous innovation of the business model. With rapid changes in the external environment of enterprises, a customer orientation that meets individual needs through customized services has become the core business focus. Building on predecessors' ideas and theories on BD and enterprise business model innovation, the paper conducts a systematic discussion of the driving forces of business model innovation.
2 Research on the Innovation of EB Model Based on BD 2.1 Elements of an EB Model (1) Product flow In the context of the rapid growth of China's e-commerce industry, market demand and technological change have pushed the industry toward maturity. The main products of China's cross-border e-commerce platforms are superior products from Greater China; representative categories include 3C digital goods, fashion and gardening, and electronic auto parts, which occupy a relatively dominant position in international trade. In terms of product flow, China's cross-border e-commerce has won the close attention of global consumers with its rich product range and has captured markets in countries with strong online-shopping consumption. With the improvement of the overseas service system of China's cross-border e-commerce, many traded commodities can now be serviced overseas, thereby enhancing the market competitiveness of China's cross-border EB. (2) Promotion and marketing For cross-border e-commerce platforms, platform awareness and market reputation constitute the platform's brand competitiveness. Brand competitiveness can be directly converted into platform visits, thereby promoting commodity transactions on the platform, and the platform's promotion and marketing capabilities play an important role in this. On the one hand, promotion in foreign markets can provide consumers with comprehensive and timely information, deepening their understanding of the platform; on the other hand, the promotion and marketing capabilities of cross-border e-commerce platforms can promote the rapid circulation of platform information
flow, so as to maintain initiative in the changing market competition. For cross-border e-commerce platforms, the marketing promotion system mainly includes search engines, social streaming-media advertisements, and vertical community display advertisements. At the same time, precision marketing tools have become an important part of the promotion and marketing of cross-border e-commerce platforms. (3) Logistics operation With the development of Internet technology, the online service system of cross-border e-commerce platforms has gradually matured and penetrated all aspects of the consumer's shopping process, greatly improving the shopping experience. Logistics distribution is the key link between the virtual shopping process and actual goods and the most important part of the cross-border e-commerce service system; it plays a vital role in the sale of goods on the platform. 2.2 Driving Force of EB Model Innovation (1) Technical driving force In the process of business model innovation, any company needs to continuously improve its products and services by technological means. The technology driving force is the core force and key capability of business model innovation. In this process, the other elements of the enterprise's business model need to cooperate with one another to further promote innovation. Therefore, technology-driven business model innovation is the core competence of new technology-based enterprises. (2) Demand driving force The transformation of enterprise products and services triggered by market demand, customer demand, industry demand, and so on can be called demand-driven business model innovation. (3) Competitive driving force The homogeneous competition that any company faces from competitors in its own field prompts it to constantly change its business model to cope with fierce market competition. 2.3 Business Model Innovation (1) Cross-border integrated management system In the whole process, the enterprise, the supervisory department, and the customer all operate on the same data platform, which enables the enterprise to prepare goods quickly for customs clearance, shortens delivery times, and greatly reduces logistics and warehousing costs while avoiding cost increases such as product expiration. From a supervisory point of view, data sharing between enterprises and supervisory departments through a unified interface can effectively prevent tax evasion and avoidance, guarantee the quality and source of goods, reduce the difficulty of supervision, and improve the efficiency of customs clearance. From the customer's perspective, hot-selling products of guaranteed quality can be obtained in time, and the entire process is transparent and visible, effectively improving customer satisfaction and user experience.
(2) Intelligent supply chain logistics system Cross-border e-commerce companies need to discover potential profit opportunities in the market through in-depth mining of internal and external data and risk prediction, and use data and technology as the driving force of the company's core strategy. Such companies are more likely to use BD to innovate new business models. (3) Brand effect of cross-border e-commerce In the era of BD, cross-border e-commerce companies need to use BD applications to dig deeper into the potential needs of customers. They can obtain data sources through a data platform, confirm user IDs and user profile information, and conduct customer analysis on this basis, presenting customer portraits at different levels and in multiple dimensions. Companies are most concerned with customers' gender, age distribution, occupation, categories of information attended to, geographic distribution of consumption, hot-selling products and product sub-categories, and similar information, while also paying attention to the verification of data sources. Through such multi-angle, multi-dimensional BD analysis of customer profiles, the basic data of business operations can be obtained. Through further research and analysis of these basic data, the company can learn more about potential customers, competitors, and new market opportunities. By adopting precision marketing methods, it can obtain more efficient and accurate marketing solutions based on its own advantages, greatly increasing the conversion rate of marketing plans, and at the same time strengthen its active service mechanism through BD analysis, which improves customers' willingness to keep purchasing and their satisfaction with the company's products and services. 2.4 User Demand Weight Given the subjectivity of people's evaluations in the importance matrix and the complexity of the system, large deviations may arise in the results, so after the eigenvector is calculated it needs to be checked for consistency. First, the maximum eigenvalue λmax of the importance matrix C is calculated:

\lambda_{\max} = \frac{1}{m} \sum_{i=1}^{m} \frac{(CW)_i}{w_i}   (1)

Then the consistency index CI is calculated:

CI = \frac{\lambda_{\max} - m}{m - 1}   (2)

Finally, the random consistency ratio is calculated:

CR = \frac{CI}{RI}   (3)

where RI is the average random consistency index.
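The following sketch applies Eqs. (1)–(3) to a small pairwise importance matrix. The 3 × 3 matrix and the value RI = 0.58 (Saaty's average random consistency index for a matrix of order 3) are illustrative assumptions rather than data from this study; the eigenvector is approximated by normalized row geometric means.

```python
# Sketch of the consistency check in Eqs. (1)-(3): lambda_max, CI and CR
# for a pairwise importance matrix C. The matrix and RI are assumptions.

def priority_weights(C):
    """Approximate the principal eigenvector by normalized row geometric means."""
    m = len(C)
    geo = []
    for row in C:
        g = 1.0
        for a in row:
            g *= a
        geo.append(g ** (1.0 / m))
    total = sum(geo)
    return [g / total for g in geo]

def consistency(C, RI=0.58):
    m = len(C)
    w = priority_weights(C)
    Cw = [sum(C[i][j] * w[j] for j in range(m)) for i in range(m)]
    lambda_max = sum(Cw[i] / w[i] for i in range(m)) / m   # Eq. (1)
    CI = (lambda_max - m) / (m - 1)                        # Eq. (2)
    CR = CI / RI                                           # Eq. (3)
    return lambda_max, CI, CR

C = [[1.0, 3.0, 5.0],
     [1/3, 1.0, 2.0],
     [1/5, 1/2, 1.0]]
lam, CI, CR = consistency(C)
print(lam, CI, CR)   # CR < 0.1 is usually taken to indicate acceptable consistency
```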
3 Experimental Research on EB Model Innovation Based on BD 3.1 Experimental Subjects and Methods In this experiment, consumers and an e-commerce company serve as the experimental subjects. Through field investigation, research is conducted on consumer demand and on the company's business model before and after its innovation, so as to provide a basis for the innovation of EB models. 3.2 Data Collection The research group arranges field surveys and interviews with the experimental subjects, performs statistical analysis of the interview results, and obtains the final experimental data after collation.
4 Experimental Research and Analysis of EB Model Innovation Based on BD 4.1 Consumer Demand Analysis This experiment studied the importance of consumers' needs, analyzing consumption experience, logistics speed, after-sales service, affordable prices, rich categories, and convenience, so that the business model can be improved and innovated according to these needs. The experimental results are shown in Table 1:

Table 1. Consumer demand analysis

Demand               Importance
User experience      0.93
Logistics speed      0.87
After-sales service  0.82
Affordable price     0.88
Rich categories      0.79
Convenience          0.81
As shown in Fig. 1, user experience is the most important consumer demand, with a weight of 0.93, followed by affordable price at 0.88. In short, business models still need to be improved in ways that cater to consumer needs in order to achieve business model innovation.
Fig. 1. Consumer demand analysis
4.2 Comparative Analysis Before and After Business Model Innovation This experiment uses an e-commerce platform as an example to compare various indicators before and after its business model innovation, analyze the parts that were optimized, and provide a reference for business model innovation. The experimental results are shown in Table 2:

Table 2. Comparative analysis before and after business model innovation

Indicator            Before innovation   After innovation
Profit               74.29%              86.15%
Promotion marketing  82.37%              79.42%
Product              72.83%              89.76%
Logistics operation  85.64%              86.21%
As shown in Fig. 2, profit after the business model innovation is 11.9 percentage points higher than before it. Prior to the innovation, more emphasis was placed on promotion and marketing and less on products; after the innovation, the emphasis shifted to products and their profitability, with less weight on marketing. Logistics operation stayed at roughly the same level, fluctuating by about 1%. In summary, business model innovation still needs to focus on products: once the product is good, the user experience improves, and greater profitability follows.
Fig. 2. Comparative analysis before and after business model innovation
5 Conclusion In the BD era, the internal and external environments of business model innovation have undergone great changes. Because large volumes of data can now be analyzed, business models gain new ways of being explored and developed, which provides new methods and ideas for formulating future business model innovation routes and for innovative management decision-making. With the support of BD and its technologies, this paper verifies the feasibility of realizing business model innovation through data analysis, helping modern enterprises find a theoretical basis and directions for thinking about BD application and business model innovation. Based on research into the theory of business model innovation, the concept of a system of innovation driving forces is put forward, and business model innovation is proposed from various aspects, laying a theoretical foundation for subsequent research on business model innovation. Acknowledgements. [Fund project] This paper is one of the phased achievements of the 2020 Anhui Provincial Department of Education teaching demonstration course "Introduction to E-commerce" and the 2020 Anhui International Business Vocational College "E-commerce Comprehensive Demonstration Experimental Training Center" (project number: 2020sxzx01).
References
1. Antikainen, M., Valkokari, K.: A framework for sustainable circular business model innovation. Telev. New Media 6(7), 5–12 (2016)
2. Linder, M., Williander, M.: Circular business model innovation: inherent uncertainties. Bus. Strateg. Environ. 26(2), 182–196 (2017)
3. Franca, C.L., Broman, G., Robert, K.H., et al.: An approach to business model innovation and design for strategic sustainable development. J. Clean. Prod. 140(pt.1), 155–166 (2017)
4. Christensen, C.M., Bartman, T., Van Bever, D.: The hard truth about business model innovation. MIT Sloan Manag. Rev. 58(1), 31–40 (2016)
5. Berends, H., Smits, A., Reymen, I., et al.: Learning while (re-)configuring: business model innovation processes in established firms. Strateg. Organ. 14(3), 181–219 (2016)
6. Marolt, M., Lenart, G., Maletic, D., et al.: Business model innovation: insights from a multiple case study of Slovenian SMEs. Organizacija 49(3), 161–171 (2016)
7. Vanhaverbeke, W.: Managing Open Innovation in SMEs: Business Model Innovation in SMEs, pp. 52–79 (2017). https://doi.org/10.1017/9781139680981
8. Mcfarlane, D., Kbnick, P., Velu, C.: Preparing for industry 4.0: digital business model innovation in the food and beverage industry. Int. J. Mechatron. Manuf. Syst. 13(1), 59 (2020)
9. Foss, N.J., Saebi, T.: Business models and business model innovation: between wicked and paradigmatic problems. Long Range Plan. 51(1), 9–21 (2017)
10. Hacklin, F., Bjorkdahl, J., Wallin, M.W.: Strategies for business model innovation: how firms reel in migrating value. Long Range Plan. 51(1), 82–110 (2017)
11. Taeuscher, K., Abdelkafi, N.: Visual tools for business model innovation: recommendations from a cognitive perspective. Creat. Innov. Manag. 26(2), 160–174 (2017)
12. Barth, H., Ulvenblad, P.O., Ulvenblad, P.: Towards a conceptual framework of sustainable business model innovation in the Agri-food sector: a systematic literature review. Sustainability 9(9), 1620 (2017)
Metacognitive Training Mode for English In-Depth Learning from the Perspective of Big Data
Jingtai Li(B)
School of Foreign Languages, Jiaying University, Meizhou 514015, Guangdong, China
Abstract. The era of big data not only changes traditional teaching methods but also provides opportunities for students' meaningful learning, leading the field of education to pay more and more attention to the personalized learning support that big data offers learners. In-depth learning is meaningful learning as opposed to shallow learning. This article attempts to apply a personalized adaptive learning mode to the in-depth learning of English. Starting from the characteristics of in-depth learning and the convenient conditions that big data provides for teaching, it analyzes the impact of big data applications on in-depth learning from the perspectives of students, teachers, and instructional design, and proposes corresponding strategies for constructing a metacognitive training mode for English in-depth learning. Keywords: Metacognitive training · English · In-depth learning · Big Data
1 Introduction With the development of information technology, the data collection technology in the field of education is becoming more and more diversified, resulting in the increase in the number and types of education data, accompanied by the arrival of the era of education big data. [1] The so-called big data can be described from four dimensions: large capacity, fast speed, diversity and authenticity. Big data, with its four inherent characteristics, has a revolutionary impact on the theory and process of teaching and learning, especially the theory of in-depth learning, which has sprung up in recent years. This article focuses on the practical methods for teachers to use metacognitive strategies for instructional design, monitoring, and adjustment, sharing big data resources, mobilizing the two-way participation of teachers and students, and fundamentally cultivating students’ metacognitive strategies to improve the effect of in-depth learning of English. The basic characteristics of metacognition include knowing a large number of learning strategies, clearly understanding when, where and why these strategies are important, being able to choose strategies wisely, and monitoring strategies with reflective and planned ways.
2 Concept of In-Depth Learning In-depth learning promotes active, critical, and meaningful learning. Compared with shallow learning, in-depth learning requires learners to actively construct a personal knowledge system through deep processing of knowledge and information and to understand complex concepts under real conditions, eventually promoting the development of learners' high-level thinking ability. [2] Specifically, in-depth learning means that, on the basis of learning for understanding, learners can critically learn new ideas and facts, integrate them into their original cognitive structure, connect many ideas, transfer existing knowledge to new situations, make decisions, and solve problems. The characteristics of in-depth learning are as follows: (1) Metacognitive participation. Learners need metacognition to participate in in-depth learning; metacognition refers to thinking about thinking, cognition about cognition. In the process of learning, learners can consciously and effectively adapt to or assimilate information, concepts, knowledge, and propositions, and can clearly understand the changes in their cognitive structure. Metacognitive participation is of great significance for students' understanding and mastery of knowledge and for promoting their in-depth learning. (2) The development of high-level thinking. The purpose of in-depth learning is to develop high-level thinking and realize meaningful learning. Its core idea embodies criticism, understanding, integration, transfer, reflection, and creation. (3) Valuing the inner connection of knowledge. [3] The criterion for the occurrence of in-depth learning is whether the internal connection of knowledge has been established, which requires the integration of new and old knowledge, that is, conceptual interaction. In other words, in-depth learning is meaningful learning that triggers conceptual change, integrated understanding, and creative cognitive reorganization through the interaction between the learner and the environment. The personalized adaptive learning mode takes application as its main line. It can adjust the learning plan for the next stage in time according to feedback on students' learning ability at different levels and stages, so as to teach students in accordance with their aptitude.
3 Metacognitive Training Mode Big data can change the role of teachers, effectively promote the development of learners’ high-level thinking, and promote students’ in-depth learning. In the classroom under the integration and analysis of big data, the teacher is only the guide in the teaching, formulating the learning goals of the course, providing learning resources for learners, and encouraging learners to explore the learning content independently. [4] In this process, learners have more opportunities to interact with other partners and teachers. The collision of ideas extends the development of thinking. Learners begin to deeply analyze the problems encountered in learning, and critically look at the nature of learning, apply knowledge to actual life, and transfer knowledge. The transition of the teacher’s role from the leader to the guide promotes the development of high-level thinking such as application, analysis, synthesis, and evaluation, and promotes the learners to carry out in-depth learning.
Big data also promotes changes in the role of students and brings metacognition into their learning. In the digital age, the comprehensive collection and analysis of education data further challenges the traditional teaching environment and teaching mode. [5] Learners are no longer satisfied with the one-sided knowledge explained by teachers in the classroom. Through the mining and analysis of their own educational data, they can fully understand their own learning behavior, find the bad habits hidden behind unconscious routines, and formulate learning objectives within their zone of proximal development. [6] Through big data prediction, students can find the knowledge points for which the learning goal has not yet been reached and adopt adaptive resources, so that learning is no longer limited to the traditional classroom. In teaching, teachers should use metacognitive strategies to analyze and reflect on their own work. The planning component of metacognitive strategy is used to determine the learning objectives, learning strategies, learning materials, and evaluation of learning effects for inquiry learning; this is the teacher's teaching management. Teachers should analyze the materials they use and design the key and difficult points of classroom teaching in terms of goals and strategies. With the help of big data, once teaching evaluation is completed, teachers should make appropriate adjustments to their teaching. The adjustment process feeds back into the earlier instructional design. [7] Teachers need to clarify which aspects of their students' learning need to be strengthened and which can be weakened. In this way, the entire teaching process becomes an organic whole, with all links closely integrated and gradually pushed upward, which promotes both students' learning and teachers' teaching.
4 Problems of English In-Depth Learning in China First, the prevailing English teaching mode ignores the expressive function of the English language. The traditional mode attaches great importance to the form of expression while neglecting its function. A fixed teaching program can indeed help students master basic knowledge of English, especially grammar, but it easily produces "dumb English". When such students study abroad, oral communication is a big problem; they are afraid to participate in seminars and other forms of oral expression. Therefore, the English teaching mode must shift its emphasis from the form of expression to the function of expression. Second, traditional English teaching ignores the central role of students. For a long time, English teaching in China has been teacher-centered, ignoring the role
of students as the main body. In fact, the English course is different from other courses. The English course is a very practical course. No matter how good the teacher is in the class, if the student cannot apply it in real life, this course will undoubtedly fail. Therefore, in order for students to truly learn English well, the leadership of the English class must be given to the students. [9] Both teachers and textbooks should adapt to the students’ learning. With students as the main body in class, teachers guide students in learning and answer their problems in learning. Only in this way can students overcome their passivity and mechanization and truly achieve the purpose of teaching. Thirdly, the traditional English teaching does not pay attention to the comparison of the differences between Chinese and Western cultures. In traditional English teaching, most teachers do not pay attention to the comparative teaching of the differences between Chinese and Western cultures, which makes many students learn with the Chinese thinking mode and habits, and cannot blend into the Western cultural atmosphere. In order to help students improve their English scores and learn pure and fluent English, it is necessary to pay attention to the problems in traditional English teaching and adopt a student-centered and practice-oriented teaching method.
5 The Strategies for Adopting Metacognitive Training Mode for English In-Depth Learning in My Country’s Colleges and Universities With the development of big data and learning analysis technology, personalized and adaptive learning mode can stimulate students’ intrinsic learning motivation, thus stimulating their learning interest and thirst for knowledge, and mobilizing their learning enthusiasm and initiative, which has become an effective way to cultivate students’ metacognitive ability. Teachers should have a comprehensive understanding and analysis of the connection of the whole semester, academic year, and even each grade, so as to achieve consistency. In addition, the application of big data materials is also particularly important in the teaching process. [10] There are many electronic resources, including professional websites, but students feel that there are very few things that are really useful, and because students lack the ability to screen and analyze the skills and learning content, it is difficult to do it entirely by conscious learning. In the teaching process, teachers need to play the role of a teacher on the one hand, and at the same time, they need to effectively monitor the whole teaching process on the other hand. In class, students’ feedback, questions, materials, teaching content and objectives should be well understood and appreciated, which should be paid attention to as an important part of teaching monitoring. If there is no teacher’s supervision and correction, the wrong memory will always exist, which will affect the follow-up learning. At the same time, teachers’ regular inspection can also help students form good learning habits and lay the foundation for self-monitoring consciousness. Not only should there be a teaching plan and monitoring, but also a complete use of metacognitive strategies to guide teaching. It is precisely because metacognition is cognitive cognition, which analyzes the problems in the cognitive process and conducts reflection and improvement. Therefore, it is appropriate to use metacognitive strategies
in teaching. Through the implementation of the plan to the monitoring process, the teacher will definitely find the difference between the plan and the actual situation. 5.1 Using Data Mining Algorithm to Build Learning Style Mode How to determine the learner’s learning style and then match the adaptive learning pattern according to the style is the primary consideration. A big data acquisition mode (i.e., learner database) can be established based on dynamic learning behavior data of learners, which is used to store learning behavior data marked with time stamps in the learning system. [11] Through the processing and analysis of learners’ behaviors, results and other data, the learning style can be discovered, and corresponding resources and modes can be recommended according to the style. 5.2 Construction of Cognitive Level Mode Based on Basic Response Theory The key point of cognitive level mode construction is to solve an adaptive learning diagnosis problem. Generally, cognitive objectives are divided into six levels: knowledge, understanding, application, analysis, synthesis and evaluation. The students’ cognitive ability is comprehensively examined through the process of doing the questions (score, difficulty level, number of answers and time spent on the questions, etc.). It is mainly to test the achievement of learning effectiveness, which can be comprehensively analyzed and tested in the form of small projects. If the learning goal is reached, it is recommended to terminate the learning, otherwise, it is recommended that students take further study. 5.3 Personalized Learning Path Optimization Recommendation The optimization of personalized learning path mainly involves the design and selection of autonomous learning strategies. Big data analysis can put forward corresponding strategies for the main links in the process of independent learning, namely, analyzing learning needs, generating personalized learning paths, self-building and joint construction of learning resources, and evaluating learning results. [12] There are mainly two methods to realize the analysis of learners’ needs. One is to use the big data analysis technology to intelligently analyze the existing data of learners on the system platform to determine their learning needs. The second is to generate learning paths based on the assessment of learners’ cognitive experience, preferences and effects in the cognitive stage. At the same time, the platform resources are selected according to the mechanism of click rate, download rate, utilization rate and learners’ evaluation of the resources. 5.4 Data Mining Technology is Used to Push the Learning Information of Relevant Preferences The information push part of the system mainly adopts the method of machine learning to conduct statistical and probability analysis on the information searched and used by students in the past, identify and predict the key points of information that students have been paying attention to, and analyze their interest/preference in information content.
The acquisition of learners' preference feature values consists of two steps. First, static preference features are collected from learners' registration information or uploading behavior, and the raw data are processed by direct or indirect matching to obtain the learners' static preference values. Second, learners' behavior during learning is mined, such as search keywords, the types of web pages browsed, and page click rates, to extract their dynamic preference features. On this basis, the internal database of the information system can actively and promptly push targeted, effective information to learners to meet their personalized needs.
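As a rough sketch of this two-step preference modelling (static tags from registration data plus dynamic keyword frequencies mined from recent behavior), the example below scores candidate learning resources against a learner's preference profile. All field names, tags, and resources here are hypothetical, not taken from any particular system.

```python
# Sketch of preference-based information push: merge static profile tags with
# keyword frequencies mined from recent searches, then rank candidate resources.
# Tags, resources, and scoring weights are hypothetical examples.

from collections import Counter

def preference_profile(static_tags, search_history):
    """Merge static registration tags with dynamic search-keyword frequencies."""
    profile = Counter({tag: 1.0 for tag in static_tags})
    profile.update(Counter(search_history))
    return profile

def rank_resources(profile, resources, top_n=3):
    """Score each resource by the overlap between its tags and the profile."""
    scored = []
    for name, tags in resources.items():
        score = sum(profile.get(tag, 0.0) for tag in tags)
        scored.append((score, name))
    return [name for score, name in sorted(scored, reverse=True)[:top_n]]

profile = preference_profile(
    static_tags=["CET-4", "listening"],
    search_history=["listening", "listening", "academic writing", "vocabulary"],
)
candidates = {
    "Listening drills (unit 3)": ["listening", "CET-4"],
    "Academic writing workshop": ["academic writing"],
    "Grammar refresher": ["grammar"],
}
print(rank_resources(profile, candidates))
```

In a real system the same idea would be driven by logged behavior rather than hand-written lists, with the profile updated as new searches and clicks arrive.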
6 Conclusion

In the teaching process, teachers should let students take the dominant position, consciously train students' metacognitive awareness, guide students to actively use metacognitive strategies in learning, and enable them to monitor their own learning activities. In-depth learning is learning that actively constructs meaning. In the learning process, students need the ability to think independently, learn actively and question rationally, and they need to reflect on themselves and transfer knowledge effectively through communication with their peers. Obviously, in-depth learning is easier to implement when metacognitive ability is higher. Improving metacognitive ability is inseparable from the guidance of teachers and the efforts of students. Under the guidance of teachers, students need to consciously carry out metacognitive training and constantly explore, practice and summarize, so that they can freely choose appropriate learning strategies, effectively use metacognitive monitoring to regulate cognitive activities, and realize in-depth learning.

The study of the adaptive personalized learning mode is inseparable from the application of big data technology in many respects, such as using big data analysis for prediction and personalized intervention to realize data-supported learning and performance evaluation [13]. With the help of big data, the various characteristics of learners are analyzed to understand each student's learning progress and quality, diagnose students' learning needs accurately, and predict students' next behavior, so as to help teachers adjust teaching strategies in time and provide targeted personalized learning resources together with suggestions on the most suitable learning methods and arrangements. At the same time, big data analysis of students' learning results makes it possible to grasp their mastery of knowledge points in time, identify problems in the teaching design and improve them promptly. Personalized adaptive learning is a long-term trend in educational information technology, and there is still considerable room for research in both content and methods, such as capturing learning-process data in real time through intelligent nodes to analyze students' cognitive level, deepening data mining to better realize personalized prediction, and using big data analysis to evaluate students' ability.
References
1. Abramson, L., Garber, J., Seligman, M.E.P.: Learned helplessness in humans: an attributional analysis. In: Garber, J., Seligman, M.E.P. (eds.) Human Helplessness. Academic Press, New York (1980)
2. Baddeley, A.: Working memory and language processing. In: Dimitrova, B., Hylyenstam, K. (eds.) Language Processing and Simultaneous Interpreting: Interdisciplinary (2000)
3. Barnes, T., Boyer, K., Sharon, I., et al.: Preface for the special issue on AI-supported education in computer science. Int. J. Artif. Intell. Educ. 27(1), 1–4 (2017)
4. Bayne, S.: Teacherbot: interventions in automated teaching. Teach. High. Educ. 20(4), 455–467 (2015)
5. Carrel, P.: Meta-cognition and EFL/ESL reading. Mod. Lang. J. 73(3), 121–133 (1989)
6. Davies, J., Brember, I.: The closing gap in attitudes between boys and girls: a five year longitudinal study. Educ. Psychol. 21, 103–115 (2001)
7. Goksel-Canbek, N., Mutlu, M.E.: On the track of artificial intelligence: learning with intelligent personal assistants. Int. J. Hum. Sci. 13(1), 593–601 (2016)
8. Holotescu, C.: MOOCBuddy: a chatbot for personalized learning with MOOCs. In: Iftene, A., Vanderdonckt, J. (eds.) Proceedings of the International Conference on Human-Computer Interaction - RoCHI 2016. Matrix Rom, Bucharest (2016)
9. Leontiev, A.: Psychology and the Language Learning Process. Pergamon Press, Oxford (1990)
10. Timms, M.J.: Letting artificial intelligence in education out of the box: educational cobots and smart classrooms. Int. J. Artif. Intell. Educ. 2, 701–710 (2016)
11. Vail, A.K., Grafsgaard, J.F., Boyer, K.E., Wiebe, E.N., Lester, J.C.: Predicting learning from student affective response to tutor questions. In: Micarelli, A., Stamper, J., Panourgia, K. (eds.) ITS 2016. LNCS, vol. 9684, pp. 154–164. Springer, Cham (2016). https://doi.org/10.1007/978-3-319-39583-8_15
12. Woolf, B.P., Lane, H.C., Chaudhri, V.K., et al.: AI grand challenges for education. AI Mag. 4, 61–84 (2013)
13. Zimmerman, B.J., Risemberg, E.: Self-regulatory dimensions of academic learning and motivation. In: Phye, G.D. (ed.) Handbook of Academic Learning, pp. 105–126. Academic Press (1997)
Design and Realization of College Student Management System Under Big Data Technology Di Sun(B) School of Education, Wenhua College of Yunnan Arts University, Kunming 650304, Yunnan, China
Abstract. With the popularization of education informatization, colleges and universities generally have information management systems that manage school, teacher, student and performance information and are equipped with dedicated databases or data clusters to store it. How to use these data effectively, extracting and mining valuable information from them so as to support decision-making by schools and teachers and genuinely improve the level and quality of school running, has become an issue worthy of attention. The purpose of this paper is to design and implement a university student management system based on big data technology. The paper first summarizes the basic theory of big data and its core technologies such as data mining, then analyzes the status quo, problems and shortcomings of current college student management systems in our country, and on this basis designs and analyzes a college student management system that incorporates big data technology. The paper systematically expounds the design and use of the system's database, function modules and related technologies, and carries out research through methods such as comparison and field investigation. The experimental research shows that, compared with the traditional college student management system, the college student management system based on big data technology is more practical and more powerful.
Keywords: Big data technology · Management system · Data mining · Design and implementation
1 Introduction

Work on management information systems (MIS) in our country started relatively late compared with developed countries. From 1975, management information systems with only single transaction processing capability gradually appeared, and after 1980 our country gradually began to develop subsystems [1, 2]. Through years of exploration, the development of our country's management information systems has improved significantly. Among the 1586 items exhibited at the National Computer Application
Exhibition in 1986, 306 items were related to MIS, and the Shougang Management Information System won the first prize [3, 4]. Among the 10 first prizes at the 1987 exhibition, the General Port Information System and other such systems accounted for three [5, 6]. At present, our country's management information systems not only include a large number of comprehensive data processing systems, but many systems have also entered the system-level processing stage. The most prominent is the Shougang Management Information System, which is built on a large network composed of 15 VS-series management machines, 1 VAX-11/750, 2 PDP-11/24 process-control upper computers and nearly 200 terminals, covering an area 6 km long and 3 km wide [7, 8]. The purpose of this research is to improve the efficiency of student management and to design a university student management system under big data technology, combining the advantages and remedying the shortcomings of our country's current student management systems, and implementing it with big data technology [9, 10].
2 Research on the College Student Management System Under Big Data Technology

2.1 Data Mining
Data mining is a knowledge discovery technique that seeks out the inherent characteristics and laws of data from large amounts of uncertain and vague data, in order to discover potentially valuable information and provide a data basis for decision makers.

2.1.1 Classification Technology
Classification is a description method that finds specific categories in a given data set. Generally, each item or subset in the total set is assigned a unique attribute, and a classification model is then constructed over that attribute.

2.1.2 Clustering Technology
Clustering divides the data set in the database into several subsets such that the data within each subset are strongly correlated, so that data with the same or similar characteristics form a new set. Clustering makes data of the same kind highly similar to each other, while data of different kinds differ greatly.

2.1.3 Association Analysis
Association rule analysis is generally used to discover associations and connections between data items in a data set, revealing potential hidden associations in the data and regularities between values.
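As a concrete illustration of the clustering technique described above, the following sketch groups students with similar score profiles using k-means. It is a schematic example under assumed data; the scores and the use of scikit-learn's KMeans are illustrative choices, not an implementation prescribed by this paper.

import numpy as np
from sklearn.cluster import KMeans

# Each row is one student's scores in three courses (hypothetical data).
scores = np.array([
    [92, 88, 95],
    [85, 90, 91],
    [60, 55, 58],
    [65, 62, 70],
    [75, 78, 72],
])

# Cluster students into groups with similar performance profiles.
kmeans = KMeans(n_clusters=2, n_init=10, random_state=0).fit(scores)
for student, label in enumerate(kmeans.labels_):
    print(f"student {student} -> cluster {label}")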
2.2 Requirement Analysis of the Student Management System

2.2.1 Analysis of Student Information Management
The student information management module has two sub-modules: basic information management and extended information management. Students can query their basic information and, if they find it incorrect, apply to the counselor for modification. Students fill in their extended information in the system; this information can be modified and added by students before the instructor has reviewed it, and after review students can only query it. College users and student affairs office users can modify and view student information.
(1) Management of basic student information. Basic student information management includes maintaining the student's name, student number, gender, age, class, ethnicity, nationality, ancestral home and other information. Instructor users, college users and student affairs office users can edit and modify basic student information.
(2) Viewing basic student information. Student users can only view their own basic information and cannot modify it.
(3) Student extended information management. Extended information management covers the student's home address, mobile phone number, parents' names, contact information, parents' occupations, emergency contact information, learning experience, previous positions and other information. Counselor users, college users and student affairs office users can review and edit the extended student information.

2.2.2 Analysis of Student File Management
(1) Warehousing management: in September of each year, each college (department) collects the freshmen's files, arranges them in order of student number, and transfers them to the student archive office. The archive office re-bags the files, pastes a barcode on each new student's file, scans each barcode, and builds the database.
(2) Daily management: the student's rewards and punishments, school-year appraisal forms and other materials are added to the student files, and the files of students who change majors, leave school, drop out or fall into other special circumstances are rearranged.

2.3 Realization of the Student Information Management System

2.3.1 User Login Verification
User login is the user authentication interface of the student management system. The login interface verifies the user's identity with the authentication method described above. The user enters the user name and password in the corresponding text boxes; the interface is composed of an Input (Text) control, an Input (Password) control and an Input (Button) control. If the user is authenticated, a session-level variable is created and the page jumps directly to the student management page.

2.3.2 Major and Class Management
The setting of majors and classes is an important piece of information in the student management system, and major codes and major names are used when adding and searching for information. The "Major and Class Management" module is an important system configuration function. Its interface contains two lists, which display the information of the T_Specialty major data table and the T_Class class data table respectively.

2.3.3 Dormitory Management
Dormitory management consists of two parts. The first is the management of basic dormitory information. Initially, the system contains no basic dormitory information, so dormitory allocation cannot be completed. Clicking the "Create Dormitory" button in the interface creates a record in the T_Room data table, with each record corresponding to an actual dormitory. The information required to create a dormitory includes the dormitory number, dormitory type (male, female), dormitory location, number of beds, dormitory administrator, dormitory telephone and so on. All created dormitories are displayed in the list. The second is dormitory allocation management, which establishes the correspondence between students and dormitories and saves the allocation in the T_RoomCheckInLog data table. After selecting the dormitory to be allocated, the administrator selects the students assigned to it from the pop-up student list through the "Dormitory Allocation" function, and finally submits the information and saves it in the database.

2.4 Preprocessing of Academic Performance
Suppose the existing data of n courses for m students are divided into t classes, and the score of the η-th student in the j-th course is denoted x_{ηj}. Then the average grade of the j-th course is
adding information and searching for information. The “Professional Class Management” module is an important system configuration function. There are two lists in the interface of this module, which display the information of the T_Specialty professional data table and the T_Class class data table respectively. 2.3.3 Dormitory Management The main function of dormitory management includes two parts: firstly, the management of basic dormitory information. In the initial situation, the system does not have any basic information about the dormitory. At this time, the distribution management of the dormitory cannot be completed. Click the “Create Dormitory” button in the interface to create a record in the T_Room data table, and each record corresponds to an actual dormitory. The information that needs to be entered to create a dormitory includes dormitory number, dormitory type (male, female) dormitory location, number of dormitory beds, dormitory administrator, dormitory telephone and other information. All the dormitories created will be displayed in the list. The second is the distribution management of the dormitory, which establishes a corresponding relationship between the students and the dormitory, and saves the distribution of the dormitory in the T_RoomCheckInLog data table. After selecting the dormitory to be allocated, the administrator selects the students assigned to this dormitory from the pop-up student list through the “Dormitory Allocation” function, and finally submits the information and saves it in the database. 2.4 Preprocessing of Academic Performance Suppose we divide the existing n-course data of m students into t classes, and the score of the j-th course of the i-th student is T. Then the average grade of the J course is xj =
1 m Xη η=1 m
(1)
The sample range is. Rj = Maxxη − Minxη
(2)
xη = xη − X j /RJ
(3)
The standardized result is
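Formulas (1)-(3) translate directly into code. The following sketch is a straightforward reading of the three formulas; the sample scores are made up for illustration.

import numpy as np

# scores[i, j] = score of the i-th student in the j-th course (toy data).
scores = np.array([
    [78.0, 90.0, 66.0],
    [85.0, 72.0, 80.0],
    [92.0, 88.0, 74.0],
])

course_mean = scores.mean(axis=0)                       # Formula (1)
course_range = scores.max(axis=0) - scores.min(axis=0)  # Formula (2)
standardized = (scores - course_mean) / course_range    # Formula (3)

print(standardized)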
3 Experimental Research on the College Student Management System Under Big Data Technology

3.1 Subjects
(1) To make the experiment more scientific and effective, it compares the college student management system under big data technology with the traditional college student management system. A questionnaire survey method
was used: questionnaires were issued to students at a local university.
(2) To further study the college student management system based on big data technology designed in this experiment, face-to-face interviews were also conducted with teachers to discuss and analyze the system and judge its feasibility.

3.2 Research Methods

3.2.1 Questionnaire Survey Method
Targeted questionnaires were distributed to college students and the survey was conducted in a fully closed manner, so as to encourage the surveyed students to complete the questionnaires correctly.

3.2.2 Field Research Method
The research went into the school, conducted face-to-face interviews with teachers and students about the student management system, and organized and recorded the collected data. These data not only provide theoretical references for the topic selection of this article, but also support its final research results.

3.2.3 Logical Analysis Method
This research makes a logical and rigorous analysis of the core technologies, requirements and database of the student management system, which makes the research results more scientific and effective.
4 Experimental Analysis of the College Student Management System Under Big Data Technology

4.1 Comparative Analysis of Management Systems
To study the college student management system under big data technology in depth, the traditional system and the system based on big data technology are compared and analyzed. The results are shown in Table 1. It can be seen from Table 1 and Fig. 1 that, compared with the traditional college student management system, the system under big data technology performs better; in basic student information management in particular, its score is about 40 percentage points higher than that of the traditional system. This fully illustrates the strong performance of the college student management system under big data technology and the lag and shortcomings of our country's current college student management systems.
Table 1. Comparative analysis of management systems

                      Class management   Information management   Dormitory management   Employment management
Big data technology   75%                85%                      73%                    66%
Traditional           42%                43%                      50%                    38%
Fig. 1. Comparative analysis of management systems (bar chart of the percentages in Table 1 for the big data technology system and the traditional system across the four categories)
4.2 Performance Analysis of the Big Data Technology College Student Management System
To study the college student management system under big data technology further, this experiment collects data through interviews with teachers. All interviewed teachers have more than three years of teaching experience, to ensure the validity of the data. The experiment uses a ten-point scale, where 1 means disagree and 10 means agree. The data obtained are shown in Table 2.

Table 2. Performance analysis of the big data technology college student management system

         Safety   Scalability   Practicality   Others
Male     5        7             8              3
Female   6        7             7              4
Fig. 2. Performance analysis of the big data technology college student management system (bar chart of the satisfaction scores in Table 2, for male and female teachers, across safety, scalability, practicality and others)
It can be seen from Fig. 2 that most teachers agree with the practicability and scalability of the university student management system under big data technology, which reflects the feasibility of the university student management system under big data technology.
5 Conclusion

At present, the functions of student achievement management systems are relatively simple: they can only realize queries and simple sorted statistics of student achievement. Designing and developing a college student management system based on big data technology will improve the management level of the existing system and mine valuable information. It has guiding significance for improving the education and teaching management mode; through continuous improvement of the system's functional design, it can effectively realize the informationization and standardization of student management, reduce the workload of management personnel, reduce errors caused by human operation, and bring more convenience to the information management of college students.
References 1. Lee, K.H., Kim, J.Y., Seo, H.J.: College student adoption of smart learning management system. Res. J. Costume C. 27(5), 512–523 (2019) 2. Omodan, D.: Deconstructing social unrest as a response to redefine strained relationships between students and university authorities. Int. J. High. Educ. 9(6), 178–189 (2020) 3. Wang, S., Tang, Q.: Construction of guiding system for growth and development of college students under the student-oriented concept. Asian Agric. Res. 10(05), 93–95 (2018) 4. Lu, D.: Research on the professional management system of the football sports of college students. Agro Food Ind. Hi Tech 28(1), 1400–1404 (2017)
5. Frank, L.B.: "Free food on campus!": using instructional technology to reduce university food waste and student food insecurity. J. Am. Coll. Health 10, 1–5 (2020)
6. Miguel, C., Domingues, J.P., Machado, P.B., et al.: Management system certification benefits: where do we stand? J. Ind. Eng. Manag. 10(3), 476–494 (2017)
7. Evans, R.W., Clark, P., Jia, N.: The caries management system: are preventive effects sustained postclinical trial? Commun. Dent. Oral Epidemiol. 44(2), 188–197 (2016)
8. Wang, Z., Chen, B., Wang, J., et al.: Decentralized energy management system for networked microgrids in grid-connected and islanded modes. IEEE Trans. Smart Grid 7(2), 1097–1105 (2016)
9. Visconti, P., Ferri, R., Pucciarelli, M., et al.: Development and characterization of a solar-based energy harvesting and power management system for a WSN node applied to optimized goods transport and storage. Int. J. Smart Sens. Intell. Syst. 9(4), 1637–1667 (2016)
10. Kuran, M.S., Viana, A.C., Iannone, L., et al.: A smart parking lot management system for scheduling the recharging of electric vehicles. IEEE Trans. Smart Grid 6(6), 2942–2953 (2017)
Exchange Rate Forecasting with Twitter Sentiment Analysis Technology Yinglan Zhao1 , Renhao Li2(B) , and Yiying Wang1 1 School of Economics, Sichuan University, No 24, First Ring Road, South Section,
Chengdu 610065, China 2 College of Computer Science, Sichuan University, No 24, First Ring Road, South Section,
Chengdu 610065, China [email protected]
Abstract. This paper attempts to introduce sentiment analysis technology into the task of exchange rate prediction, and studies the impact of sentiment factors on short-term fluctuation of exchange rate based on the theoretical support of behavioral finance. Firstly, the relevant network social media data generated during the new round of trade war between China and the United States were obtained, and the sentiment analysis technology was used to quantify the data to form the sentiment score sequence. Then the nonlinear model LSTM based on machine learning is used to model and predict the high-frequency exchange rate sequence. The empirical results show that the accuracy of the exchange rate forecasting model with the sentiment factors of public opinion is improved, which provides a new idea for the exchange rate forecasting method based on the technical analysis method. Keywords: Sentiment analysis · Exchange rate forecast · LSTM model
1 Introduction

1.1 Background of the Research
The exchange rate is the price of one currency in relation to another. A country's commodity prices in the international trade market are affected by the exchange rate, so the exchange rate has a direct impact on the international competitiveness of commodities. Since the 1970s, financial globalization has integrated financial activities around the world, and the monetary and financial markets of different countries have become increasingly closely linked. Changes in foreign exchange rates and foreign exchange markets not only affect the global financial environment, but also profoundly affect the level of economic development of the countries concerned [1–5]. Since the beginning of the new century, China's exchange rate system has undergone many reforms. A typical example is the floating exchange rate system with reference to
a "basket" of currencies since July 2005. This policy marked the beginning of market-oriented reform of China's exchange rate system. In response to the financial crisis in 2008, China adopted an exchange rate policy of pegging to the US dollar, which reduced the exchange risk of foreign exchange payments. Then, in June 2010, the exchange rate reform was restarted in the hope of further enhancing the flexibility of the RMB exchange rate. All this means that the volatility of market exchange rates has increased and the task of forecasting exchange rate trends has become more difficult [6–9]. Facing the foreign exchange market, a complex multi-variable nonlinear system, early research on fundamental factor analysis was mainly based on traditional exchange rate determination theory, and exchange rate predictions were made according to the direction and relative strength of the various factors affecting the exchange rate. However, some empirical work shows that this method is not accurate. Subsequently, with the continuous development of statistical theory, machine learning and other computer technologies, researchers began to use more complex nonlinear methods such as technical analysis and market analysis for the exchange rate forecasting task. With the rapid development of computer hardware and Internet technology, the amount of information in online social media has grown explosively, which has accelerated the development of sentiment analysis technology for measuring emotion in unstructured information. Sentiment analysis technology is now widely used and has achieved good results in finance-related fields such as stock market trend prediction, but few attempts have been made to combine it with the task of exchange rate prediction.

1.2 Significance of the Research
Compared with stock, spot and other transactions, the causes of foreign exchange rate fluctuations are more diverse and complex. In a freer and more market-oriented RMB and foreign exchange trading market, online public opinion, as an important carrier of information, has become an important vane for the operation of the foreign exchange market. People express subjective opinions on the Internet about public events that touch their investment preferences or their own interests, and these opinions are a comprehensive expression of the interaction of various public attitudes. Their wide influence and large volume make them important references for traders' trading judgments and operations. From China's exchange rate policy, we can see that the RMB exchange rate will become more market-oriented and exchange rate volatility will increase. Therefore, one of the important issues facing the central bank today is how to accurately estimate the fluctuation trend of the exchange rate, so as to provide an effective reference for the formulation of future monetary policy and achieve the goals of resisting external economic interference, controlling inflation and stabilizing economic growth. In addition, from the perspective of transnational trade enterprises, an enterprise's international competitiveness and risk control ability are reflected in the accuracy of its estimates of exchange rate trends. From the perspective of individual investors, accurate exchange rate forecasts also provide an effective basis for portfolios to avoid risk.
Therefore, it is of great practical significance not only for national monetary policy
362
Y. Zhao et al.
authorities, but also for international trade enterprises and individual investors to accurately predict the trend of RMB exchange rate by constructing a reasonable forecasting model. On the other hand, the new round of trade friction between China and the United States since March 2018 provides a good opportunity for empirical study in this paper. In the ensuing period of time, the widespread attention of social media users to this topic provided sufficient research objects for sentiment analysis. Therefore, this paper introduces the sentiment analysis technology, which originated from the computer field and is now widely used in stock market index prediction, into the task of exchange rate prediction. Based on the traditional exchange rate determination theory and behavioral finance theory, this paper studies the influence of sentiment factors on the short-term fluctuation of exchange rate.
2 Data Processing and Sentiment Analysis Model

2.1 Data Preprocessing
To study the impact of market sentiment factors on changes in the target exchange rate, this paper chooses Twitter text data as the object of sentiment analysis; a large number of related studies have shown this source to be reliable. In terms of data range, considering that the new round of Sino-US trade friction started when the US released the Section 232 report on steel and aluminum against China on March 1, 2018, this paper selects one year of daily exchange rate data and Twitter text data, from March 1, 2018 to February 28, 2019, as the model data source. Considering the common keywords used in Sino-US trade topics, the required Twitter text data must meet the following conditions:
(1) include "China" or "Chinese";
(2) include "America" or "American";
(3) include "Trading".
A total of 2,792 tweets meeting these criteria were obtained through Python data crawling. Since the exchange rate market is closed on weekends and major holidays, missing RMB exchange rate values were filled with the data of the previous trading day. In this way, the dependent variable of the model was obtained. The series is shown in Fig. 1. It can be seen that over this time span the general trend of the exchange rate between the US dollar and the Chinese RMB is to depreciate first and then appreciate. The early RMB depreciation corresponds to the impact of US tariffs on China's commodity exports, as well as to the direct investment, mergers and acquisitions of Chinese enterprises in the US and the spread of risk aversion among investors in the financial market. The later rebound in the value of the RMB may reflect the effect of the Chinese government's monetary policy and the gradual calming of market sentiment.
Fig. 1. The daily sequence of the closing price of USD/RMB
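A minimal sketch of the two preprocessing steps described above, namely keyword filtering of tweet texts and filling exchange rate gaps with the previous trading day's value, is given below. The data frames, column names and sample values are assumptions made for illustration; the paper states only that the crawling was done in Python.

import pandas as pd

def matches_topic(text: str) -> bool:
    """Keep tweets that mention China, America, and trading-related keywords."""
    t = text.lower()
    return (("china" in t or "chinese" in t)
            and ("america" in t or "american" in t)
            and "trading" in t)

# Hypothetical crawled tweets.
tweets = pd.DataFrame({
    "date": ["2018-03-01", "2018-03-01", "2018-03-02"],
    "text": ["China and America start trading talks",
             "Nice weather today",
             "American tariffs hit Chinese trading firms"],
})
tweets = tweets[tweets["text"].apply(matches_topic)]

# Hypothetical daily USD/CNY closes with a weekend gap.
fx = pd.Series([6.33, 6.35, None, None, 6.34],
               index=pd.date_range("2018-03-01", periods=5, freq="D"))
fx = fx.ffill()  # carry the previous trading day's close over closed days
print(len(tweets), fx.tolist())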
2.2 Sentiment Analysis Model

Model Rules and Advantages. In this paper, VADER (Valence Aware Dictionary and sEntiment Reasoner) by Hutto et al. (2014) is selected as the sentiment analysis tool. It is a dictionary- and rule-based sentiment analysis model designed specifically for the sentiment expressed in social media. Using a combination of qualitative and quantitative methods, the model first constructs and empirically verifies a gold-standard list of lexical features (and the associated measures of sentiment intensity) that are particularly suited to the microblog-type environment. The researchers then combined these lexical features with general rules that embody grammatical and syntactic conventions for expressing and emphasizing sentiment intensity. The model determines sentence sentiment based on the following grammatical rules:
(1) Punctuation: exclamation marks, for example, enhance the sentiment intensity of the sentence.
(2) Case: if the sentence contains both uppercase and lowercase, the sentiment intensity of all-caps words is enhanced.
(3) Degree adverbs: "extremely good" is a much stronger positive sentiment than "good".
(4) Conjunctions: a contrastive conjunction such as "but" reverses the sentiment polarity before and after it, with the general intention of emphasizing the sentiment after "but".
(5) Negation words: for example, "isn't" reverses the polarity of the following sentiment.
The construction of the VADER lexicon is divided into two steps. First, the sentiment polarity and intensity of more than 7,000 commonly used sentiment words (including adjectives, nouns, adverbs, etc.) are judged by manual labeling, with values ranging from −4 to +4 indicating extremely negative to extremely positive sentiment. Second, common emoticons (such as ":)") are included to cope with sentiment discrimination for non-standard sentences in network environments such as Twitter. The lexicon also considers the sentiments
of common abbreviations, such as WTF and LOL, and of commonly used slang, such as nah and giggly. Compared with other sentiment analysis methods, VADER-based sentiment analysis has the following characteristics: it works well on social-media-style text and generalizes easily to multiple fields; it requires no training data, being based on a human-validated gold-standard sentiment dictionary; and it processes information quickly, so it can be used on data streams online.

Sentiment Score Sequence Preprocessing. With the help of the Python interface provided by VADER, the sentiment score of a specific text can be obtained quickly as a floating-point number in the range [−1, 1]; the closer to the right boundary, the more positive the sentiment of the text, and vice versa. To measure the daily sentiment fluctuation of Twitter users on the target topic, each of the previously obtained tweets was converted into a sentiment score through VADER, and the average score of each day was then computed according to the dates of the tweets. Finally, the sentiment score was set to 0 for dates with no Twitter data. After this preprocessing, the sentiment score sequence related to the exchange rate was obtained; the sequence is shown in Fig. 2.
Fig. 2. Sentiment score sequence
It can be seen that the sentiment score sequence fluctuates greatly in this time span, indicating that the sentiments of users of social software are constantly changing with the occurrence of events related to the target topic during this period, which corresponds to the environmental background of frequent trade policy confrontation between the two sides in the Sino-US trade friction.
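For reference, a daily average compound score can be computed with VADER roughly as follows. The sketch assumes the open-source vaderSentiment Python package; the sample tweets and the grouping logic are illustrative only.

from collections import defaultdict
from vaderSentiment.vaderSentiment import SentimentIntensityAnalyzer

analyzer = SentimentIntensityAnalyzer()

# (date, text) pairs standing in for the crawled tweets.
tweets = [
    ("2018-03-01", "GREAT progress in the trade talks!!!"),
    ("2018-03-01", "New tariffs are terrible news for exporters."),
    ("2018-03-02", "Markets calm, but traders remain cautious."),
]

daily_scores = defaultdict(list)
for date, text in tweets:
    # 'compound' is VADER's normalized score in [-1, 1].
    daily_scores[date].append(analyzer.polarity_scores(text)["compound"])

sentiment_series = {d: sum(v) / len(v) for d, v in daily_scores.items()}
print(sentiment_series)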
3 Exchange Rate Forecasting Model Based on LSTM

3.1 An Empirical Study of LSTM Exchange Rate Forecasting
The Long Short-Term Memory network (LSTM) is a special form of recurrent neural network whose structure is good at handling long-term dependence; in essence, it is a supervised nonlinear optimization method. It is now widely used in fields such as natural language processing and speech recognition. The process of constructing the LSTM-based exchange rate forecasting model can be roughly divided into the following steps:
(1) data preprocessing and normalization;
(2) building the lagged time series data set matching the model input and output;
(3) dividing the training set and test set;
(4) training and optimizing the model on the training set;
(5) forecasting on the test set;
(6) inverse normalization;
(7) plotting the prediction results and computing the indicators used to assess the model.
The overall process logic is shown in Fig. 3.
Fig. 3. Building process of exchange rate forecasting model based on LSTM
For the LSTM model construction, this paper uses the Keras library, which wraps the neural network framework TensorFlow, to implement the model in a Python 3.5 environment. Considering the data volume and dimensionality, a single-layer LSTM network is established for exchange rate prediction. The implementation structure of this network is shown in Fig. 4.
Fig. 4. LSTM network implementation structure
Since the exchange rate data are continuous values, the loss function of this regression model is the mean square error (MSE), expressed as Formula (1):

MSE = \frac{1}{n} \sum_{i=1}^{n} (y_i - \hat{y}_i)^2   (1)

where n is the total number of samples, and y_i and \hat{y}_i are the i-th true value and predicted value respectively. This function measures the mean of the squared differences between predicted and true values; it considers only the average size of the error, not its direction. Because of the square, predictions that deviate more from the true value are penalized more heavily than those that deviate less, and the mathematical properties of the function are convenient for gradient descent. To construct training and test sets suitable for this model, the original exchange rate time series y needs to be converted into a form in which the lagged values are the independent variables and the current value is the dependent variable. Using y_t to represent the exchange rate at time t, the relationship between model input and output is shown in Formula (2):

\hat{y}_t = f(y_{t-1}, y_{t-2}, \ldots, y_{t-p})   (2)
where the maximum lag step is denoted p, and the function f captures the interaction of the gate functions, activation functions and parameters in the LSTM, embodying a strongly nonlinear relationship. The optimization goal of the model is to minimize the MSE between y_t and \hat{y}_t, while the optimizer is the algorithm that adjusts the learning rate during gradient descent; its performance determines whether training can approach the global optimum of the model and how fast it converges. Because of its advantages, such as automatic adjustment of the learning rate and applicability to unstable objective functions and high-noise problems, this experiment selects the Adam algorithm as the optimizer.
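A minimal sketch of the single-layer network described above (lag p = 5, 10 LSTM cells, MSE loss, Adam optimizer) is given below. It is written against the current tensorflow.keras API rather than the Keras/TensorFlow versions of the original experiment, and the training data are random placeholders, so it should be read as an illustration of the architecture rather than the paper's actual code.

import numpy as np
from tensorflow.keras.models import Sequential
from tensorflow.keras.layers import LSTM, Dense

p = 5  # number of lagged observations fed to the network

model = Sequential([
    LSTM(10, input_shape=(p, 1)),  # 10 cells, one feature (the exchange rate)
    Dense(1),                      # next-day exchange rate
])
model.compile(optimizer="adam", loss="mse")

# Toy training data: 100 windows of length p and their next values.
X = np.random.rand(100, p, 1)
y = np.random.rand(100, 1)
model.fit(X, y, epochs=2, batch_size=16, verbose=0)
print(model.predict(X[:1]))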
Since the forget gate in the LSTM controls, as needed, how much each lagged item influences the model output, a relatively large lag value p can be taken so as to include as much historical information as possible; here p = 5 is used in the experiments. On the other hand, the number of cells in the LSTM layer was adjusted repeatedly according to the regression results, and 10 was found experimentally to be the best choice on this data set. Normalization and inverse normalization are indispensable steps in a neural network model; their purpose is to convert the target data to a specified range without damaging the distribution characteristics of the data set. The normalization formula is shown in (3):
x'_i = \frac{x_i - \min(X)}{\max(X) - \min(X)}   (3)
where X represents the whole target data set, and x_i and x'_i respectively represent the i-th value in X before and after conversion. In this experiment, in order to match the range of the sentiment score sequence and accelerate training, the exchange rate sequence y is also normalized to the range [−1, 1]. As for the division into training and test sets, the first 65% of the sequence is taken as the training set and the last 35% as the test set; that is, the data from March 1, 2018 to October 23, 2018 are used for model training, and the data from October 24, 2018 to February 28, 2019 are used as the test set for prediction. The regression results on the training set under the above conditions are shown in Fig. 5.
Fig. 5. Regression results of training set of LSTM exchange rate forecasting model
3.2 An Empirical Study of LSTM Exchange Rate Prediction with Sentiment Score
To demonstrate the impact of sentiment factors on exchange rate fluctuations, the sentiment score sequence s is added and the model is retrained. To ensure the reliability of the comparison, a controlled-variable approach is followed, keeping the loss
function, optimization method, lag length, training set division and other conditions of the previous section unchanged. In the LSTM model with sentiment score, the input-output relationship in Formula (2) becomes Formula (4):

\hat{y}_t = g(y_{t-1}, s_{t-1}, y_{t-2}, s_{t-2}, \ldots, y_{t-p}, s_{t-p})   (4)
According to the above formula, the adjusted model takes sentiment score and exchange rate together as the historical sequence features to predict the future exchange rate. The regression results of the training set of the LSTM exchange rate forecasting model with sentiment score are shown in Fig. 6:
Fig. 6. Regression results of training set of LSTM exchange rate forecasting model with sentiment score added
The LSTM model with sentiment score added improved the goodness of fit of the measurement model by 13.1%. This empirical result does demonstrate the effectiveness of sentiment score data features in improving accuracy in the exchange rate prediction task, and also indirectly confirms the view that investors’ sentiment factors have a certain influence on exchange rate fluctuations.
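The only structural change relative to Sect. 3.1 is the input construction: each time step now carries two features, the lagged exchange rate and the lagged sentiment score, as in Formula (4). A small sketch of building such windows is given below; the array shapes follow the Keras convention and the series values are placeholders.

import numpy as np

def make_windows(rate, sentiment, p):
    """Stack lagged (rate, sentiment) pairs into LSTM input windows.

    Returns X with shape (samples, p, 2) and y with shape (samples,).
    """
    X, y = [], []
    for t in range(p, len(rate)):
        window = np.column_stack([rate[t - p:t], sentiment[t - p:t]])
        X.append(window)
        y.append(rate[t])
    return np.array(X), np.array(y)

rate = np.array([6.33, 6.35, 6.34, 6.36, 6.40, 6.42, 6.41, 6.44])
sent = np.array([0.1, -0.2, 0.0, 0.3, -0.1, 0.2, 0.0, -0.3])
X, y = make_windows(rate, sent, p=5)
print(X.shape, y.shape)  # (3, 5, 2) (3,)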
4 Conclusion

To sum up, this paper uses an LSTM model to forecast the trend of the exchange rate. It first proposes the hypothesis that investor sentiment in the exchange rate market influences the exchange rate trend. The empirical results are explained from the perspective of asset allocation: investors' sentiment towards the trade policies of the two countries reflects whether the two countries' trade stance is loose or tight, and the corresponding policies reduce or increase the risk investors expect from holding foreign currency assets. The shift in the tendency to hold foreign currency leads to appreciation or depreciation, which ultimately determines the relative trend of the exchange rate in the market.
On the other hand, the construction of investor sentiment time series based on the target topic does help to improve the accuracy of the RMB exchange rate prediction model. At the same time, it also proves that the sentiment investors mentioned in behavioral finance still exist in the international exchange rate market, and investor sentiment does have an impact on the decision of exchange rate.
References
1. Hutto, C.J., Gilbert, E.: VADER: a parsimonious rule-based model for sentiment analysis of social media text. In: Eighth International AAAI Conference on Weblogs and Social Media (2014)
2. Moraes, R., Valiati, J.F., Neto, W.P.G.: Document-level sentiment classification: an empirical comparison between SVM and ANN. Expert Syst. Appl. 40, 621–633 (2013)
3. Simpson, M.W., Grossmann, A.: Can a relative purchasing power parity-based model outperform a random walk in forecasting short-term exchange rates. Int. J. Financ. Econ. 16(4), 375–392 (2011)
4. Medhat, W., Hassan, A., Korashy, H.: Sentiment analysis algorithms and applications: a survey. Ain Shams Eng. J. 5(4), 1093–1113 (2014)
5. Kang, H., Yoo, S.J., Han, D.: Senti-lexicon and improved Naive Bayes algorithms for sentiment analysis of restaurant reviews. Expert Syst. Appl. 39, 6000–6010 (2012)
6. Ortigosa-Hernández, J., Rodríguez, J.D., Alzate, L., Lucania, M., Inza, I., Lozano, J.A.: Approaching sentiment analysis by using semi-supervised learning of multi-dimensional classifiers. Neurocomputing 92, 98–115 (2012)
7. Kaufmann, J.M.: JMaxAlign: a maximum entropy parallel sentence alignment tool. In: Proceedings of COLING 2012: Demonstration Papers, Mumbai, pp. 277–288 (2012)
8. Li, Y.M., Li, T.Y.: Deriving market intelligence from microblogs. Decis. Support Syst. 55(1), 206–217 (2013)
9. Fiarni, C., Maharani, H., Pratama, R.: Sentiment analysis system for Indonesia online retail shop review using hierarchy Naive Bayes technique. In: Proceedings of the 2016 4th International Conference on Information and Communication Technology (ICoICT), pp. 1–6. IEEE (2016)
The Construction of Corpus Index in the Era of Big Data and Its Application Design in Japanese Teaching Kun Teng(B) College of Foreign Languages, Bohai University, Jinzhou, Liaoning, China
Abstract. The research logic of the big data era has had a new impact on linguistic research. Traditional text analysis methods can no longer meet the dual requirements of large sample sizes and deep data mining. A corpus is an analysis tool based on the electronic computer, which uses the computer's mass storage capacity to carry language knowledge resources. The corpus in the era of big data has new features, and as its content continues to grow it plays an increasingly important role in Japanese teaching. The core work of this paper is to study the construction of corpus indexes, including the B+Tree index, the clustered index and the non-clustered index. When creating indexes, three things should be appropriate: they should be created on the appropriate tables, on the appropriate columns, and in an appropriate number. Finally, the paper points out application strategies for the corpus in the teaching of the Japanese major. Keywords: Era of big data · Corpus index · Construction · Japanese major · Application
1 Introduction

The overall goal of training Japanese-major talents is to cultivate international, well-rounded senior Japanese professionals with strong Japanese skills who are capable of working in foreign affairs, economics and trade, culture, education, Japanese studies and related fields. The teaching of the Japanese major should move away from the teacher-centered teaching model of the past and focus on cultivating students' learning and research abilities. Institutions of higher learning should make full use of modern information technology to improve the single teaching model centered on teachers' lecturing. The new teaching model should be supported by modern information technology, especially network technology, so that the teaching and learning of Japanese can develop in the direction of individualization and independent learning, to a certain extent unrestricted by time and place. The corpus is a specific application of information technology in Japanese teaching. It is an analysis tool based on the electronic computer, which uses the computer's mass storage capacity to carry language knowledge resources and, through professional processing, forms a valuable educational resource. As the content of the corpus
continues to increase, the role it plays in Japanese teaching is also increasing [1]: the corpus becomes a useful supplement to Japanese textbooks, making up for their limited content and poor timeliness; it becomes an important source of reference material for teachers; it becomes the best resource for students to prepare before class and complete tasks after class; and it becomes a powerful tool for cultivating students' autonomous learning and inquiry abilities. In corpus-based Japanese teaching, teachers use retrieval techniques to find language examples, guide students to observe and analyze them, master language usage from data, and summarize natural pragmatic laws. Through the application of the corpus, teachers no longer rely solely on teaching materials to impart knowledge; instead, they design teaching content based on the corpus, guide students in language learning, grasp students' learning status and feedback, adjust and optimize the teaching process in a timely and effective manner, and evaluate learning results.
2 Construction of Corpus Index

An index is a structure that sorts the values of one or more columns of a database table, and creating indexes can significantly improve system performance. First, a unique index guarantees the uniqueness of each row of data in the table. Second, indexes speed up data retrieval, which is the main reason for creating them. Third, they speed up joins between tables, especially when enforcing referential integrity. Fourth, when grouping and sorting clauses are used in data retrieval, the time spent grouping and sorting in the query can be significantly reduced. Fifth, with indexes in place, the query optimizer can be used during query processing to improve system performance.

A. B+Tree Index
The B-Tree index is a balanced tree structure with three basic kinds of components: the root node, branch nodes and leaf nodes. The root node sits at the top of the index structure, the leaf nodes at the bottom, and the branch nodes in between. Leaf nodes contain entries that point directly to data rows in the table, branch nodes contain entries that point to other branch or leaf nodes in the index, and a B-Tree index has only one root node, located at the top of the tree. Each node of a B-Tree index contains not only key values but also data values. Since the storage space of each page is limited, large data values mean that each node can store only a few keys; when the amount of stored data is large, the depth of the B-Tree also grows, which increases the number of disk I/Os during a query and thus affects query efficiency. The B+Tree is an optimization of the B-Tree that is better suited to implementing external-storage index structures; the InnoDB storage engine uses the B+Tree to realize its index structure. In a B+Tree index, all data record nodes are stored in key order on the same leaf level, rather than data being stored throughout the tree; the non-leaf nodes hold only key information, which increases the number of keys each node can store and reduces the height of the tree. The B+Tree differs from the B-Tree
as follows: non-leaf nodes only store key value information, there is a link pointer between all leaf nodes, and data records are stored in leaf nodes. The simple B+Tree structure is shown in Fig. 1.
Fig. 1. B+Tree structure
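To make the structure concrete, the following toy sketch illustrates the two properties emphasized above: key-ordered leaf pages that hold the data records, and a sibling link between leaves that makes range scans sequential. It is a deliberate simplification (two fixed levels, no page splitting or balancing) and not the InnoDB implementation.

import bisect

class Leaf:
    def __init__(self, keys, rows):
        self.keys = keys    # sorted key values in this leaf page
        self.rows = rows    # data records stored alongside the keys
        self.next = None    # link pointer to the right sibling leaf

# Two leaf pages chained together, plus a one-level branch of separator keys.
left = Leaf([100, 200, 300], ["row100", "row200", "row300"])
right = Leaf([400, 500, 600], ["row400", "row500", "row600"])
left.next = right
separators, children = [400], [left, right]   # keys below 400 go to the left leaf

def range_scan(low, high):
    """Descend once from the branch node, then follow leaf links."""
    leaf = children[bisect.bisect_right(separators, low)]
    out = []
    while leaf is not None:
        for k, r in zip(leaf.keys, leaf.rows):
            if k > high:
                return out
            if k >= low:
                out.append(r)
        leaf = leaf.next
    return out

print(range_scan(200, 500))  # ['row200', 'row300', 'row400', 'row500']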
B. Clustered Index
A clustered index sorts and stores the data rows of a table according to their key values. The index definition includes the clustered index column, and each table can have only one clustered index. Only when the table has a clustered index are its data rows stored in sorted order; such a table is called a clustered table, and the clustered index determines the storage order of its data. If the table has no clustered index, its data rows are stored in an unordered structure called a heap. The clustered index structure is shown in Fig. 2 [2].
Fig. 2. Clustered index structure
The advantage of the clustered index is faster data access: sorted searches and range searches on the primary key are very fast because the index and the data are stored in the same B+Tree. The disadvantages are that insertion speed depends heavily on the insertion order, with insertion in primary key order being fastest; that updating the primary key is expensive because it moves the updated row; and that secondary index access requires two index lookups, first finding the primary key value and then finding the row data from it.

C. Non-clustered Index
A non-clustered index does not physically arrange the data; that is, the logical order in the index is not the physical order of the rows in the table. The index consists of ordered pointers to row positions in the table, which can be used to locate data quickly. Because the data referenced by a non-clustered index is stored out of order, each pointer records the offset of the data row within its data page, i.e. the pointer is composed of "data page + data row offset". The non-clustered index structure is shown in Fig. 3.
Fig. 3. Non-clustered index structure
After a non-clustered index is established, when a specific record needs to be found by that field, the database system looks up the index pointer through the relevant system table and then locates the data through the pointer. A query first consults the index table; if the index table is already in the cache, an I/O operation can be avoided. Once the index value of the required data is found in the index table, the position of the target data row can be determined and the data read. If a table contains a non-clustered index but no clustered index, new data is inserted into the last data page and the non-clustered index is then updated. If the table also contains a clustered index, the position of the new row is determined first, and then both the clustered index and the non-clustered index are updated.
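The difference between the two access paths can be illustrated with a toy dictionary model: the clustered (primary key) index maps a key directly to the whole row, while a secondary index maps its key to the primary key value, so a secondary lookup needs a second step. This is an illustrative simplification, not the storage engine's actual data structures.

# Clustered index: primary key -> full row (index and data stored together).
clustered = {
    1001: {"id": 1001, "name": "Zhang", "class": "CS-1"},
    1002: {"id": 1002, "name": "Li",    "class": "CS-2"},
}

# Secondary (non-clustered) index: indexed column -> primary key value.
by_name = {"Zhang": 1001, "Li": 1002}

def find_by_pk(pk):
    return clustered[pk]            # one lookup

def find_by_name(name):
    pk = by_name[name]              # first lookup: secondary index
    return clustered[pk]            # second lookup: back to the clustered index

print(find_by_pk(1002))
print(find_by_name("Zhang"))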
D. Applicable Situations for Clustered and Non-clustered Indexes
When creating indexes, three things should be appropriate: an appropriate number of indexes should be created on the appropriate tables and on the appropriate columns [3]. Clustered and non-clustered indexes are suitable for different situations; a brief summary is shown in Table 1.
3 Application on Corpus in Teaching of Japanese Major The corpus is widely used in the teaching of Japanese major. This article only conducts research from four aspects: vocabulary teaching, oral teaching, listening teaching and translation teaching. A. Corpus-based Vocabulary Teaching Vocabulary is the basic unit of language. Language expression and communication are realized through vocabulary. Without vocabulary, expression and communication cannot be achieved. Vocabulary learning is the core task of foreign language learning [4]. To learn Japanese well, you must master a certain number of Japanese vocabulary. The number of vocabulary and the proficiency of using vocabulary directly affect the language communication ability. John Sinclair proposed the theory of lexical grammar, that is, vocabulary and grammar are interdependent, and batches of real corpus screened through a large corpus show the co-selection relationship between vocabulary and grammar, thus showing the most typical or core use paradigm. The core concept is unify traditional grammar teaching and vocabulary teaching. With the Internet and big data as the background, the corpus contains a large amount of real corpus under certain principles, objectively showing the various characteristics of the target language, and because of its strong representativeness and convenient retrieval, it can guide students in the batch corpus to observe and summarize the collocation paradigms such as grammar, semantics and pragmatics of specific language phenomena to achieve the purpose of mastering the rules of use [5]. The corpus provides new methods and approaches for Japanese vocabulary teaching, which can effectively improve the teaching level. Assist students to immerse themselves in the original Japanese environment in multiple dimensions, systematically master vocabulary in a way closer to native speakers, and improve the accuracy of vocabulary use. First, the corpus-based discrimination teaching [6]. The corpus is all-encompassing, and the rich language materials are presented in a variety of styles. The corpus-based synonym identification method has changed from the traditional “empirical judgment method” to the “method based on experiment and statistics”, realizing qualitative analysis based on quantification and scientific, objective and rigorous way, then give highprecision synonymous discrimination results. Second, the corpus-based word collocation teaching. Through the corpus, the fixed expression of a vocabulary and the matching example sentences can be inquired at any time, and the rich teaching resources can be delivered to students in real time. Break the limitations of traditional fixed phrase and phrase teaching, and obtain comprehensive word collocation information in real time;
Table 1. Applicable situation of clustered index and non-clustered index
No   Operation                                Clustered index   Non-clustered index
1    Columns are often grouped and sorted     Use               Use
2    Return data in a certain range           Use               Nonuse
3    One or rarely different value            Nonuse            Nonuse
4    Small number of different values         Use               Nonuse
5    Large number of different values         Nonuse            Use
6    Frequently updated columns               Nonuse            Use
7    Foreign key column                       Use               Use
8    Primary key column                       Use               Use
9    Frequently modify index columns          Nonuse            Use
By eliminating useless collocations, students learn the high-frequency word collocations that actually co-occur in real life. Third, corpus-based teaching of vocabulary in context. The corpus provides an intelligent way to generate example sentences automatically, which solves the problem of context creation in Japanese vocabulary teaching. For teaching new vocabulary, example sentences matching the students' Japanese proficiency or daily life can be chosen as needed; for teaching classic vocabulary, typical, commonly used example sentences can be selected for demonstration.
B. Corpus-based Oral Teaching
To cope with the various Japanese examinations, teachers mainly teach vocabulary and grammar, and few students can communicate fluently in Japanese; "dumb" Japanese has become the biggest obstacle to learning. Spoken Japanese is a skill, and in the context of increasingly close international exchanges, practical oral knowledge helps learners handle the real problems encountered in work, business and travel abroad. Applying the corpus to oral Japanese teaching can make up for the limited content of textbooks, expand the range and amount of language input, broaden students' horizons, and expose them to language produced in real contexts, making their output richer, more accurate and closer to real life [7]. The corpus provides Japanese teachers with the most authentic and reliable language information; selecting from it in a targeted way according to actual teaching needs not only enriches teaching materials but also supplies students with real corpus data. Using the corpus allows oral teaching content to be built on authentic language, strengthens students' understanding and use of the language in real contexts, and overcomes the negative effect of "learning without using". In oral Japanese teaching, activities are designed to be student-centered, exploratory and discovery-oriented, using the corpus to find and summarize language rules rather than simply completing teaching tasks. Students can use the spoken corpus to study independently or in groups, which is flexible and
has a wide range of applications. When the corpus is integrated into oral teaching, students become researchers and task-completers; while gaining enjoyment from learning, they also improve their independent learning ability [8]. Using the corpus in oral Japanese teaching enriches students' emotional experience, helps raise their Japanese learning ability and overall level, and meets the needs of cross-cultural communication.
C. Corpus-based Listening Teaching
Listening is an important skill in language learning, and in many examinations it serves as a key indicator of students' Japanese ability. Traditional college listening teaching follows the pattern of "explaining words, playing recordings, and checking the answers"; the listening materials lack authenticity and the teaching content is very limited. This model only gives students listening practice and rarely develops their listening ability; students' initiative and autonomy are not mobilized, and there is no skills training, which does not foster autonomous learning or inquiry [9]. Corpus-driven Japanese listening teaching provides rich, authentic resources, enables listening teaching to keep pace with the times, improves its quality and level, and offers new ideas for listening teaching reform. In Japanese listening teaching, some words are easily confused because of their pronunciation, causing comprehension errors and hindering listening comprehension; these points should be highlighted in teaching to draw students' attention. For such vocabulary, besides relying on teaching experience, the best approach is to build an interlanguage corpus to count the difficult points comprehensively and accurately [10]. Corpus-based data-driven learning has changed the teaching concept centered on teachers and textbooks and the teacher's role of controlling the listening training process, guiding students to participate actively and shifting from a teacher-centered to a student-centered method. Teachers extract language chunks and their concordance lines from the corpus and present them to students, shortening the distance between teaching content and real language and providing opportunities to encounter authentic language, which helps the internalization of language knowledge. Through corpus searches, high-frequency and practical language chunks are identified and the teaching focus is found; the teacher processes the corpus, selects a small number of chunks and their co-occurring contexts, and builds a micro-text database that supplies rich, usable, authentic teaching materials for Japanese teaching.
D. Corpus-based Translation Teaching
Translation is the process of selecting the most suitable language chunks and vocabulary according to the meaning of the original text and the communicative context and audience of the translation, and then connecting them. It is an important language skill and an important basis for measuring learners' language ability. In current university Japanese teaching, translation teaching has not received enough attention, and students' translation ability is still quite weak.
Japanese translation has the following difficulties [11]: first, from the perspective of vocabulary, there are a large number of Chinese characters in both Japanese and
Chinese, which is convenient for Chinese learners but also makes it easy to fall into the trap of thinking in Chinese; characters that are completely or basically identical in the two languages are especially likely to be translated literally. Second, the style and tone of the language. Reproducing the artistic value of a literary work requires not only faithfully conveying the original content but also preserving the original style to the greatest extent, so keeping consistency with the original style is another difficulty of Japanese translation. From the study of translation universals and the characteristics of translation in specific languages to the study of translators' evolving styles, and from the analysis of translation norms to the exploration of translation practices and strategies, the corpus is playing an active role in every field of translation. It not only enriches the diversity of translation research but also opens up new possibilities for the development of translation. Corpus-based translation teaching focuses on the recipient's perception and experience of the translation in its social, cultural and communicative context, and strengthens learners' awareness of the relationship between the translating subject, the translated object and the recipient. The bilingual parallel corpus is the most closely related to translation teaching: through retrieval software, dynamic contexts can be displayed, the collocation of words or structures can be observed in specific contexts, and their semantic characteristics in different contexts can be compared [12]. Students can form their own views on translation and their own translation strategies from the parallel corpus. The corpus provides language materials for translation teaching, including dictionaries, manually built corpora and electronic corpora. When building a corpus, more targeted language materials should be chosen; the data and vocabulary should be updated in real time according to social development to keep the corpus current, make it easy for students to search, and standardize the wording of translations. Applying the corpus to translation teaching cultivates students' ability to produce standardized translations [13]. In translation teaching, standardized corpus materials should be chosen and the corpus used to support daily teaching; with the corpus as the core, translation practice teaching can be carried out, the structure of translation teaching improved, students' autonomous learning guided, and their sense of complete translation context cultivated.
References 1. Zhao, J.S., Zhang, F.: Research on innovation of college oral English teaching based on corpus. Teach. For. Reg. 36(3), 65–67 (2020) 2. Blog Garden, Clustered Index and Non-clustered Index. https://www.cnblogs.com/guoyu1/ p/13767894.html. Accessed 25 Nov 2020 3. Blog Garden, Introduction to Clustered Index and Non-clustered Index. https://www.cnblogs. com/Jessy/p/3543063.html. Accessed 25 Nov 2020 4. Yan, L.: English lexical teaching driven by corpus. Coll. Engl. Teach. Res. 18(6), 56–59 (2019) 5. Wang, Z.X., Liu, J., Lu, Q.: An empirical study on the effect of corpus-based comprehensive English vocabulary and grammar teaching. J. HUBEI Open Vocat. Coll. 33(17), 172–174 (2020)
6. Cai, L.: Improve the teaching effect of junior middle school English vocabulary by using corpus skillfully. Ref. Middle Sch. Teach. 12(28), 36–37 (2020) 7. Gong, H.: Research on the problems and suggestions of oral English teaching in senior high school based on corpus. J. Jiamusi Vocat. Inst. 2(15), 112–113 (2019) 8. Feng, X.T.: On the importance of English corpus in oral English teaching. J. Jiamusi Vocat. Inst. 35(12), 381–382 (2018) 9. Liu, L.Y., Meng, Z.K.: The design and development of web-based corpus in the listening instruction of foreign language. Mod. Educ. Technol. 21(7), 72–74 (2011) 10. Bai, X.G.: Strategies for compiling Japanese news listening textbooks based on corpus. Japanese Lang. Educ. Japanese Stud. 2(1), 26–29 (2013) 11. Li, H.: Some difficulties in Japanese-Chinese Translation. C. Educ. Inf. 44(3), 34–35 (2011) 12. Yu, L.L.: Innovative study of corpus applied in college translation courses. J. Chengdu Aeronaut. Polytech. 36(3), 37–39 (2020) 13. Tan, L.: On constructing the corpus-based teaching model in translation practice. J. Heihe Univ. 11(8), 98–100 (2020)
Prediction of Urban Innovation Based on Machine Learning Method Zhengguang Fu(B) Department of Economics, Shanghai University, Shanghai, China [email protected]
Abstract. Based on regression tree and support vector regression in machine learning, this paper predicts the innovation output of Chinese cities. The results show that the prediction error of regression tree in the test set is 0.28, while the prediction error of support vector regression in the test set is 0.33. Therefore, the prediction effect of regression tree is better than that of support vector regression. In addition, we find that GDP, human capital level and foreign investment all have a positive impact on urban innovation output. The GDP index which represents the economic base has the greatest impact on the innovation output of a city. It means that innovative output depends on a sound economic foundation. Keywords: Urban innovation · Machine learning · Prediction accuracy
1 Introduction
Innovation is a powerful driving force for economic and social development. In the world's "Fourth Industrial Revolution", advanced knowledge and technology are becoming increasingly dominant, gradually replacing traditional material capital and repeatedly bringing technological innovation to social production. They are the key strategic resources driving a country's or region's sustained economic growth [1]. As an important regional unit, the city is the main space carrying innovation elements such as talent, enterprises, capital and knowledge. A good urban innovation atmosphere attracts high-level talent, and the growth of high-level technical talent in turn accelerates technological innovation, giving a strong impetus to sustainable economic growth [2]. It is therefore particularly important to identify the factors affecting urban innovation and to forecast urban innovation output. In this paper, the regression decision tree and support vector regression methods in machine learning are used to forecast the innovation output of 274 prefecture-level cities in China, and the prediction accuracy of the two methods is compared.
2 The Model
2.1 Regression Tree
The regression tree is based on the decision tree, so we introduce the decision tree first. The decision tree is a widely used data analysis technique [3]. It is a tree model that maps data features to a target: each branch represents a split on one feature of the data, and each leaf node gives the target value reached at that node. Commonly used decision tree algorithms include ID3 and C4.5 [4]. The tree is grown by recursive splitting, and there are many choices for when to stop the recursion. The simplest is to grow a very deep tree until each leaf contains only one sample, but this leads to overfitting: the tree fits the idiosyncrasies of the training data so closely that its judgments on unseen data are flawed. The commonly used rule is to stop growing the tree when the number of samples in a subtree falls below a threshold or when the purity improvement from a further split is below a threshold, and to take the most frequent class in the subtree as its classification. The main characteristics of the decision tree are as follows: first, it can be visualized easily; second, it is highly interpretable, so it can be learned and explained conveniently from its structure; third, it is computationally efficient. For these reasons, the decision tree has become one of the most widely used classifiers in data mining and machine learning, and a tree can be built from training samples conveniently and efficiently to solve practical problems. The regression tree is a kind of decision tree [5] which uses the Gini coefficient instead of the information gain rate for feature selection; in addition, the classification and regression tree can handle the case where the target value is continuous, by taking the average of the target values in a leaf node as the predicted value of that node. The basic algorithm of the regression tree is as follows:
(1) Build the tree. First, find the best feature to split on. Second, if the data can no longer be split, save the node as a leaf node and return. Then, divide the dataset into left and right subtrees according to the optimal splitting feature: samples whose feature value is greater than the given value go to the left subtree and the rest go to the right subtree. Finally, build the left and right subtrees recursively.
(2) Choose the best splitting feature. First, traverse all values of each feature and compute the error of the resulting split of the dataset. Then, select the feature with the smallest error and its corresponding value as the best split and return them.
(3) Prediction based on the regression tree.
First, determine whether the current node is a leaf; if so, output its prediction, otherwise continue. Then, compare the sample's value of the splitting feature with the split value of the current node. If the sample's value is larger, check whether the left subtree is a leaf node: if it is not, recurse into it and predict there; if it is, output its prediction. The right subtree is handled in the same way.
2.2 Support Vector Regression
Support vector regression (SVR) is the extension of the support vector machine to regression problems [6]. Its basic idea is to carry the hinge-type loss of the support vector machine over to regression [7]. Let the regression function be:
f(x) = β_0 + x′β    (1)
This function predicts the continuous response variable y. The objective function of SVR is:
min_{β,β_0}  (1/2) β′β + C Σ_{i=1}^{n} l_ε(y_i − f(x_i))    (2)
Here z_i ≡ y_i − f(x_i) is the residual (margin), and l_ε is the ε-insensitive loss function, defined as:
l_ε(z_i) = 0,           |z_i| ≤ ε
l_ε(z_i) = |z_i| − ε,   |z_i| > ε    (3)
where ε is a tuning parameter. If the absolute value of the residual is less than or equal to ε, the loss is 0, so the loss function is insensitive to residuals inside a band of width 2ε, hence the name ε-insensitive loss. If |z_i| > ε, the loss is |z_i| − ε, growing linearly one-for-one with the residual. Because SVR uses a hinge-type loss similar to the support vector machine, which can be regarded as a combination of two symmetric hinges, SVR is well suited to data with many variables [8]. In addition, SVR can use the kernel trick, mapping the feature vector x_i into φ(x_i) to obtain a nonlinear regression; and since the ε-insensitive loss is a linear function, it is not sensitive to extreme values and is therefore more robust. These are the main reasons for using SVR. To solve the SVR problem, we can still introduce slack variables and solve it with the Lagrangian. A great strength of support vector machines is that they work well with many variables: intuitively, when the dimension p of the feature vector is large, the data become scattered, making it easier to separate the sample points in p-dimensional space with a hyperplane, which is why SVMs are widely used in text analysis. Second, the SVM is efficient in data storage, because it needs only part of the data (the support vectors) at prediction time [9]. In addition, thanks to the available kernel techniques, the SVM is general and suits highly nonlinear decision boundaries. Its disadvantages include sensitivity to the parameters of the kernel function, and it may perform poorly for truly high-dimensional data. At this time,
the dimensionality of the feature space far exceeds the sample size, so there are only relatively few support vectors to determine the separating hyperplane in that high-dimensional space, which worsens the generalization ability of the model. In addition, because the SVM classifies with a separating hyperplane, it cannot be interpreted probabilistically; for example, one cannot compute the posterior probability of the class assigned to an observation.
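To make the ε-insensitive loss in Eq. (3) concrete, the following minimal Python sketch (not part of the original paper; the sample data are made up for illustration) evaluates the loss for a few residuals and fits a small SVR with an RBF kernel using scikit-learn:

```python
import numpy as np
from sklearn.svm import SVR

def epsilon_insensitive_loss(z, eps=0.1):
    """Loss from Eq. (3): zero inside the 2*eps band, linear outside it."""
    z = np.abs(z)
    return np.where(z <= eps, 0.0, z - eps)

print(epsilon_insensitive_loss(np.array([0.05, -0.3, 0.8]), eps=0.1))

# Toy nonlinear regression with an RBF kernel (illustrative data only).
rng = np.random.default_rng(0)
X = rng.uniform(-3, 3, size=(200, 1))
y = np.sin(X).ravel() + 0.1 * rng.standard_normal(200)

model = SVR(kernel="rbf", C=10.0, epsilon=0.1, gamma=0.5).fit(X, y)
print("number of support vectors:", model.support_.shape[0])
```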
3 Variables
3.1 Dependent Variable
The dependent variable is city innovation output. Scholars have measured urban innovation output along different dimensions; taken together, the number of patent applications of a city (P), which combines theory with practice, is exclusive, and covers applicants including enterprises, individuals, government organizations and universities, is the most commonly used and effective indicator of innovation output.
3.2 Predictor Variables
The following predictor variables were selected in this paper: (1) economic development level, measured by GDP (gdp) and the logarithm of per capita GDP (lnpgdp); (2) human capital level, measured by the logarithm of the number of practitioners in scientific research, technical services and geological exploration (lnhum); (3) industrial structure, measured by the proportion of the tertiary industry (tertiary); (4) the level of foreign capital utilization, measured by the logarithm of the amount of directly utilized foreign capital (lnfdi) and the logarithm of the number of directly utilized foreign capital contracts (lnfic). Table 1 shows the descriptive statistics of each variable.

Table 1. Descriptive statistics of each variable
Variable   Min    Max      Mean     S.D.
P          0      65880    682.60   2873.56
lnpgdp     7.27   12.58    9.87     0.91
tertiary   8.5    80.23    36.79    8.65
lnhum      0      13.87    9.79     2.33
lnfdi      0      14.94    9.18     2.28
lnfic      0      8.70     3.28     1.62
4 Result
We used the variables mentioned above to fit the models: 70% of the data were used as the training set and 30% as the test set.
4.1 Regression Tree
According to Fig. 1, it can be found that when the number of decision trees B > 100, the out-of-bag error has basically stabilized, and further increasing B does not raise it; that is, increasing the number of decision trees will not lead to overfitting of the model.
Fig. 1. Bagging OOB Errors
Table 2 shows the importance of each variable. It can be found that GDP, human capital and the amount of foreign investment are the most important for predicting urban innovation output. These results indicate that a sound economic foundation and the number of scientific and technological personnel are particularly important for innovation output and can promote the improvement of the urban innovation level; in addition, the introduction of foreign capital shows a technology spillover effect that is conducive to raising urban innovation output.

Table 2. The importance of each variable
Variable   MSE     Node purity
gdp        23.50   861.90
lnpgdp     15.62   261.57
tertiary   12.27   287.70
lnhum      18.14   203.55
lnfdi      18.06   535.73
lnfic      11.59   272.30
Figure 2 shows the partial dependence graph of the three variables. It can be found that the three variables have a positive impact on urban innovation output, but it is not completely linear, especially in the tail area of the three variables. Finally, the mean square error in the test set based on the above model is 0.281, which means the test error is small and the prediction effect of the model is very good.
Fig. 2. Partial dependence of the three variables
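A minimal sketch of the workflow in Sect. 4.1 is given below, using scikit-learn's RandomForestRegressor as a stand-in for the bagged regression trees (the paper does not state which software it used). The feature names follow Table 1, but the data frame is filled with randomly generated placeholder values rather than the actual 274-city dataset, so the numbers printed will not match the reported results:

```python
import numpy as np
import pandas as pd
from sklearn.ensemble import RandomForestRegressor
from sklearn.model_selection import train_test_split
from sklearn.metrics import mean_squared_error

# Placeholder data frame with the variable names from Table 1 (values are synthetic).
rng = np.random.default_rng(1)
n = 274
X = pd.DataFrame({
    "gdp": rng.lognormal(8, 1, n),
    "lnpgdp": rng.normal(9.87, 0.91, n),
    "tertiary": rng.normal(36.8, 8.7, n),
    "lnhum": rng.normal(9.8, 2.3, n),
    "lnfdi": rng.normal(9.2, 2.3, n),
    "lnfic": rng.normal(3.3, 1.6, n),
})
y = 0.5 * X["lnpgdp"] + 0.3 * X["lnhum"] + 0.2 * X["lnfdi"] + rng.normal(0, 0.5, n)

# 70/30 split as in Sect. 4, a large number of trees, out-of-bag scoring enabled.
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.3, random_state=0)
forest = RandomForestRegressor(n_estimators=500, oob_score=True, random_state=0)
forest.fit(X_train, y_train)

print("OOB R^2:", forest.oob_score_)
print("variable importance:", dict(zip(X.columns, forest.feature_importances_)))
print("test MSE:", mean_squared_error(y_test, forest.predict(X_test)))
```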
4.2 SVR
We performed support vector regression on the training data and found that 630 support vectors were selected by the model. The penalty parameter C and the radial basis parameter γ were tuned by 10-fold cross-validation over a grid of 10,000 parameter combinations. According to the results, the optimal penalty parameter C was 9.6 and the optimal radial basis parameter γ was 0.1. Finally, we calculated the test error of the model on the test set, which is 0.33; the prediction effect of support vector regression is therefore not as strong as that of the regression tree.
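The parameter search described above can be reproduced in outline with scikit-learn's GridSearchCV; the exact grid used by the author is not given, so the ranges below are illustrative (a 100 x 100 grid gives the 10,000 combinations mentioned), and the training data are assumed to be prepared as in the previous sketch:

```python
import numpy as np
from sklearn.svm import SVR
from sklearn.model_selection import GridSearchCV

# Illustrative 100 x 100 grid over C and gamma, scored with 10-fold CV as in Sect. 4.2.
param_grid = {
    "C": np.logspace(-2, 3, 100),
    "gamma": np.logspace(-4, 1, 100),
}
search = GridSearchCV(SVR(kernel="rbf", epsilon=0.1), param_grid,
                      cv=10, scoring="neg_mean_squared_error", n_jobs=-1)
# search.fit(X_train, y_train)   # X_train, y_train as prepared above
# print(search.best_params_)     # the selected C and gamma
```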
5 Conclusions and Implications
This paper uses the regression tree and support vector regression in machine learning to predict the innovation output of 274 Chinese prefecture-level cities. The comparison shows that the prediction effect of the regression tree is better than that of support vector regression. In addition, this paper finds that urban GDP, human capital and FDI have the greatest influence on urban innovation output, and their influence is positive: increases in GDP and human capital and the inflow of FDI can all enhance the level of urban innovation, and they are also important variables for predicting it. Based on these conclusions, the paper has the following implications. First, since the urban economy has a significant impact on urban innovation, it is necessary to actively promote economic development, support economic transformation, and expand digital infrastructure so as to give full play to the enabling effect of digital technology on technological innovation and the urban economy. Attention should also be paid to the fact that differences in economic level may widen the gap in innovation capacity between cities, and the decisive role of the market in resource allocation should be brought into play: the market should promote the rational flow and efficient agglomeration of innovation elements between urban and rural areas, between cities and between regions, and break the
practice of planning resources by administrative instruction. Second, the level of urban education should be developed vigorously and the enrollment rate and the rate of progression to higher education improved, so as to further raise urban human capital, improve innovation efficiency, and lay a solid foundation for building innovative cities. Third, it is necessary to actively introduce foreign capital and give play to its technology spillover effect, which is one of the important channels through which FDI shapes regional innovation capability; the growth of FDI inflow is of great significance for technological progress [10]. Cities should learn knowledge and technology they lack from abroad, raise their technical level, and actively integrate foreign and local knowledge so as to spark innovation. Finally, the transformation of innovation achievements should be accelerated to ensure that they translate into economic development and benefit people's lives. The ultimate goal of developing urban innovation capability is to raise the level of economic development and improve people's lives, and accelerating the transformation of innovation achievements is the only way to achieve this [11]. Cities where marketization is slow are generally cities with a low level of development; they need to accelerate marketization, provide a good market environment for enterprises' technological innovation, and improve their innovation capability.
References 1. Caragliu, A., Del Bo, C.F.: Smart innovative cities: the impact of Smart City policies on urban innovation. Technol. Forecast. Soc. Chang. 142, 373–383 (2019) 2. Angelidou, M., Psaltoglou, A.: An empirical investigation of social innovation initiatives for sustainable urban development. Sustain. Urban Areas 33, 113–125 (2017) 3. Loh, W.Y.: Classification and regression trees. Wiley Interdiscip. Rev. Data Min. Knowl. Discov. 1(1), 14–23 (2011) 4. Sutton, C.D.: Classification and regression trees, bagging, and boosting. Handbook Stat. 24, 303–329 (2005) 5. Su, X., Wang, M., Fan, J.: Maximum likelihood regression trees. J. Comput. Graph. Stat. 13(3), 586–598 (2004) 6. Noble, W.S.: What is a support vector machine? Nat. Biotechnol. 24(12), 1565–1567 (2006) 7. Mangasarian, O.L., Musicant, D.R.: Robust linear and support vector regression. IEEE Trans. Pattern Anal. Mach. Intell. 22(9), 950–955 (2000) 8. Yang, H., Huang, K., King, I., et al.: Localized support vector regression for time series prediction. Neurocomputing 72(10–12), 2659–2669 (2009) 9. Byvatov, E., Schneider, G.: Support vector machine applications in bioinformatics. Appl. Bioinform. 2(2), 67–77 (2003) 10. Cheung, K., Ping, L.: Spillover effects of FDI on innovation in China: evidence from the provincial data. China Econ. Rev. 15(1), 25–44 (2004) 11. Lyu, L., Sun, F., Huang, R.: Innovation-based urbanization: evidence from 270 cities at the prefecture level or above in China. J. Geog. Sci. 29(8), 1283–1299 (2019). https://doi.org/10. 1007/s11442-019-1659-1
Empirical Research on Population Policy and Economic Growth Based on Big Data Analysis Technology Jin Wang(B) Shanghai University, Shanghai, China
Abstract. At present, the demographic dividend is mainly manifested in a higher proportion of the labor force and in labor allocation efficiency, but the demographic dividend is not always positive and does not last indefinitely. As the birth rate of Zhejiang Province declines year by year and population aging rapidly intensifies, the demographic dividend effect may gradually disappear. Therefore, improving the quality of the labor force and transforming the mode of economic development, with the search for a driving force to replace the demographic dividend as the breakthrough point, is an important measure for strengthening the sustainable development of China's economy. This article uses big data analysis technology to analyze the relationship between Zhejiang's population and its economic development, establishes a model with panel data analysis to empirically study the relationship between Zhejiang's population policy, labor force structure and economic growth, and finally puts forward feasible suggestions in combination with practical investigation.
Keywords: Big data analysis technology · Labor structure · Economic growth · Demographic dividend
1 Introduction
Since the reform and opening up, Zhejiang Province has maintained an economic growth trend for more than 30 years, with an average GDP growth rate of 9.7% over the past ten years. However, in-depth analysis of the reasons for this rapid growth from the perspective of population, supported by big data analysis technology, has only emerged in recent years, and population has become a focus of public attention. In Zhejiang Province's economic growth over the past few decades, material capital accounted for 27% of the impact, the quantity of the labor force for 23%, the quality of the labor force for 22%, population mobility or factor allocation for 21%, and other factors for 5%. Research based on big data analysis technology finds that Zhejiang's high economic growth is mainly due to increased capital investment, a rising labor participation rate and improved labor quality, rather than technological progress [1]. The high labor force
participation rate and high allocation efficiency are important driving forces of economic growth in Zhejiang Province, and behind the role of the labor force in promoting economic growth lies the population. This article uses big data analysis technology to analyze the relationship between Zhejiang's population and its economic development, establishes a model with panel data analysis methods to empirically study the relationship between Zhejiang's population policy, labor force structure and economic growth, and finally puts forward feasible suggestions in combination with practical investigation.
2 Indicators and Data Selection
To study the relationship between economic growth, labor structure and population policy, this article examines indicators for these three factors:
2.1 Evaluation Indicators of Economic Growth
This paper chooses Zhejiang Province's GDP as the output indicator of economic growth. The GDP of each year is converted to comparable prices with 1998 as the base year and revised to real GDP. The formula is: Real GDP = Nominal GDP / GDP price index × 100. The GDP and GDP price index data are taken from the 2015 Statistical Bulletin of Zhejiang Province and historical statistical yearbooks.
2.2 Evaluation Indicators of Population Policy
Population policy mainly refers to the government's population planning, and its main effect is on the capital stock, including material capital and human capital [2]. Physical capital is measured by fixed asset investment and calculated with the perpetual inventory method at a depreciation rate of 10%; the relevant data are shown in Table 1. The calculation formula for physical capital is:
K_t = (1 − α_t)K_{t−1} + I_t    (1-1)
In the above formula, K_t and I_t respectively represent the capital stock in period t and the new investment in the current period, and α_t is the depreciation rate. Because there is no generally accepted algorithm for the human capital stock in economics, and for reasons of representativeness and feasibility of indicators, this article uses the "education years method" to estimate the human capital stock of Zhejiang Province [3]. Since part of the total population is unemployed, while the number and quality of employees play the major role in economic growth, this article estimates the human capital stock of Zhejiang Province from the number of employees over the years and their educational attainment. The basic formula is:
H_t = Σ_i HE_{ti} · h_i    (1-2)
Table 1. Physical capital stock during 2008–2019
Years   Investment in fixed assets   Fixed asset investment price index   Actual investment in fixed assets   Total stock of physical capital
2008    756.01     105.5    716.60     1296.31
2009    874.53     102.2    855.70     1500.64
2010    1024.87    104.1    984.50     1754.63
2011    1310.38    102.8    1274.68    2160.74
2012    1735.79    106.7    1626.79    2774.01
2013    2479.60    101.5    2442.96    3907.07
2014    3378.10    103.5    3263.86    5462.52
2015    4180.24    104.7    3992.59    6930.07
2016    5040.53    102.1    4936.86    8530.19
2017    6407.20    100.4    6381.67    10824.84
2018    7059.36    102.5    6983.26    11529.26
2019    7369.24    102.1    7025.11    11253.78
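The perpetual inventory recursion of Eq. (1-1) can be written out as a short Python sketch. This is illustrative only: the starting stock below is a hypothetical value (the pre-2008 stock is not reported), while the investment figures are taken from the "actual investment" column of Table 1, so the resulting series will not necessarily reproduce the capital stock column of the table:

```python
def perpetual_inventory(initial_stock, investments, depreciation=0.10):
    """K_t = (1 - depreciation) * K_{t-1} + I_t, applied year by year (Eq. 1-1)."""
    stocks = []
    k = initial_stock
    for invest in investments:
        k = (1 - depreciation) * k + invest
        stocks.append(k)
    return stocks

# Hypothetical starting stock and three years of actual investment from Table 1.
print(perpetual_inventory(1000.0, [716.60, 855.70, 984.50]))
```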
Among them, H_t is the total stock of human capital in year t, HE_{ti} is the size of the labor force at the i-th educational level in year t, and h_i is the number of years of education at the i-th level. This method implies a hypothesis: human capital is formed mainly through formal schooling. The total stock of human capital in Zhejiang Province is:
H = 16 ∗ H_1 + 12 ∗ H_2 + 9 ∗ H_3 + 6 ∗ H_4 + 2 ∗ H_5    (1-3)
2.3 Evaluation Indicators of Labor Force Structure
The evaluation index of the labor structure is mainly determined by the education level and skill level of human capital. The level of human capital in Zhejiang Province is:
h = H / Total number of employees    (1-4)
Although illiterate and semi-literate workers have no or little schooling, the experience, knowledge and skills they acquire in actual production also form a certain amount of human capital, so their number of years of education is set to 2. The relevant data are shown in Table 2.
Table 2. The stock and level of human capital from 2008 to 2019
Years   Number of employees   Human capital stock   Human capital level
2008    1500.59    1094.81    7.30
2009    1510.85    1147.38    7.58
2010    1520.46    1171.80    7.87
2011    1391.36    1033.17    7.35
2012    1401.36    986.02     7.00
2013    1414.76    1001.76    7.19
2014    1446.34    1074.20    7.46
2015    1488.63    1116.39    7.49
2016    1499.56    1226.90    8.19
2017    1500.26    1350.56    9.06
2018    1491.59    1326.63    9.07
2019    1504.97    1353.26    8.87
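The education-years calculation in Eqs. (1-2) to (1-4) can be sketched as follows. The attainment categories and head-count figures below are hypothetical (the paper does not list the employment breakdown by education level); only the weights 16/12/9/6/2 follow Eq. (1-3):

```python
# Years of schooling assumed for each attainment level, as in Eq. (1-3):
# college and above = 16, senior high = 12, junior high = 9, primary = 6,
# illiterate/semi-literate = 2.
EDUCATION_YEARS = {"college": 16, "senior_high": 12, "junior_high": 9,
                   "primary": 6, "illiterate": 2}

def human_capital_stock(employment_by_level):
    """Eq. (1-2): H_t = sum_i HE_ti * h_i."""
    return sum(EDUCATION_YEARS[level] * count
               for level, count in employment_by_level.items())

def human_capital_level(employment_by_level):
    """Eq. (1-4): average years of schooling per employee."""
    total_employees = sum(employment_by_level.values())
    return human_capital_stock(employment_by_level) / total_employees

# Hypothetical employment counts (10,000 persons) by education level.
example = {"college": 300, "senior_high": 450, "junior_high": 500,
           "primary": 200, "illiterate": 50}
print(human_capital_stock(example), round(human_capital_level(example), 2))
```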
3 Model Construction
The American mathematician C. W. Cobb and the economist P. H. Douglas jointly proposed the Cobb-Douglas production function in the 1930s. This formulation introduces human capital as an explanatory variable into the equation, revealing its role as an endogenous variable in the economy. The function form is as follows:
Y(t) = A(t) K^α H^{1−α}    (2-1)
The C-D function has become a basic model of modern economic theory thanks to its high generality in describing economic output and its solid theoretical support, and many later models proposed by economists are improvements on it. The Chinese scholar Wang Jinying (2001) improved the C-D model into what is called the Cobb-Douglas production function with labor input, also known as the effective labor model. Its basic form is:
Y(t) = A(t) K_t^α H_t^β    (2-2)
In the above formula, Y(t) represents the level of economic output in year t, A(t) is total factor productivity, K(t) is the stock of physical capital, and H(t) is the stock of human capital; α is the marginal output elasticity of material capital input and β is the marginal output elasticity of human capital input, that is, the shares of material capital and human capital in economic output. For empirical purposes, taking the logarithm of both sides of the model gives the regression equation:
ln Y_t = ln A_t + α ln K_t + β ln H_t + μ
The growth rate of material capital and the growth rate of human capital represent, respectively, the contribution share of material capital and the contribution share of human
capital, and the quotient of their respective divisions gives their contribution rates to economic growth [4]. The advantage of the effective labor model is that it treats human capital as endogenous; its disadvantage is that it ignores the externality of human capital. Lucas's human capital externality model refines the effective labor model, and its functional form is:
Y(t) = A(t) K_t^α H_t^{1−α} h_t^β    (2-3)
In the above formula, H(t) is the stock of human capital, and h(t) is the level of human capital possessed by the labor force. As with the effective labor model, we first apply a logarithmic transformation to obtain the following regression equation:
ln Y_t − ln H_t = ln A_t + α(ln K_t − ln H_t) + β ln h_t    (2-4)
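As a small intermediate step (not spelled out in the original), taking logarithms of Eq. (2-3) and subtracting ln H_t from both sides gives Eq. (2-4):
ln Y_t = ln A_t + α ln K_t + (1 − α) ln H_t + β ln h_t
ln Y_t − ln H_t = ln A_t + α(ln K_t − ln H_t) + β ln h_t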
4 Model Checking
According to the needs of the model, based on big data analysis technology, the following natural logarithmic values are obtained from the raw data on Zhejiang Province's GDP, human capital, physical capital and labor level; see Table 3.

Table 3. The natural logarithm of each element from 2008 to 2019
Years   LNY    LNK    LNH    LNh
2008    7.12   6.93   9.30   1.99
2009    7.24   7.06   9.35   2.03
2010    7.43   7.17   9.39   2.06
2011    7.57   7.31   9.23   1.99
2012    7.73   7.47   9.19   1.95
2013    7.90   7.68   9.23   1.97
2014    8.06   7.93   9.29   2.01
2015    8.13   8.27   9.32   2.01
2016    8.32   8.61   9.42   2.10
2017    8.52   8.84   9.52   2.20
2018    8.64   9.05   9.51   2.20
2019    8.74   9.29   9.50   2.18
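As an illustration of how the coefficients in Sect. 4.1 could be obtained, the following sketch regresses LNY on LNK and LNH from Table 3 by ordinary least squares using numpy. The paper does not state its estimation software, so small differences from the reported intercept and coefficients (10.16, 0.71, 0.89) are to be expected:

```python
import numpy as np

# Natural-log values from Table 3 (2008-2019): LNY, LNK, LNH.
lny = np.array([7.12, 7.24, 7.43, 7.57, 7.73, 7.90, 8.06, 8.13, 8.32, 8.52, 8.64, 8.74])
lnk = np.array([6.93, 7.06, 7.17, 7.31, 7.47, 7.68, 7.93, 8.27, 8.61, 8.84, 9.05, 9.29])
lnh = np.array([9.30, 9.35, 9.39, 9.23, 9.19, 9.23, 9.29, 9.32, 9.42, 9.52, 9.51, 9.50])

# Effective labor model in log form: lnY = lnA + alpha*lnK + beta*lnH + error.
X = np.column_stack([np.ones_like(lnk), lnk, lnh])
coef, residuals, rank, _ = np.linalg.lstsq(X, lny, rcond=None)
print("intercept (lnA):", coef[0], " alpha:", coef[1], " beta:", coef[2])

# Goodness of fit.
fitted = X @ coef
r2 = 1 - np.sum((lny - fitted) ** 2) / np.sum((lny - lny.mean()) ** 2)
print("R^2:", r2)
```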
4.1 The Results of the Effective Labor Model Estimation
The estimation results are shown in Table 4:
ln Y_t = 10.16 + 0.71 ln K_t + 0.89 ln H_t + μ    (3-1)
It can be seen from Table 4 that the regression coefficients passed the t test, the equation passed the F test, and the goodness of fit was high (R² = 0.987), indicating that the independent variables have a very strong ability to explain the dependent variable [5]. This shows that Zhejiang's GDP, capital stock K and human capital stock H follow a significant Cobb-Douglas production function relationship. Therefore, the effective labor model of human capital for Zhejiang Province is:
Y_t = A_t K_t^{0.71} H_t^{0.89}    (3-2)
The equation for the externality model also passed the F test, and its goodness of fit was high (R² = 0.978), indicating that the independent variables have a very strong ability to explain the dependent variable [6]. This shows that Zhejiang's GDP and its capital stock K, human capital stock H and human capital level h have a significant functional relationship. Therefore, the human capital externality model of Zhejiang Province is obtained as:
Y_t = A_t K_t^{0.69} H_t^{0.31} h_t^{1.33}    (3-3)
The economic significance of the above function is as follows: with other conditions unchanged, for every 1 percentage point increase in capital input, Zhejiang's economic output increases by 0.69 percentage points; for every 1 percentage point increase in the human capital stock, economic output increases by 0.31% [7]; and for every 1% increase in the human capital level, economic output increases by 1.33%.
5 Analysis of Empirical Results
Table 4 shows the growth rate of each factor in Zhejiang Province from 2008 to 2019, from which the contribution rate of each factor to economic growth under the effective labor model is calculated. In Table 4, Y, K, H and h respectively denote the economic output, physical capital input, human capital stock and human capital level of Zhejiang Province. The contribution shares and contribution rates of the various factors to economic growth are then examined [8]. The most important reason for the low stock of human resources is the unreasonable labor structure, which in turn slows economic growth [9].
Table 4. Growth rate of each factor
Years   Y growth rate   K growth rate   H growth rate   h growth rate
2008    13.62    13.78    4.64      3.84
2009    20.62    11.62    4.49      3.83
2010    14.54    15.76    −14.52    −6.61
2011    17.75    16.93    −4.17     −4.76
2012    18.73    23.14    3.67      2.71
2013    17.12    24.00    6.08      3.76
2014    6.97     24.56    3.45      0.40
2015    21.64    26.32    10.04     9.35
2016    21.83    25.61    10.70     10.62
2017    12.55    23.09    −0.47     0.11
2018    10.93    23.14    −1.28     −2.21
2019    10.10    13.82    −3.70     −2.87
6 Conclusion
Combining the above big data analysis techniques and empirical research, we can see the following. First, population policies can effectively adjust the structure of the labor force and thereby deeply affect regional economic development; they can strengthen the accumulation of human capital and thus raise economic growth. Second, the structure and quality of the labor force have a positive effect on economic growth; optimizing the labor structure reduces the dependency burden and thereby promotes growth. For regions like Zhejiang Province, which show the characteristics of "getting old before getting rich", are gradually losing their traditional comparative advantages and have not yet gained new ones, the challenge is mainly to use institutional innovation and policy adjustment to extend the first demographic dividend and create conditions for tapping the second demographic dividend [10]. Specifically, this includes advancing opening up and cooperation in the world economy, continuing to participate in economic globalization and exerting dynamic comparative advantages; promoting the transfer of labor-intensive industries to the central and western regions and improving the stability of labor supply through reform of the household registration system; investing in health so that the labor force adapts to structural adjustment; and establishing a more inclusive social protection system, including social insurance and social assistance programs. The sources of economic growth that require deepening reforms take different lengths of time to produce actual growth effects: some reforms work immediately, while others take time.
References
1. Tong, M., Liu, H., Gao, Q.: The impact of China's population policy on economic growth: an empirical analysis based on inter-provincial panel data. Math. Pract. Knowl. 44(15), 175–185 (2014)
2. Wang, H., Zhu, L.: An empirical study on the impact of population growth on the macroeconomy under the "two-child" policy. Econ. Syst. Reform (6), 32–38 (2017)
3. Gong, S., Ouyang, Z.: Population adaptive regression prediction model and empirical analysis. Math. Stat. Manag. (3), 30–34 (2006)
4. Dongfeng, Y., Guoping, X.: Empirical research and policy recommendations on the spatial growth mechanism of China's large cities: economic development, population growth, road traffic and land resources. Urban Plan. J. 1, 51–56 (2008)
5. Yufen, C.: An empirical analysis of China's resident income, population, education, fiscal policy and monetary policy, and resident consumption models. Math. Stat. Manag. 2, 11–15 (2004)
6. Li, X.: The decisive factor for changes in the school scale of primary and secondary schools: population change or policy? An empirical analysis based on provincial panel data. J. Beijing Norm. Univ. (Soc. Sci. Edit.) (4), 126–135 (2012)
7. Xuan, M., Yuxiang, Y., Chengqian, T.: Research on the income determination of the floating population from the perspective of hierarchical heterogeneity: an empirical analysis based on a hierarchical linear model. Financ. Econ. Theory Pract. 2, 123–129 (2018)
8. Dongjie, G., Bingxin, Y.: An empirical study on family planning, population changes and insufficient consumer demand. Economist 8, 29–37 (2016)
9. Kaiming, G., Jingwen, Y., Lintang, G.: Population policy, labor structure and economic growth. World Econ. 11, 72–92 (2013)
10. Hongluo, W.: Several key issues affecting the trend of China's population policy. Fujian Forum (Hum. Soc. Sci. Edit.) 1, 144–149 (2010)
Technological Framework the Precision Teaching Based on Big Data Meina Yin(B) and Hongjun Liu Applied Technology College of Dalian Ocean University, Dalian, Liaoning, China
Abstract. This paper first introduces precision teaching based on big data from the perspective of exploring a practical path for differentiated teaching, together with its technical and practical basis. It then designs a technical framework for big data precision teaching, introduces the framework's four basic principles, compares them with the views of traditional precision teaching, and analyzes a set of application patterns whose core features are automatic recording, multi-dimensional observation and precise adjustment. Drawing on the application practice of big data precision teaching in model schools, best practices are presented with the test-explanation lesson and the pre-exam review lesson as examples. Finally, a promotion strategy for the technical framework of big data precision teaching is proposed, in the hope of promoting the further development of precision teaching and the continuous deepening of teaching reform.
Keywords: Big data · The framework of precision teaching · Learning situation analysis
1 Introduction In the traditional classroom teaching, most teachers are difficult to fully grasp the situation of each student before class, so that the concept of “teaching students according to their aptitude” and “differentiated teaching” is often mentioned, but it has been difficult to be implemented. One common solution is to reduce class sizes and improve the quality of teachers, so that each student is focused and receives targeted help and guidance. However, at present, China’s total quantity of high-quality education resources is insufficient, the layout is unreasonable, there is not enough education resources to support small class teaching, so there is a lack of the basis to promote the small class teaching model. With the rise of intelligent technology represented by big data and artificial intelligence, with the help of various educational information systems, teachers can understand the situation of each student in the class from multiple dimensions.
© The Author(s), under exclusive license to Springer Nature Singapore Pte Ltd. 2022 B. J. Jansen et al. (Eds.): International Conference on Cognitive based Information Processing and Applications (CIPA 2021), LNDECT 84, pp. 394–401, 2022. https://doi.org/10.1007/978-981-16-5857-0_50
Technological Framework the Precision Teaching Based on Big Data
395
With the support of relevant teaching theories, more students have the opportunity to get differentiated teaching guidance from teachers and personalized learning recommendation from the application system, which is in line with the development trend of education and teaching in the new era. Such teaching practices are collectively referred to as Big Data Accurate Teaching because they rely on big data technology and accurate evaluation methods of learning effects to achieve differentiated teaching and personalized learning.
2 The Technical and Practical Basis of Big Data Precision Teaching To achieve accurate teaching with big data, on the one hand, it is necessary to grasp the overall situation of students dynamically through the behavior data of students with the support of big data technology. On the other hand, we need to constantly optimize the teaching mode, methods and strategies under the support of the concept of precision teaching. 2.1 Big Data Technology Big data precision teaching mainly involves the following big data technologies: 2.1.1 Learning Behavior Acquisition Technology In all kinds of teaching process information system such as online learning system, online homework system, classroom teaching system, etc., the use of system logs, or real-time services, the record of behavior occurrence time, type of behavior and system context information, and with the support of relevant big data tools, even to track, record and gather real-time and the dynamical process of learning behavior and results. 2.1.2 Learning Situation Analysis Technology Using data statistics, data analysis, data mining and data visualization technology based on the behaviors of student process and the result data to analysis the student individual or group learning styles and habits, academic and psychological conditions to realize the overall grasp of students’ current situation and the pre-evaluation of the students’ future. 2.1.3 Personalized Recommendation Technology Based on the data of learning behavior process and results, a learning style model was established by using psychometric techniques. Knowledge tree construction and cognitive diagnosis were used to build knowledge graph [1]. The collaborative filtering model is established by using collaborative filtering technology. Take use of integrated learning style model, knowledge graph and collaborative filtering model, to provide intelligent recommendation service for students’ personalized learning.
2.2 The Method of Precision Teaching and Its Practices
The method of precision teaching was first proposed by Dr. Lindsley in the 1960s. Lindsley found that better learning effects could be obtained by observing and recording the behavior frequency and response speed of subjects under free-operant laboratory conditions and adjusting their activities accordingly [2]. The discovery was first applied in special education with great success; Lindsley later extended it to school education and, in practice, combined it with Skinner's neo-behaviorist learning theory, finally forming a systematic precision teaching method. Its core idea is as follows: for learning behaviors that can be directly observed, the behavior frequency is recorded regularly and plotted on a standardized chart, the learning effect is judged from the trend of the frequency shown in the chart [3], and teaching methods and learning strategies are adjusted dynamically on that basis.
2.3 The Integration of Big Data Technology and Precision Teaching Methods in Practice
Precision teaching has been applied in Florida, California, Washington and other parts of the United States since the 1970s and has achieved great success, but only a few applications have been used consistently over a long period and become exemplary. By analyzing two typical schools, Sacajawea Elementary School in Great Falls, Montana, and Morningside Academy in Seattle, Washington, this study found that the key reason precision teaching became a "best game that nobody played", unable to be promoted at scale or sustained over time, is that the traditional technical means supporting it have obvious limitations [4]. Traditional precision teaching records only a few indicators, such as behavior frequency and response time, and analysis based on a single dimension inevitably leads to problems such as over-generalization and strong subjectivity, which in turn affect teachers' and students' judgment of the learning effect. In addition, teachers and students are required to record behavior data regularly, fill in forms and draw trend charts, which is cumbersome, error-prone and hard to keep up over time. In precision teaching based on big data technology, the recording of learning behavior is completed automatically by educational information systems, which makes recording easier and its content more systematic and comprehensive; behavioral data analysis is supported by big data application systems such as the learning situation analysis system, and learning methods and teaching strategies are easier to adjust with the support of the educational information systems [5].
3 Design of Big Data Precision Teaching Technology Framework In order to better help teachers master the precise teaching methods of big data, apply the precise teaching tools of big data, and guide manufacturers to design and develop the
big data precision teaching products, this research team selected seven representative schools located in Guizhou, Anhui, Zhejiang, Guangdong and other places from more than 70 smart education application demonstration schools in China and carried out case research [6]. On this basis, the big data precision teaching technology framework (hereinafter referred to as the framework) was designed by comparison with Lindsley's views on precision teaching. The framework focuses on how the technologies and methods should be implemented and applied; it stipulates four basic principles and a set of application patterns and also summarizes several best practices, as shown in Fig. 1.
Fig. 1. Big data precision teaching technology framework
3.1 Basic Principles
The design and application of products using the big data precision teaching method should follow four basic principles:
3.1.1 Focusing on Observable Behavioral Data
This principle differs from the traditional precision teaching view of focusing only on directly observable behavior. Traditional methods can capture only behaviors that can be observed directly, whereas big data technology can also record hidden behaviors and actions. In big data precision teaching this is reflected, for example, in comparing the content and frequency of students' comments on teachers' micro-lesson resources with the response time and frequency of raising hands in class.
3.1.2 Using Multi-dimensional Indicators to Measure Performance
This principle differs from the traditional precision teaching view of using frequency alone to measure performance. In traditional precision teaching, the behavior frequency, that is, the average number of behavioral responses per unit time, is considered a better measurement index than the accuracy of behavior results [7]. In big data precision teaching, the behavior frequency, the accuracy of behavior results and even more contextual information about the activity can all be extracted and used automatically, and taking multi-dimensional indicators as the basis for decision-making avoids bias and makes decisions more scientific.
3.1.3 Using Learning Situation Analysis Tools
This principle differs from the traditional precision teaching view of using the standard celeration chart. The standard celeration chart plots behavior frequency on a chart with standardized meaning and scales and measures learning performance by the trend of the frequency; its outstanding advantage is that it is simple and intuitive, but it conveys little information. Learning situation analysis based on big data, by contrast, can produce visual charts for different themes and dimensions.
3.1.4 Taking the Learner's Performance as the Sole Basis for Decision-Making
This principle is in line with the traditional precision teaching view that "the learner knows best". Big data precision teaching emphasizes careful advance design of the teaching and learning process; in the design process, however, the choice of teaching methods, learning strategies and learning content depends largely on the subjective experience of teachers, students or others, so the learning effect inevitably varies from person to person.
3.2 Application Mode
The application mode of precision teaching with big data is shown in Fig. 2; its core content is automatic recording, multi-dimensional observation and precise adjustment:
3.2.1 Automatic recording means that the educational information system automatically records learning behavior data, including behavior process data (e.g., raising hands in class, asking questions, starting an assignment, submitting an answer) and behavior result data (e.g., the outcome of answering after raising a hand and the evaluation of submitted answers).
3.2.2 Multi-dimensional observation means observing and analyzing students' learning behavior data from multiple dimensions with learning situation analysis tools; in practice, commonly used tools include the knowledge point distribution map, the achievement trend map, and the teacher-student social network map, among others.
3.2.3 Precise adjustment means precisely intervening in classroom teaching methods and individual learning strategies based on the conclusions of learning situation analysis and
Fig. 2. The application mode of precision teaching
In practice, we usually adjust the form of teaching, arrange intensive exercises for specific knowledge points, and provide individual tutoring for specific students.
3.3 Best Practices
A best practice is a technology, method, mode, or process specification that achieves a good practical effect and is summarized and solidified from real cases that follow the basic principles and application mode of big data precision teaching; it needs to be continuously supplemented, revised, updated, and accumulated in the course of application and practice [8]. Supported by intelligent learning products developed for classroom teaching, independent learning, and after-school homework, the seven demonstration schools of intelligent-education product application investigated by the research team have all carried out pilot work on big data precision teaching to varying degrees [9]. Based on the practice results, the research team and the demonstration schools jointly summarized a number of best practices and selected the process specifications with the best effects, for example: (1) the best practice of applying big data precision teaching in test-explanation classes; (2) the best practice of applying big data precision teaching in pre-exam review classes.
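To make the automatic-recording and multi-dimensional-observation steps of Sect. 3.2 more concrete, the following is a minimal, hypothetical Python sketch; the field names and behavior labels are illustrative assumptions, not part of any specific product described in this paper.

```python
from dataclasses import dataclass
from datetime import datetime
from collections import Counter

@dataclass
class LearningBehaviorEvent:
    # Hypothetical fields for one automatically recorded behavior (Sect. 3.2.1).
    student_id: str
    behavior: str          # e.g. "raise_hand", "ask_question", "submit_answer"
    result: str            # e.g. "correct", "incorrect", "not_evaluated"
    timestamp: datetime

def observe_by_dimension(events):
    # Multi-dimensional observation (Sect. 3.2.2): simple per-student and
    # per-behavior counts of the kind a learning-situation chart is built from.
    per_student = Counter(e.student_id for e in events)
    per_behavior = Counter(e.behavior for e in events)
    return per_student, per_behavior
```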
4 Promotion Strategy of the Big Data Precision Teaching Technology Framework
Traditional precision teaching has long faced the problem of popularization. To avoid repeating these mistakes in big data precision teaching, work is needed on both internal and external fronts: on the one hand, improving the content of the framework to enhance its operability; on the other hand, optimizing the external environment in which the framework is applied, improving teachers' ability to apply it, and cultivating good habits of use.
4.1 Cooperation Between Production, Education, Research, and Application Enriches the Content of the Framework
The current framework was formed through the observation, summary, refinement, and verification of practical cases and lessons from the pilot schools. To keep the framework representative and leading, the practical results of production, learning, research, and application need to be continuously fed back into it: (1) based on industrial practice, supplement the framework with more powerful learning-analysis tools; (2) based on learning practice, supplement the framework with a more operable learning guide; (3) based on scientific research practice, supplement the framework with key technical models and important theoretical methods; (4) based on application practice, add more good practices to the framework and continuously enrich the details of how the basic principles and application mode are applied in practice.
4.2 Collaborative Promotion Between Regions and Schools to Enhance Teachers' Application Ability
Big data precision teaching has significantly changed traditional teaching modes and methods, so schools and even whole regions must cooperate to reform the management system and create an appropriate external environment for teaching applications [10]. At the same time, teachers should not only change their teaching habits but also improve their information-based teaching level through learning-situation analysis tools. Teachers' ability to apply the framework can be divided into three levels: at the first level, the teacher can understand and interpret the visual charts of learning-situation analysis; at the second level, the teacher can make comprehensive use of the extracted index data to create visual charts of learning-situation analysis; at the third level, the teacher can extract additional index data from learning behavior data by summarizing and modeling based on his or her own teaching practice, and create visual charts of learning-situation analysis.
5 Conclusion
In short, the big data precision teaching technology framework is a method system that has been tested in practice and helps to improve teaching quality and learning outcomes. With the arrival of the Educational Informatization 2.0 era, and with further improvement of the framework's content and of teachers' ability to apply it, the framework is expected to be popularized nationwide on a large scale.
References 1. State Council: National Education Development—the 13th Five-Year Plan, vol. 28, no. 7, p. 10 (2018). http://www.gov.cn/zhengce/content/2017-01/19/content_5161341.htm
2. Ministry of Education. Educational Informatization 2.0 Action Plan. http://www.moe.gov.cn/ srcsite/A16/s3342/201804/t20180425_334188.html 3. Binder, C., Watkins, C.L.: Precision teaching and direct instruction: measurably superior instructional technology in schools. Perform. Improv. Q. 3, 74–96 (1990) 4. Lindsley, O.R.: Precision teaching: by teachers for children. Teach. Except. Child. 3, 10–15 (1990) 5. Lindsley, O.R.: Precision teaching’s unique legacy from B. F. Skinner. J. Behav. Educ. 2, 253–266 (1991) 6. Athabasca University. Precision teaching: Concept definition and guiding principles. https:// psych.athabascau.ca/open/lindsley/concept.php 7. Zou, X., Jia, W.: Research on change data capture based on database log. Microcomput. Syst. 33(3), 531–536 (2012). (in Chinese) 8. Zhang, L., Yan, Z.: Oracle database log analysis based on LogMiner. Comput. Netw. 3, 145–147 (2013). (in Chinese) 9. Lou, Y.: An Introduction to Big Data Technology. Tsinghua University Press, Beijing (2017).(in Chinese) 10. Zhang, J.: Big Data Daily Record Architecture and Algorithm. Electronic Industrial Press, Beijing (2014).(in Chinese)
Analysis of the Intervention of Yoga on Emotion Regulation Based on Big Data Shasha Wang and Yuanyuan Liu(B) Hebei Polytechnic Institute, Hebei 050000, Shijiazhuang, China
Abstract. Yoga is a comprehensive physical, psychological, and spiritual practice that can help people establish a special mechanism of self-concentration, reduce self-perception, and change cognitive styles. Because of its unique benefits for mental and physical health and its simple, relaxing form of exercise, yoga has long been widely loved. Research on yoga in fields such as psychology and medicine has gradually increased, and it has developed into an important subject of scientific research. With the establishment of yoga education as a research direction in China, researchers hope to draw more systematic and specific conclusions. A good mood is related not only to the development of physical and mental health but also to the development of cognitive range, the improvement of posture, the cultivation of good character, and the ability to adapt to the current environment. Many researchers have therefore focused on the relationship between yoga and emotion, which may be one reason why the status of yoga in psychological research continues to grow. This article focuses on the intervention of yoga, supported by big data, in emotion regulation. It first uses the literature research method to explain methods of emotion regulation and the role of yoga practice in regulating emotion, and then verifies the effect of yoga on emotion regulation through an experiment. The results show that after the intervention, the yoga group's tension, depression, fatigue, energy, and anger scores differed significantly from those before the experiment (r = 1.56–4.45, P < 0.05), while the panic scale score did not change significantly (r = 2.07, P > 0.05). Keywords: Big data · Yoga practice · Emotion regulation · Intervention analysis
1 Introduction
More and more scientists and researchers have conducted empirical exploration and research on problems related to yoga practice [1, 2]. These studies have shown that yoga can not only effectively reduce negative emotions such as stress and depression and improve mental health, but also encourage practitioners to increase their positive emotions [1, 3]. Looking back over decades of yoga research, it is clear that although the fields involved are wide and many results have been achieved, most of them concentrate on the effects of meditation and
posture exercises, and few studies examine the reason for these effects through the structure of the human body [3, 4]. Some studies have shown that people who practice yoga for a long time have higher gray matter density in certain brain regions, which provides an important explanation of some yoga effects from the perspective of human body structure. On the whole, however, the physiological reasons for these effects are still being discussed; this line of yoga research is still in its infancy, and more work is needed in the future [5, 6]. Regarding the intervention of yoga in emotion regulation, many related studies have concluded that human psychological activities are closely related to the body's physiological responses, and that the influence of psychological and spiritual factors on the body is sometimes far greater than that of other physiological and social factors [7]. Since yoga is a form of exercise with obvious interaction between human physiology and the mind, Chinese researchers have tried to verify this hypothesis experimentally and have found that practicing yoga can improve practitioners' mental quality and emotions [8]. Some researchers scanned the cerebral blood flow of people who had practiced yoga for 15 years while they were meditating and found that blood flow to the frontal cortex increased significantly. A large number of studies have shown that this region is related not only to attention but also to more complex factors that affect moral processing: cognitive control of the dominant social and emotional responses evoked by moral dilemmas, egocentrism, and the production of anger, frustration, and other reactions such as moral disgust [9]. Researchers have also explained emotion regulation in different ways from their own viewpoints. Taking the process of human psychological activity as the starting point, it is generally believed that emotion regulation is a process not only of adapting one's emotional experience and related behaviors and feelings, but also of adapting to or maintaining emotional stimuli, experience, and cognition [10, 11]. This article focuses on the intervention of yoga in emotion regulation based on big data. It first uses the literature research method to explain methods of emotion regulation and the effect of yoga practice on emotion regulation, and then verifies the effect of yoga on emotion regulation through experiments.
2 Research on Yoga and Emotion Regulation
2.1 Methods of Emotion Regulation
(1) Expression adaptation method. This method consciously changes a person's expression and posture; changing facial expressions and posture in turn affects the inner emotional experience. For example, when people are extremely tense, they can consciously relax the muscles of the face and body, and when their emotions become very depressed, they can make themselves keep smiling.
(2) Breathing adjustment method. Breath adjustment is an effective means of dealing with psychological and emotional fluctuations. Deep breathing can stabilize a restless state of mind; when emotionally agitated, one can use slow exhalation and paced inhalation training to relax the mind.
(3) Transfer method. Shift attention away from the negative aspects of negative emotions. When encountering unpleasant things and feeling depressed, one can instead think of beautiful scenery or things one likes, so that most of the anger accumulated in the heart dissipates.
(4) Music adaptation method. Music can effectively regulate people's physical and mental state. When listening to beautiful music, people feel happy and comfortable, tension is eliminated, and fatigue is reduced.
2.2 The Role of Yoga Practice in Regulating Emotions
(1) The regulating effect of meditation on emotions. Emotions arise largely from the subconscious. When a person faces a problem and the mind is excessively disturbed by the outside world, thoughts are transformed into emotions in the subconscious. The practice of meditation can bring people to a state of release and balance body and mind. In meditation practice, fragmented thoughts can be cleared and worries eliminated, bringing a clear mind and a peaceful inner world.
(2) The regulating effect of breathing patterns. Respiration is the foundation of physical and mental health. There are three breathing methods in yoga: abdominal breathing, chest breathing, and full breathing; emotions can be regulated by controlling one's own breathing. Most beginners use the most common abdominal breathing, which is long and deep, while a few students use chest breathing; compared with the other two methods, chest breathing is shallower and its effect smaller. People who have practiced yoga for a long time use full breathing. When facing challenges, these breathing techniques can calm the mind, relieve tension and stress, and regulate emotions.
(3) The regulating effect of asanas on emotions. Anxiety often brings fatigue, insomnia, and endocrine disorders. Yoga asanas take several forms, each with its own effects. Through asana practice, the limbs and organs of the body are stretched and worked, blood circulation is promoted, the body's systems are strengthened, and the discomfort caused by insomnia, headaches, and endocrine disorders is improved. Asana practice can effectively relieve physical and mental fatigue, bring an experience of physical and mental relaxation, and help people regain self-confidence and better deal with the challenges of study and life.
2.3 Big Data Algorithm
(1) Min Count algorithm
Let x be a random variable. If there is a non-negative real function f(x) such that for any real numbers a < b, P{a ≤ x < b} = ∫_a^b f(x)dx, then x is called a continuous random variable and f(x) is its probability density function. The Min Count algorithm, proposed in earlier work on cardinality estimation, estimates the cardinality from statistical information about the hash results; as the name implies, it uses the minimum value of the hashed sequence. Assuming that the minimum of the hash results of all elements in the multiset is X, the algorithm takes the estimate of the set cardinality n to be approximately 1/X − 1. The probability density of the minimum of n independent uniform random variables on [0, 1] is f(x) = n(1 − x)^(n−1), so the mathematical expectation of the minimum M satisfies

E(M) = ∫_0^1 x · n(1 − x)^(n−1) dx = 1/(n + 1)   (1)

From this expectation over the interval [0, 1], it is natural to estimate n from the observed minimum M. When the Min Count algorithm is analyzed mathematically, however, x = 0 turns out to be a divergence point of the corresponding expectation integral:

E(1/M) = ∫_0^1 (1/x) · n(1 − x)^(n−1) dx = ∞   (2)

(2) LogLog Counting algorithm
The main idea of the algorithm is as follows. For a data set D, scan all elements x in the set. Each scanned element x is mapped by a hash function into a binary bit string over {0, 1}; ρ(x) denotes the position at which the first "1" appears in that bit string (for example, ρ(01…) = 2). After all elements in the set have been hashed, record the maximum value of this first-one position over all hash results, denoted ρ_max(x); the cardinality estimate is then

E = 2^(ρ_max(x))   (3)
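The following is a minimal, single-register Python sketch of the two estimators described above. It is illustrative only: practical LogLog counting uses many registers and bias correction, and the particular hash function used here is an assumption rather than part of the original algorithms.

```python
import hashlib

def _hash01(item):
    # Map an item to a pseudo-uniform value in (0, 1] via a 64-bit hash prefix.
    h = int.from_bytes(hashlib.md5(str(item).encode()).digest()[:8], "big")
    return (h + 1) / 2**64

def min_count_estimate(items):
    # Min Count idea: E(M) = 1/(n+1) for the minimum M of n uniform values,
    # so n is estimated from the observed minimum as roughly 1/M - 1.
    m = min(_hash01(x) for x in items)
    return 1.0 / m - 1.0

def _rho(value, width=32):
    # Position of the first "1" bit in the width-bit binary string of value.
    for i in range(1, width + 1):
        if (value >> (width - i)) & 1:
            return i
    return width + 1

def loglog_estimate(items, width=32):
    # Single-register LogLog-style estimate E = 2 ** max rho(x).
    max_rho = 0
    for x in items:
        h = int.from_bytes(hashlib.md5(str(x).encode()).digest()[:4], "big")
        max_rho = max(max_rho, _rho(h, width))
    return 2 ** max_rho

print(min_count_estimate(range(1000)), loglog_estimate(range(1000)))
```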
3 The Intervention Experiment of Yoga on Emotion Regulation Based on Big Data
3.1 Building a Sentiment Dictionary Based on Big Data
By combining the National Taiwan University Chinese Sentiment Polarity Dictionary (NTUSD) and the HowNet sentiment lexicon, 12,340 sentiment words were obtained, which constitute the basic sentiment dictionary. Since this research mainly concerns negative emotions, the expansion focuses on negative-emotion terms.
3.2 Expansion of the Emotional Dictionary
This article uses natural language processing technology to expand the negative-word dictionary. NLTK is a toolkit for natural language processing that contains a large number of corpora, data, and documentation resources. NLTK supports building complete text-processing pipelines in Python, and the toolkit covers the basic tasks of natural language processing, providing standard interfaces and implementations for text segmentation, part-of-speech tagging, text classification, syntactic analysis, semantic inference, and so on (a minimal sketch of such a dictionary-expansion step is given at the end of Sect. 3).
3.3 Experimental Design
(1) After the students of the experimental group enter the laboratory, the environment is kept quiet to avoid external interference. First, they sit still and pay attention to breathing for 5 min; yoga teachers (with professional certificates) guide practitioners in correct physical adjustment, mental adjustment, breathing, and self-image adjustment in a natural setting. After 5 min, asana practice begins. During the exercise, students cooperate with their breathing, watch the teacher's movements, and gently stretch their limbs toward the edge of their limits; the teacher reminds them to stay within their own physical limits and not to compare themselves with others. After 40 min of exercise, they rest and meditate for 15 min, lying on their backs and, following instructions, completely relaxing every part of the body from head to toe; after relaxation, the coach leads them into a state of contemplation. The whole process takes 60 min, practiced five times a week for eight weeks. All subjects received specific personal guidance and concentrated exercises to ensure that the practice methods were correct and free of interference. Participants in the yoga group performed only yoga training and did no aerobic exercise during the eight weeks.
(2) During the eight weeks, the students in the blank control group carried on with their usual living habits and received no yoga training.
3.4 Questionnaire Survey
Psychological evaluation questionnaires were distributed to the experimental group and the blank control group before and after the experiment. The number of questionnaires issued was 30, and the number returned was 25.
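As an illustration of the dictionary-expansion step mentioned in Sect. 3.2, the sketch below uses NLTK's WordNet interface to propose synonym candidates for a seed list of negative words. The seed words are hypothetical, and the paper's actual dictionaries (NTUSD and HowNet) are Chinese resources, so this is only an English-language analogue of the procedure.

```python
import nltk
from nltk.corpus import wordnet as wn

# nltk.download("wordnet")  # one-time download of the WordNet corpus

def expand_negative_dictionary(seed_words):
    # Collect WordNet lemma names (synonyms) of each seed word as candidate
    # additions to the negative-emotion dictionary; candidates would still
    # need manual screening before being merged into the dictionary.
    expanded = set(seed_words)
    for word in seed_words:
        for synset in wn.synsets(word):
            for lemma in synset.lemmas():
                expanded.add(lemma.name().replace("_", " ").lower())
    return expanded

seeds = ["nervous", "angry", "fatigued", "depressed", "panicked"]  # hypothetical seeds
print(sorted(expand_negative_dictionary(seeds)))
```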
4 Analysis of Experimental Results
4.1 Intra-group Comparison of Scores on the Emotional Subscales Before and After the Experiment
Based on the constructed emotional dictionary, the experimental group's scores on six emotion words (tension, anger, fatigue, depression, energy, and panic) were selected for comparison. The experimental results are shown in Table 1:
Table 1. Number of experimental components

Emotion        Before training   After training
Nervousness    3.40              2.71
Anger          2.13              1.42
Fatigue        2.48              2.26
Depression     2.07              1.56
Energy         3.48              4.45
Confusion      2.31              1.56
Fig. 1. Number of experimental components (before- and after-training scores of the yoga group on the six emotion subscales)
After the intervention, the yoga group's tension, depression, fatigue, energy, and anger scores differed significantly from those before the experiment (r = 1.56–4.45, P < 0.05), while the panic scale score did not change significantly (r = 2.07, P > 0.05). Figure 1 shows that after the continuous practice, the negative emotions of the yoga group decreased while their positive emotions increased, so the yoga practice group improved overall; fatigue showed the most obvious emotional change. Students in the yoga group generally reported that their emotions had calmed down, that they communicated better with others, and that their vitality had increased.
4.2 Scores of the Blank Group Before and After the Experiment
Based on the constructed emotional dictionary, the blank control group's scores on the same six emotion words (tension, anger, fatigue, depression, energy, and panic) were selected for comparison. The experimental results are shown in Table 2.
Table 2. Number of blank components

Emotion        Before training   After training
Nervousness    3.44              3.41
Anger          2.28              2.34
Fatigue        2.69              2.45
Depression     2.39              2.32
Energy         3.30              3.10
Confusion      2.51              2.52
Fig. 2. Number of blank components (before- and after-training scores of the blank control group on the six emotion subscales)
It can be seen from Fig. 2 that the blank control group's scores on the six emotions show no significant differences before and after the experiment.
5 Conclusions
With the rapid development of the modern economy, environmental problems, study and social interaction, employment pressure, and relationships place people under tremendous pressure. Such emotions should not be ignored; people must learn to adjust them reasonably, since releasing pressure calms the mind and promotes the harmonious and healthy development of body and mind. From the analysis of yoga's effect on emotion regulation, it is not difficult to see that although yoga is not the only way to regulate stress, the effect of yoga practice on relieving stress is obvious. With the simplest form of exercise, it calms the body and the mind, helps practitioners maintain a positive and pleasant attitude, and achieves the harmony and unity of body, mind, and spirit in a real sense.
References 1. Christopher, M.S., et al.: A pilot study evaluating the effectiveness of a mindfulness-based intervention on cortisol awakening response and health outcomes among law enforcement officers. J. Police Crim. Psychol. 31(1), 15–28 (2015). https://doi.org/10.1007/s11896-0159161-x 2. Maliken, A.C., Katz, L.F.: Exploring the impact of parental psychopathology and emotion regulation on evidence-based parenting interventions: a transdiagnostic approach to improving treatment effectiveness. Clin. Child. Fam. Psychol. Rev. 16(2), 173–186 (2016) 3. Bouazza, H., Bendella, F.: Adaptation of a model of emotion regulation to modulate the negative emotions based on persistency. Multiagent Grid Syst. 13(1), 19–30 (2017) 4. Kittler, C., Gische, C., Arnold, M., et al.: The effect of a mindfulness-based programm on athletes’ emotion regulation. Z. Sportpsychol. 25(4), 146–155 (2018) 5. Alavizadeh, S.M., Sepa, H., Mansour, M., Entezari, S., et al.: Development and validation of emotion regulation strategies in germophobia questionnaire in Iran. Pract. Clin. Psychol. 8(4), 307–316 (2020) 6. Van Meter, A.R., Youngstrom, E.A.: Distinct roles of emotion reactivity and regulation in depressive and manic symptoms among euthymic patients. Cogn. Ther. Res. 40(3), 262–274 (2015). https://doi.org/10.1007/s10608-015-9738-9 7. Yang, Y., Perkins, D.R., Stearns, A.E.: “I started to feel better now”: qualitative findings from client narratives on early recovery in inpatient substance use treatment. Int. J. Ment. Heal. Addict. 18(4), 1048–1066 (2020) 8. Trueba, A.F., Pluck, G.: Social support is related to the use of adaptive emotional regulation strategies in ecuadorian adolescents in foster care. Psych 3(2), 39–47 (2021) 9. Hamid, N., Boolaghi, Y., Moghadam, A.: The efficacy of acceptance and commitment based therapy (ACT) on depressive symptoms and cognitive emotion regulation strategies in depressive students. Int. J. Psychol. 12(1), 5–29 (2018) 10. Zhang, S., Shi, C., Jiang, X., et al.: Analysis of the trend of global power sources based on comment emotion mining. Glob. Energy Interconnection 3(3), 283–291 (2020) 11. Seo, J.H., Choi, J.T.: Opinion mining analyses by online media on the introduction of big data-based free semester system. J. Eng. Appl. Sci. 12(10), 2725–2730 (2017)
Innovation of Employee Performance Appraisal Model Based on Data Mining Jingya Wang(B) Business School of Henan University, Kaifeng 475000, Henan, China
Abstract. The most fundamental factor in the survival and development of an enterprise is its human resources. To strengthen employees' enthusiasm, sense of responsibility, and sense of belonging, high-standard, high-quality performance management is a key step in enterprise human resource management, and applying data mining technology to employee performance appraisal is an effective way to improve management level and efficiency. This article studies the innovation of an employee performance appraisal model based on data mining. Building on data mining technology and using information management and statistical analysis tools, it focuses on selecting appropriate business application models so that subjective factors in the appraisal are adjusted and weakened, extracting implicit but useful information and summarizing the relevant content models and internal laws. This provides a comprehensive, objective, and fair method for companies to carry out employee performance evaluation and a reliable basis for hiring, evaluating, selecting, and rewarding employees. A non-parametric Bootstrap test shows that the indirect effect of feedback seeking behavior is 0.19 with a 95% confidence interval of [0.123, 0.267], excluding zero, indicating that feedback seeking behavior plays a partial mediating role and that hypothesis H2 holds. Keywords: Data mining · Performance appraisal · Employee innovation performance · Proactive personality
1 Introduction
Performance appraisal is not only the basis for employee salary adjustment and promotion training but also an effective means for employees to examine themselves and find their shortcomings [1, 2]. A scientific performance appraisal mechanism can not only retain and attract talent but also stimulate the vitality, motivation, and creativity of the workforce, thereby enhancing the competitiveness of the enterprise. Performance appraisal is directly related to each employee's income, promotion, and career development and is a very important part of enterprise human resource management [3, 4]. Only by establishing and using a scientific and reasonable performance appraisal mechanism can we comprehensively, objectively, and correctly evaluate the contributions and performance of employees, achieve comprehensive, objective, and
correct rewards and punishments, truly motivate employees, and ultimately achieve the goal of improving corporate competitiveness.
Many scholars at home and abroad have conducted extensive research on employee performance appraisal models. For example, Ali Z believes that performance evaluation should be performed by the manager who knows the employee best, so it is often performed by the employee's line manager [5]; Maulidina R believes that a highly satisfactory performance evaluation result increases the employee's sense of belonging to the company and reduces turnover [6]; Chahar B argues that performance appraisal should combine static and dynamic indicators, combine staged assessment with individual projects, and combine the individual with the team [7].
This article first sorts out the demand analysis and basic goals of a theoretical model of employee performance appraisal based on data mining; then, drawing on resource conservation theory, it discusses the mechanism by which team member exchange affects employee innovation performance, constructs the theoretical model, and proposes research hypotheses. Through empirical research on 314 samples, with Company H as the survey object, it explores the existing problems of the company's employee performance appraisal and their causes, and on this basis proposes performance optimization suggestions based on data mining, providing a reference for enterprises to establish a scientific and reasonable employee performance appraisal mechanism.
2 Innovative Research on the Theoretical Model of Employee Performance Appraisal Based on Data Mining
2.1 Demand Analysis and Basic Goals of the Theoretical Model of Employee Performance Appraisal Based on Data Mining
(1) Application demand analysis
There are six main requirements for the performance appraisal data mining information system (hereinafter, the appraisal system). The system should have relatively complete data storage capacity. It should provide data mining and data analysis functions: performance appraisal without data mining is in a sense a product of subjective judgment, which contradicts the requirement of objectivity in performance management, and data mining can be repeated. The appraisal system should be well extensible and support adjustment of the data mining models; the combination of data mining and performance management is still at an early, running-in stage, and in practice some mining models may turn out to be unsuitable for performance appraisal and need further revision. The appraisal system should provide an interface for outputting results: since appraisal results affect employees' salary and promotion, the output port must connect with the corresponding modules of other information systems. It should offer a variety of intuitive graphical and tabular outputs. Finally, the system should include functions such as operation logs and authority management.
(2) Economic feasibility analysis
Realizing this system requires that the enterprise has already established a relatively complete information infrastructure for collecting human resource data. Deploying a data mining system on that basis is reasonable, whereas companies lacking human resource informatization face a large initial investment and a longer preparation period.
(3) Technical feasibility analysis
After years of research and practice, data mining technology has matured, and its model types and mining functions have become increasingly powerful [8, 9]. In employee performance appraisal, through reasonable selection of the index system and model training, knowledge can be discovered from basic information to support decision-making. At present, the basic information collection and work records of Company H's employees have been fully informatized; the hardware and software levels meet the required standard, and the conditions for implementing the information system are in place.
(4) Basic goals of information system development
The basic goal of the performance appraisal data mining information system is to use data mining models to weaken the impact of subjective factors in performance appraisal as much as possible and to reduce the subjective bias caused by factors such as the recency effect and the primacy effect. At the same time, suitable data mining methods can be selected according to the different types of indicators in the appraisal, organically combining the specific business work with information science and technology. This improves the scientific nature of the appraisal, reduces the randomness of manual scoring, and makes the appraisal fairer, more impartial, and more authoritative, further improving the persuasiveness of the evaluation work and the participation enthusiasm and cohesion of employees.
(2) The mediating role of feedback seeking behavior Feedback seeking behavior means that employees obtain valuable information by observing the external environment or external behaviors, and then make self-adjustment to meet the needs of organizational performance. According to resource conservation theory, high-level TMX as a rich work resource will motivate employees to work hard and make positive behaviors. Therefore, when employees encounter uncertain problems at work, they are more willing to seek feedback from team members. First of all, the accessibility of the feedback source and the strength of its relationship will affect the individual’s willingness to seek feedback [10]. The higher the quality of the relationship between employees and the source of feedback, the more conducive it is for them to conduct feedback seeking to adjust their own behavior [11]. Secondly, individual value perception obviously promotes seeking motivation, while cost perception is just the opposite. Therefore, high-quality TMX will reduce the cost of employees’ feedback seeking, thereby promoting employees’ feedback seeking behavior. According to the value-added spiral effect of the resource preservation theory, when the individual holds more resources, the resource will have a value-added effect, and the individual can obtain more new resources. The information that employees seek to obtain through feedback is a kind of resource, and these resources will add value and contribute to the improvement of employee innovation performance. First, seeking feedback will let employees know what their superiors and colleagues think about his work performance, so that they can change their original thinking mode and think and work in a new way. Second, when individuals actively seek feedback, they can make full use of surrounding information, which will improve the individual’s adaptability to work goals, promote individual innovation behavior, and improve innovation performance [12]. Therefore, when the exchange level of team members is high, it will promote individual feedback-seeking behavior, and through feedback-seeking behavior, it helps to improve the innovation performance of employees. Based on this, the following hypotheses are proposed: H2: Feedback seeking behavior plays an intermediary role in the relationship between team member exchange and employee innovation performance. (3) The moderating effect of proactive personality Proactive personality is a dynamic process that has the tendency to actively change, affect the surrounding environment, and then shape the initiative. Specifically, unlike individuals who passively adapt to the environment, individuals with a proactive personality will be better at identifying opportunities in the surrounding environment and take a series of actions until the goal is achieved. Therefore, in high-level team member exchange relationships, In order to successfully complete work tasks, individuals with a higher degree of proactive personality are more willing to seek feedback. Therefore, the following assumptions are made: H3: Proactive personality has a significant positive moderating effect in team member exchange and feedback seeking behavioral relationships. In summary, the theoretical model studied in this paper is shown in Fig. 1:
Fig. 1. Theoretical model: team member exchange affects employee innovation performance through feedback seeking behavior, with proactive personality moderating the relationship between team member exchange and feedback seeking behavior
2.3 FCM Data Mining Algorithm Based on Hadoop
Following the basic idea of parallelizing the FCM algorithm, the MapReduce design of FCM splits the cluster-center update into a numerator part and a denominator part:

SO_k = Σ_{j=1}^{N_k} u_ij^m · X_j   (1)

SI_k = Σ_{j=1}^{N_k} u_ij^m   (2)
There are two main steps in the Map phase: on each map node, calculate the membership degrees {u_ij} of all data samples with respect to the selected initial cluster centers, and then compute the corresponding partial terms u_ij^m and u_ij^m · X_j.
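A minimal Python sketch of this numerator/denominator split is given below, assuming standard Euclidean-distance FCM memberships with fuzzifier m; the partitioning of the data stands in for Hadoop map tasks and is purely illustrative.

```python
import numpy as np

def memberships(X, centers, m=2.0):
    # Standard FCM membership u_ij of sample x_j in cluster i.
    d = np.linalg.norm(X[None, :, :] - centers[:, None, :], axis=2) + 1e-12
    inv = d ** (-2.0 / (m - 1.0))
    return inv / inv.sum(axis=0, keepdims=True)

def map_partial_sums(X_part, centers, m=2.0):
    # "Map" step on one data partition: numerator SO_k and denominator SI_k.
    u_m = memberships(X_part, centers, m) ** m      # u_ij^m
    so = u_m @ X_part                               # sum_j u_ij^m * x_j
    si = u_m.sum(axis=1)                            # sum_j u_ij^m
    return so, si

def reduce_update(partials):
    # "Reduce" step: combine the partial sums and form the new cluster centers.
    so = sum(p[0] for p in partials)
    si = sum(p[1] for p in partials)
    return so / si[:, None]

# Toy usage on two partitions of a small synthetic data set.
rng = np.random.default_rng(0)
X = rng.normal(size=(100, 3))
centers = X[rng.choice(len(X), 3, replace=False)]
for _ in range(10):
    parts = [map_partial_sums(p, centers) for p in np.array_split(X, 2)]
    centers = reduce_update(parts)
```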
3 Research Design and Data Collection for the Employee Performance Appraisal Model Based on Data Mining
3.1 Research Content
This paper takes employees of team-based work units as the research object to verify the feasibility of the employee performance appraisal model designed in this research.
3.2 Research Methods
The study uses the questionnaire survey method; questionnaires were distributed to and collected from Company H over the Internet.
3.3 Data Collection
A total of 383 questionnaires were issued and 383 were returned. After excluding 69 invalid questionnaires, 314 valid questionnaires remained, for an effective response rate of 82%.
4 Data Analysis and Results
4.1 Common Method Bias and Confirmatory Factor Analysis
First, a Harman single-factor test was performed on the scale items. The cumulative explained variance is 62%, and the explained variance of the first principal component is 18.6%, indicating no serious common method bias. Second, a confirmatory factor analysis was performed on the four variables in the study; the four-factor model fits well, with χ2 = 478.022, df = 247, χ2/df = 1.935 < 3, CFI = 0.938, IFI = 0.939, TLI = 0.925, and RMSEA = 0.055 < 0.08, indicating good discriminant validity between the variables.
4.2 Regression Analysis
Direct effect test. M6 in Table 1 shows that hypothesis H1 holds.
Mediation effect test. According to M2, M7, and M8 in Table 1, feedback-seeking behavior plays a partial mediating role. To further test the mediation effect, the non-parametric Bootstrap method was used; the indirect effect of feedback seeking behavior is 0.19, and the 95% confidence interval is [0.123, 0.267], excluding zero, indicating that feedback seeking behavior plays a partial mediating role, so hypothesis H2 holds (a minimal sketch of this bootstrap procedure is given after Table 1).
Moderating effect test. According to M4, hypothesis H3 holds.
4.3 Analysis of H Company's Satisfaction with the Current Performance Evaluation System
According to the survey of H Company's employees' satisfaction with the current performance appraisal system, shown in Table 2, 63% of the respondents are dissatisfied with the current appraisal system overall, and more than half chose "dissatisfied" on the items concerning the scientific nature of the system's indicators and the consistency between the company's strategic goals, department goals, and personal goals. Figure 2 shows that the existing performance appraisal system indeed has many problems in practice: in its implementation there is no performance monitoring and communication, the strategic level does not pay enough attention to it as a systematic project, the appraisal process amounts to a subjective evaluation of work performance over a certain period, and the systematic nature of the evaluation is questioned.
4.4 Analysis of H Company's Satisfaction with Current Performance Appraisal Methods
The survey of H Company's satisfaction with the current performance appraisal methods is shown in Table 3: 56% of the employees are dissatisfied with
Table 1. Hierarchical regression results. Models M1–M4 take feedback seeking behavior as the dependent variable and models M5–M8 take employee innovation performance as the dependent variable; the predictors are the control variables (gender, age, education level, working years, unit nature), TMX, feedback seeking behavior, proactive personality, and the TMX × proactive personality interaction, with R2, ΔR2, and F reported for each model. Note: *P < 0.05; **P < 0.01; ***P < 0.001
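A minimal sketch of the percentile-bootstrap test of the indirect effect reported in Sect. 4.2 is given below. The variable names (x for team member exchange, med for feedback seeking behavior, y for innovation performance) are hypothetical, and the control variables are omitted for brevity.

```python
import numpy as np

def indirect_effect(x, med, y):
    # a: slope of med on x; b: slope of y on med controlling for x; indirect = a*b.
    a = np.polyfit(x, med, 1)[0]
    design = np.column_stack([np.ones_like(x), x, med])
    b = np.linalg.lstsq(design, y, rcond=None)[0][2]
    return a * b

def bootstrap_indirect(x, med, y, n_boot=5000, alpha=0.05, seed=0):
    # Percentile bootstrap: resample cases with replacement and recompute a*b.
    rng = np.random.default_rng(seed)
    n = len(x)
    draws = np.empty(n_boot)
    for i in range(n_boot):
        idx = rng.integers(0, n, n)
        draws[i] = indirect_effect(x[idx], med[idx], y[idx])
    lo, hi = np.percentile(draws, [100 * alpha / 2, 100 * (1 - alpha / 2)])
    # A confidence interval that excludes zero indicates a significant indirect effect.
    return indirect_effect(x, med, y), (lo, hi)
```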
Table 2. H Company's satisfaction with the current performance appraisal system

Item                                                     Very satisfied   Satisfaction   Basically satisfied   Dissatisfied
Consistency between strategic goals and personal goals   6%               23%            16%                   56%
System science                                           14%              9%             20%                   57%
System justice                                           22%              18%            23%                   36%
Systematic                                               14%              21%            30%                   35%
Overall evaluation of the system                         12%              10%            15%                   63%
Fig. 2. H Company's satisfaction with the current performance appraisal system
Table 3. H Company's satisfaction with current performance appraisal methods

Item                                      Very satisfied   Satisfaction   Basically satisfied   Dissatisfied
Implementation of the evaluation system   14%              46%            20%                   20%
Motivating                                8%               13%            23%                   56%
Evaluation method                         14%              29%            33%                   24%
the company’s current performance appraisal methods; a total of 60% of employees said they were satisfied with the implementation of H company’s existing evaluation system.
Fig. 3. H Company's satisfaction with current performance appraisal methods
Figure 3 shows that the questionnaire results on the implementation of the appraisal system are fairly satisfactory, indicating that in the course of performance promotion, knowledge workers still fully affirm the implementation of performance appraisal; it is the scientific evaluation methods for knowledge workers that need further optimization. 56% of the interviewees are dissatisfied with the motivating effect of the company's existing performance appraisal system, indicating that the appraisal method was not designed scientifically and rationally according to the characteristics and work behaviors of different employees, so the staff did not really benefit from the appraisal. As for the appraisal method itself, many employees are not professional human resource managers, so the questionnaire results for this item are evenly distributed, which indirectly indicates that training on the indicator method is not in place.
5 Conclusion
Performance appraisal is an important part of human resource management; it plays a very important role in the rational use of talent and the development of human resources and serves as an instructive "ruler". Aiming at the problem of excessive subjectivity in the performance evaluation process, this paper uses data mining tools to model and parameterize subjective judgment as much as possible and uses multi-angle, multi-level data collection, statistics, and induction to obtain more objective and accurate results. The following conclusions are drawn: team member exchange has a significant positive effect on employee innovation performance; feedback seeking behavior plays a partial mediating role in the relationship between team member exchange and employee innovation performance; and proactive personality positively moderates the relationship between team member exchange and feedback-seeking behavior. The following recommendations are therefore made for performance appraisal. First, create conditions to improve the level of relationship
between team members. Nowadays, many organizations work in the form of teams; leaders should create more opportunities and a better working atmosphere and strengthen exchanges between members, thereby improving the exchange relationships among them and optimizing the environment in which employees seek feedback. Second, employees can collect valuable information by seeking feedback, which is conducive to self-innovation and very important for innovation performance; leaders should therefore formulate and implement relevant systems and build information exchange platforms within the organization so that employees feel greater organizational support for seeking feedback. Third, focus on cultivating employees' proactive personality. Employees with a more proactive personality are more likely to take the initiative, seize opportunities, and change the environment, which is conducive to the growth and development of the company; leaders should therefore pay attention to selecting and training employees with these characteristics so that they become the main force of enterprise innovation.
References 1. Islami, X., Mulolli, E., Mustafa, N.: Using management by objectives as a performance appraisal tool for employee satisfaction. Future Bus. J. 4(1), 94–108 (2018) 2. Aydın, A., Tiryaki, S.: Impact of performance appraisal on employee motivation and productivity in turkish forest products industry: a structural equation modeling analysis. Drvna Industrija 69(2), 101–111 (2018) 3. Rony, Z.T.: Competency model of employee performance appraisal preparation in the company construction: a qualitative method. Syst. Rev. Pharm. 11(12), 2071–2077 (2020) 4. Ashford, S.J., Cummings, L.L.: Feedback as an individual resource: personal strategies of creating information. Organ. Behav. Hum. Perform. 32(3), 370–398 (1983) 5. Ali, Z., Mahmood, B., Mehreen, A.: Linking succession planning to employee performance: the mediating roles of career development and performance appraisal. Aust. J. Career Dev. 28(2), 112–121 (2019) 6. Maulidina, R., Arini, W.Y., Damayanti, N.A.: Analysis of employee performance appraisal system in primary health care. Indian J. Public Health Res. Dev. 10(12), 1950 (2019) 7. Chahar, B.: Performance appraisal systems and their impact on employee performance: the moderating role of employee motivation. Inf. Resour. Manag. J. 33(4), 17–32 (2020) 8. Taufiq, R., Septarini, R.S., Hambali, A., et al.: Analysis and design of decision support system for employee performance appraisal with Simple Additive Weighting (SAW) method. J. Inform. Univ. Pamulang 5(3), 275 (2020) 9. Seers, A.: Team-member exchange quality: a new construct for role-making research. Organ. Behav. Hum. Decis. Process. 43(01), 118–135 (1989) 10. Janssen, O., Prins, J.: Goal orientations and the seeking of different types of feedback information. J. Occup. Organ. Psychol. 80(2), 235–249 (2007) 11. Chen, Z., Lam, W., Zhong, J.A.: Leader-member exchange and member performance: a look at individual-level negative feedback-seeking behavior and team-level empowerment climate. J. Appl. Psychol. 92(1), 202–212 (2007) 12. Athmeeya, H.P., Samartha, V., Tm, R., et al.: Manifestation of idiosyncratic rater effect in employee performance appraisal. Probl. Perspect. Manag. 18(3), 224–232 (2020)
Influencing Factors of Users’ High-Impact Forwarding Behavior in Microblog Marketing Based on Big Data Analysis Technology Yunfu Huo and Xiaoru Xue(B) School of Economics and Management, Dalian University, Dalian, Liaoning, China
Abstract. With the development of the information age, Microblog marketing has become one of the most important marketing methods for companies. To enable enterprises to obtain effective information quickly and achieve good marketing results in a relatively short time, this article studies the factors affecting users' high-influence forwarding behavior under Microblog marketing. It establishes a multiple linear regression model to study how user attribute characteristics and Microblog text content characteristics influence users' high-influence reposting behavior. The results show that authenticated users and male users promote high-influence forwarding behavior; marketing Microblogs involving celebrity and lottery content also positively affect it, while URLs inhibit it; and the text sentiment, @ mentions, and pictures of marketing Microblogs have no significant influence on users' reposting behavior. Keywords: Information technology age · Microblog marketing · User behavior
1 Introduction
According to a statistical report released by the China Internet Network Information Center in 2021, the number of Chinese Internet users had reached 989 million by December 2020, and instant messaging tools such as Microblog have become important channels of information exchange in people's lives. For enterprises, marketing is no longer limited to traditional media, and Microblog has quickly become one of their essential marketing tools. The marketing effect depends heavily on the reposting and diffusion of information by Microblog users. In this era of rapid information growth, companies that want to obtain better marketing results in a relatively short time therefore need to understand the factors that prompt users to repost, and in particular the factors influencing users' high-influence forwarding behavior, so that they can develop more reasonable and efficient marketing strategies and achieve better marketing results.
Domestic and foreign research on the factors influencing user forwarding behavior mainly focuses on two aspects: user attributes and Microblog content characteristics. Based on user attributes, user authority and user gender are used as influencing
factors to predict Microblog users’ forwarding behavior, which affects users’ real-time information sharing behavior [1]. Based on Microblog content characteristics, the text sentiment of Microblog has an impact on user reposting behavior to a certain extent. Many factors such as tags and URLs are the main influencing factors of user reposting behaviour [2]; Microblog including hashtag, @ symbol and pictures will also affect users’ reposting behavior. As an important content of social network analysis, influence analysis has now achieved certain research results. Predecessors’ evaluation of users’ high influence under Microblog mainly starts from two aspects: communication effect and content. The spread of Microblog largely relies on interactive behaviors such as forwarding to accelerate the spread of Microblog. Influential users usually get more reposts. In terms of content, it is very important to build a blog post that can attract more users to repost. From the perspective of content characteristics, text content is the most important among all the characteristics of the scale of Microblog reposting. Whether it is a positive word or a negative word, it can make Microblog easier to be reposted. Moreover, users with topics (including #) reposted significantly more than users with very low topic relevance. At the same time, punctuation such as emoticons, question marks and exclamation marks will also improve the user’s influence. However, it is worth noting that blog posts with @ are not easy to be forwarded due to their privacy, so they will not affect the user’s influence. Based on previous research results of high-influence users, this article summarizes and draws lessons from the high-influence forwarding behavior of users. In summary, if any one of the following is satisfied, it can be considered as a user’s high-impact forwarding behavior:(1) The Microblog reposted by the user has been reposted twice or more times. (2) The Microblog reposted by the user is equipped with text, #, emoji or punctuation marks. At present, a large number of scholars have carried out related studies on Microblog marketing strategies and influencing factors of users’ forwarding behaviors, but almost no scholars have further studied the forwarding behaviors with high influence of users under Weibo marketing. Therefore, in order to adapt to the rapid development of The Times, improve the efficiency of enterprise microblog marketing and make the research more targeted, it is particularly necessary to study the influencing factors of users’ high-influence forwarding behavior under microblog marketing.
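The two-condition operationalization of a high-impact forwarding behavior given above can be expressed as a simple predicate. This is a minimal sketch: the function and field names are hypothetical, and the emoji and punctuation patterns are rough assumptions.

```python
import re

def is_high_impact_repost(times_reposted, added_comment):
    # Condition (1): the user's repost is itself reposted twice or more.
    if times_reposted >= 2:
        return True
    # Condition (2): the repost carries added text, a #hashtag#, an emoji,
    # or punctuation such as question or exclamation marks.
    has_hashtag = "#" in added_comment
    has_punct = bool(re.search(r"[!?！？]", added_comment))
    has_emoji = bool(re.search(r"[\U0001F300-\U0001FAFF]", added_comment))
    has_text = bool(added_comment.strip())
    return has_text or has_hashtag or has_punct or has_emoji

print(is_high_impact_repost(0, "太棒了！#新品#"))   # True (added text, hashtag, punctuation)
print(is_high_impact_repost(1, ""))                  # False (plain repost, reposted only once)
```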
2 Theoretical Analysis and Hypothesis User reposting behavior is affected by the user’s social attributes and social influence. Studies have shown that authenticated users are highly active and will publish or forward more online content within a certain period of time. Moreover, they are more likely to obtain more social resources, and other users are more inclined to refer to their behaviour [3]. Therefore, based on the above discussion, the hypothesis H1 is proposed: The more authenticated users among the marketing Microblog reposters, the more high-impact reposting behaviors will be generated. Existing research has shown that in online social media, gender does affect the behavior of users in real-time sharing of information. In addition, some scholars found that the more male users accounted for in the reposting network, the more times this Microblog was reposted [4]. Therefore, this article proposes the following hypothesis
H2: The more male users among the reposters of a marketing Microblog, the more high-impact reposting behaviors will be generated.
Many scholars believe that the emotional orientation of text has a certain impact on user behavior, and empirical research on online social media shows that users' behavior is affected by other users' online emotional expressions [5]. Therefore, in order to verify whether text sentiment has an impact on users' high-impact forwarding, this paper proposes hypothesis H3: The text sentiment of marketing Microblogs will affect users' high-impact forwarding behavior.
The mention of celebrities in marketing Microblog content will also have an impact on user behavior. Some scholars have pointed out that marketing activities involving celebrities have a significant impact on consumers: a celebrity with more followers increases preference for the brand, and marketing Microblogs effectively convert celebrity fans into brand fans and establish brand clustering, so marketing Microblogs containing celebrity content may generate more user interactions [6]. To explore this issue, this article proposes hypothesis H4: Marketing Microblogs involving celebrity content will promote users' high-influence reposting behavior.
Lottery content in a marketing Microblog will also have a certain impact on user behavior. Lottery content strengthens interaction with users and stimulates their enthusiasm for forwarding; when a marketing Microblog contains lottery content, users feel that it is valuable and want to share the information with their friends or followers, so they forward it [7]. Therefore, hypothesis H5 is proposed: Lottery content will encourage users to carry out high-impact forwarding behavior.
Marketing Microblogs containing URLs are more likely to be forwarded: a URL enhances the richness of the content, improves the credibility of the marketing Microblog, and helps users better understand the information it transmits, thereby promoting high-impact forwarding [8]. To verify this, hypothesis H6 is proposed: Marketing Microblogs with URLs will promote users' high-impact forwarding behavior.
According to previous studies, the @ logo in marketing Microblog content has little effect on either high-impact or ordinary reposting. Although the @ logo attracts attention, its strong directivity means it may have little relevance to most audiences, so it does not significantly affect the forwarding behavior of mass users [9]. To verify this, hypothesis H7 is proposed: Containing the @ logo will not affect users' high-impact forwarding behavior.
The inclusion of pictures in marketing Microblog content will also affect users' forwarding behavior [10]. Pictures are diagnostic information with a certain visual impact and influence on user behavior; Mitchell found that pictures provide users with the most intuitive experience, strengthen persuasiveness, and affect users' decision-making to a large extent. In view of this, hypothesis H8 is proposed: The pictures used in marketing Microblogs will affect users' high-impact forwarding behavior.
3 Empirical Analysis
3.1 Data Collection and Preprocessing
This article uses the Octopus Collector to collect Microblog marketing information about mobile phones in 2020, yielding 623 valid records, and then captures 104,809 detailed forwarding records of these Microblogs with a Python program. The data are then filtered and records without the characteristics of high-impact forwarding behavior are deleted, leaving 42,142 valid records.
3.2 Research Variables and Model Settings
According to the research content and hypotheses, the explanatory variables for user attributes are user identity authentication and user gender, and the explanatory variables for Microblog text and content are the text sentiment score of the marketing Microblog and whether it contains celebrity effects, lottery content, URLs, @ symbols, and pictures. User identity authentication is represented by the number of authenticated users among the high-influence forwarders, and user gender by the number of male users among them. Sentiment is scored with a machine learning method, using the SnowNLP library in Python. The features celebrity effect, lottery content, URL, @ symbol and picture are coded as 1 if present and 0 otherwise. To study the above problems, the model is constructed as follows:

High-impact = β0 + β1·Verified + β2·Gender + β3·Emotion + β4·Celebrity + β5·Lottery + β6·URL + β7·@logo + β8·Picture + ε   (1)

3.3 Descriptive Statistical Analysis
Fig. 1. User identity figure.
Fig. 2. User gender figure.
Fig. 3. Sentiment figure.
After statistical analysis, most of the users who conduct high-impact forwarding in the sample are ordinary users, and authenticated users account for only a small portion. In terms of gender, the proportion of men is higher than that of women; see Fig. 1 and Fig. 2 for details. In terms of Microblog text, most marketing Microblogs carry positive emotions; see Fig. 3 for details.
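The sentiment scores summarized in Fig. 3 come from the SnowNLP-based scoring and the 0/1 feature coding described in Sect. 3.2. The following is a minimal sketch of that variable construction, not the authors' actual script; the DataFrame and column names are hypothetical, and the lottery keywords are illustrative assumptions.

```python
# Minimal sketch of the variable construction in Sect. 3.2 (hypothetical names).
import pandas as pd
from snownlp import SnowNLP  # pip install snownlp

def build_features(df: pd.DataFrame) -> pd.DataFrame:
    """df is assumed to hold one marketing Microblog per row with a 'text' column."""
    out = pd.DataFrame()
    # SnowNLP returns the probability (0-1) that the text is positive.
    out["Emotion"] = df["text"].apply(lambda t: SnowNLP(t).sentiments)
    # Binary content features: 1 if the feature appears in the post, otherwise 0.
    out["URL"] = df["text"].str.contains("http", na=False).astype(int)
    out["At_logo"] = df["text"].str.contains("@", na=False).astype(int)
    out["Lottery"] = df["text"].str.contains("抽奖|转发抽", na=False).astype(int)
    return out
```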
3.4 Empirical Result Analysis
In this paper, the variance inflation factors (VIF) of the variables are examined first; the results (see Table 1) show that the variables are reasonable and can all be included in the research model.

VIF_i = \frac{1}{1 - R_i^2}   (2)

R^2 = \frac{SSR}{SST} = \frac{\sum_{i=1}^{n} (\hat{y}_i - \bar{y})^2}{\sum_{i=1}^{n} (y_i - \bar{y})^2}   (3)

If the VIF value is less than 10, there is no collinearity problem among the variables.

Table 1. Results of multicollinearity analysis

Variable    VIF     1/VIF
Verified    4.090   0.245
Gender      5.170   0.194
Emotion     1.040   0.959
Celebrity   1.630   0.615
Lottery     2.430   0.412
URL         1.720   0.583
@logo       1.590   0.630
Picture     1.050   0.954
Mean VIF    2.34
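The collinearity check in Eqs. (2)-(3) and the linear model (1) can be reproduced outside of the statistical packages used in this paper; the following is a hedged sketch with statsmodels (not the authors' original workflow). The variable names mirror Table 1, and the DataFrame name `data` and the column `High_impact` are assumptions.

```python
# Hedged sketch: VIF per Eq. (2) and OLS estimation of model (1) with statsmodels.
import pandas as pd
import statsmodels.api as sm
from statsmodels.stats.outliers_influence import variance_inflation_factor

X_COLS = ["Verified", "Gender", "Emotion", "Celebrity",
          "Lottery", "URL", "At_logo", "Picture"]

def vif_table(data: pd.DataFrame) -> pd.DataFrame:
    X = sm.add_constant(data[X_COLS])
    # variance_inflation_factor computes VIF_i = 1 / (1 - R_i^2) for column i.
    rows = [{"Variable": col, "VIF": variance_inflation_factor(X.values, i)}
            for i, col in enumerate(X.columns) if col != "const"]
    return pd.DataFrame(rows)

def fit_model(data: pd.DataFrame):
    X = sm.add_constant(data[X_COLS])
    return sm.OLS(data["High_impact"], X).fit()  # coefficients analogous to Table 2
```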
This article then uses the statistical software Stata to estimate the model; the regression results are shown in Table 2. Certified users have a significant effect on high-influence reposting behavior, indicating that in Microblog marketing, the more authenticated users repost, the more high-influence reposting behaviors may occur, which further enhances the marketing effect. Gender differences also significantly affect users' high-influence reposting: male users have a significant positive effect. At the level of Microblog content, the text sentiment of a marketing Microblog does not significantly affect users' high-impact forwarding behavior, whereas both the celebrity effect and lottery content do: the celebrity effect has an obvious positive effect on high-influence forwarding, and marketing Microblogs with lottery topics greatly promote it. At the level of Microblog text characteristics, URLs in the original blog post negatively affect users' high-influence reposting behavior, while the presence or absence of the @ logo and of pictures does not significantly affect it.
Table 2. Regression analysis results

High-impact forwarding    Coef.      p-value   Sig.
Verified                   2.595     0.00      ***
Gender                     1.495     0.00      ***
Emotion                   -0.177     0.974
Celebrity                 16.572     0.022     **
Lottery                   24.884     0.00      ***
URL                      -20.709     0.00      ***
@logo                     -2.340     0.630
Picture                    1.044     0.679
Constant                  -8.960     0.111

*** p < .01, ** p < .05, * p < .1
4 Conclusions and Recommendations
Through this research, the paper makes up for the insufficiency of previous studies on the factors affecting user reposting behavior and further studies the factors affecting users' high-influence reposting behavior. It provides new theoretical experience for enterprises conducting Microblog marketing, helps them focus on key information, and promotes good marketing effects. According to the research conclusions, the following suggestions are put forward for enterprises and marketers conducting Microblog marketing. In terms of user attributes, the more authenticated users there are among forwarding users, the more conducive the situation is to high-influence forwarding behavior; enterprises should therefore attract more authenticated users to forward and thus expand the effective forwarding volume. Gender has a certain effect on users' high-influence forwarding behavior, so marketing Microblogs should take users of different genders into account in order to obtain greater marketing effects. In terms of the text and content characteristics of marketing Microblogs, although the results of this article show that text emotions do not affect user behavior, Microblog marketing still needs to output positive information to meet users' emotional needs. Marketing Microblog posts with URLs may not be forwarded because they require users to perform additional operations, which can reduce users' desire to explore further; therefore, when companies and marketers conduct Microblog marketing, they should publish streamlined, novel and targeted blog posts. Companies should also regularly conduct interactive activities such as lottery draws in Microblog marketing, and appropriately cooperate with celebrity publicity to attract users, which will achieve better Microblog marketing effects.
References
1. Wang, C., Zhou, Z., Jin, X., et al.: The influence of affective cues on positive emotion in predicting instant information sharing on microblogs: gender as a moderator. Inf. Process. Manage. 53(3), 721–734 (2017)
2. Chen, C.C., Chang, Y.C.: What drives purchase intention on Airbnb? Perspectives of consumer reviews, information quality and media richness. Telematics Inform. 35, 1512–1523 (2018)
3. A method for calculating the influence of microblog users combining users' own factors and interactive behavior. Comput. Sci. 47(1), 96–101 (2020)
4. Tifferet, S.: Gender differences in privacy tendencies on social network sites: a meta-analysis. Comput. Hum. Behav. 93, 1–12 (2019)
5. Stockdale, L.A., et al.: Bored and online: reasons for using social media, problematic social networking site use, and behavioral outcomes across the transition from adolescence to emerging adulthood. J. Adolesc. 79–87 (2020)
6. Eom, S.J., Hwang, H., Kim, J.H.: Can social media increase government responsiveness? A case study of Seoul, Korea. Gov. Inf. Q. 35(1), 109–122 (2018)
7. Mao, Y.: Analysis of Weibo marketing strategy in the mobile Internet era. Natl. Econ. Circ. 36, 113–115 (2019). (in Chinese)
8. Shi, W.: Research on Weibo reposting behavior based on content analysis. Inf. Sci. 36(4), 27–31 (2018)
9. Liu, Z., Jansen, B.J.: Questioner or question: predicting the response rate in social question and answering on Sina Weibo. Inf. Process. Manage. 54, 159–174 (2018)
10. Deng, W.H., Zhang, Y.: Research on the influence of online review information content on phased usefulness evaluation. Inf. Theor. Pract. 41(08), 90–95+153 (2018). (in Chinese)
Analysis of the Impact of Big Data Technology on Corporate Profitability Changsheng Bao(B) School of Economics and Management, Shanghai University of Political Science and Law, Shanghai, China
Abstract. The profitability of Chinese enterprises is directly related to the adjustment of China's economic structure and the transformation of its development mode, and to the success of national innovation, transformation and upgrading. This article uses big data to analyze the current status and development trend of Chinese enterprises' profitability and the important factors that affect it. It empirically analyzes the profitability of 40 typical enterprises through big data and analyzes the factors that have a greater impact on profitability. The paper selects the indicators with the greatest impact on profitability and concludes that the factors affecting the profitability of Chinese enterprises are mainly the utilization rate of an enterprise's total assets, its growth potential, operating capacity and asset structure. Therefore, improving the profitability of Chinese enterprises requires adjusting the capital structure, increasing the utilization rate of total assets, and enhancing the capital operation ability of the enterprise. Keywords: Big data · Profitability · Factor analysis
1 Introduction
The profitability of Chinese enterprises is directly related to the adjustment of China's economic structure and the transformation of its development pattern, and affects the success of national innovation, transformation and upgrading, especially the success of China's supply-side reform, the promotion of national employment and the maintenance of social stability [1]. According to national statistics released in 2019, the number of enterprises above designated size in China has maintained slow growth since 2017, but the number of loss-making enterprises has increased year by year, from 42,494 in 2017 to 55,722 in 2019, and total profit has declined year by year, from 7,491.625 billion yuan in 2017 to 6,451.6 billion yuan in 2019. This shows that the profitability of Chinese companies is declining. Many factors affect the profitability of an enterprise, such as its internal structure, growth, operating capacity, and size [2]. Chinese scholars have studied the factors affecting the profitability of small and medium-sized enterprises with various methods, and the solutions they propose express their own
opinions. Some study non-material factors [3]; some analyze profitability from the perspective of capital operation and asset management [4]; some believe that financial methods and financial subsidies can improve the profitability of enterprises [5]; and some start from financial indicators of operating conditions and profitability and use regression analysis to analyze the data [6]. However, a complete and mature system has not yet been established, and these methods remain somewhat insufficient for research on the profitability of Chinese enterprises. In response, this paper analyzes the financial indicators of Chinese enterprises on the basis of big data analysis. Starting from the financial statements and the logical relationships between the financial items in them, a relevant indicator system is established to identify the main factors that affect profitability.
2 Indicator System Setting and Selection of Sample Enterprises
2.1 Indicator System Setting
The indicators of profitability mainly consist of six components: operating profit rate, cost and expense profit rate, surplus cash coverage multiple, return on total assets, return on equity, and return on capital [7]. In actual research, because each company's operating status differs, the research results also show certain differences. For listed companies, for example, profitability is judged by measures such as earnings per share, price-earnings ratio, dividend per share and net assets per share. Although different types of enterprises use different evaluation methods, gross sales margin, current ratio, net sales margin, return on total assets, total corporate assets, asset-liability ratio, total asset turnover rate, and operating income growth rate are still the most commonly used benchmarks [8]. Analyzing the factors affecting the profitability of an enterprise mainly starts from its internal environment: by analyzing its financial statements in combination with its own situation, the main factors affecting its profitability can be identified [9]. For example, by analyzing the growth rate of operating income, an enterprise's current growth potential can be known; if its growth potential is high, its profitability will be higher, its operating capability stronger, and its market competitiveness stronger. To understand whether an enterprise can obtain substantial operating income or control operating costs, the research can analyze gross profit margin and net profit margin, so as to cut in from the perspective of sales operations and save costs while increasing revenue. In this way, asset utilization can be improved and the development of the enterprise greatly benefits.
2.2 Introduction of Sample Enterprises
This paper selects 40 companies from various fields: Jiangsu Susheng Automation Equipment Co., Ltd, Zhejiang Daming New Material Joint Stock Co., Ltd, Hengcheng Tools, BOSTER Biological Technology Co., Ltd, Liaocheng Guangyuan Precision Machinery Manufacturing Co., Ltd, Dongguan Ruiyuan Instrument Co., Ltd, Weihai Boyang
Ultrasonic Instrument Co., Ltd, Harbin Dong ‘an Hydraulic Machinery Co., Ltd, Qingdao Fengguang Precision Machinery Co., Ltd, Golcom Energy Construction, Dalian Huayang Sealing Co., Ltd, China Geokon Instruments Co., Ltd, Impact Scientific Instruments, Sanlian Pump, Shandong Wantong Hydraulic Co., Ltd, Chengde Yingke Fine Chemical Co., Ltd, KYKY Technology Co., Ltd, Yangzhou Deyun Plastic Technology Co., Ltd, Hebei Shangzhen New Material Technology Co., Ltd, Huangshi Huibo Material Technology Co., Ltd, Shiny Materials Science & Technology Inc, Hangzhou Xianglong Drilling Equipment Technology Co., Ltd, Wuxi Juli Heavy Industry Co., Ltd, Xingtai Lantian Fine Chemical Co., Ltd, Suzhou Xianglou New Material Co. Ltd, Lanzhou Weite Welding Material Technology Co., Ltd, Hubei Heqiang Machinery Development Limited by Share Ltd, Sanying Precision Engineering, Zhuhai Longtec Co. Ltd, Sichuan Central Inspection Technology Co., Ltd, Wuxi Coal Mining Machinery, Shandong Dingsheng Electromechanical Equipment Inc, Shandong Sinolion Machinery Corp. Ltd, Wuhan Wanbang Laser Dimond Tools Co., Ltd, Nantong Gaoxin Antiwear Technology Co., Ltd, Zhangjiagang Tianle Rubber & Plastic Technology Co. Ltd, ZHEDA JINGYI, Dier Chemical Packing, Zhuhai Changxian New Material Technology Co., Ltd, Zhongchao New Material Corp. This paper will analyze the impact of various indicators on profitability of the above-mentioned 40 sample companies’ financial indicators in 2019.
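Before the factor analysis in Sect. 3, each sample company's indicator values are taken or derived from its 2019 annual report. The derivations discussed in Sect. 2.1 can be sketched as follows; the statement field names are hypothetical, and the formulas are the standard textbook definitions rather than the author's exact calculations.

```python
# Illustrative derivation of the profitability indicators from statement items
# (hypothetical field names for one company's annual-report figures).
def profitability_indicators(stmt: dict) -> dict:
    return {
        "gross_sales_margin": (stmt["revenue"] - stmt["cost_of_sales"]) / stmt["revenue"],
        "net_sales_margin": stmt["net_profit"] / stmt["revenue"],
        "return_on_total_assets": stmt["net_profit"] / stmt["total_assets"],
        "return_on_equity": stmt["net_profit"] / stmt["equity"],
        "asset_liability_ratio": stmt["total_liabilities"] / stmt["total_assets"],
        "total_asset_turnover": stmt["revenue"] / stmt["total_assets"],
        "operating_income_growth": stmt["revenue"] / stmt["revenue_prev_year"] - 1,
    }
```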
3 An Empirical Analysis of the Factors Influencing the Profitability of Chinese Enterprises
Taking the financial and accounting statements of the above 40 companies as sample data, a sample survey is conducted, a data model is developed based on the measurement indicators discussed in the previous section, and a comprehensive analysis of the profitability of Chinese companies is carried out to draw conclusions.
3.1 The Establishment of a Factor Analysis Model
Factor analysis analyzes the internal structure of the correlation matrix of the variables, selects a few controllable random variables, and then studies the relationships between these variables. In general, the analysis variables are grouped into categories so that variables with higher correlation fall into the same group and the correlation between variables in different groups is reduced; each group of variables can then be treated as an essential factor, a representative of a basic structure. The previous section established the indicator structure for analyzing the factors influencing the profitability of Chinese enterprises, and 9 financial indicators were extracted from the reports of 40 Chinese enterprises to study their profitability. A data model is constructed for the quantitative analysis mainly because the number of independent variables in these data may affect the correlation between indicators; the factor analysis model can reduce this interference to a certain extent and thereby screen out the indicators that correctly judge the profitability of small and medium-sized enterprises.
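The KMO test, Bartlett's test and the rotated factor extraction reported in Sect. 3.2 were run in SPSS. The following is a minimal, hedged sketch of the same pipeline using the open-source factor_analyzer package; it assumes the 40 x 9 standardized indicator matrix is available as a pandas DataFrame `df` (a hypothetical name), and it is an illustration rather than the author's actual procedure.

```python
# Sketch of the KMO/Bartlett checks and principal-component factor extraction
# with varimax rotation, using factor_analyzer instead of SPSS.
from factor_analyzer import FactorAnalyzer
from factor_analyzer.factor_analyzer import (calculate_bartlett_sphericity,
                                              calculate_kmo)

def run_factor_analysis(df):
    chi_square, p_value = calculate_bartlett_sphericity(df)   # Bartlett's test
    _, kmo_model = calculate_kmo(df)                          # overall KMO value
    print(f"KMO = {kmo_model:.3f}, Bartlett chi2 = {chi_square:.2f}, p = {p_value:.4f}")

    # Extract 3 factors (eigenvalues > 1) with varimax (orthogonal) rotation.
    fa = FactorAnalyzer(n_factors=3, rotation="varimax", method="principal")
    fa.fit(df)
    loadings = fa.loadings_        # rotated factor loading matrix (cf. Table 4)
    eigenvalues, _ = fa.get_eigenvalues()   # cf. Table 3
    return loadings, eigenvalues
```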
3.2 Data and the Processing
Forty Chinese companies are selected, and the nine financial indicators in the profitability indicator system are obtained or calculated from their 2019 annual reports and processed. SPSS is used for factor analysis of these data; the analysis process is as follows.
3.2.1 KMO Measure

Table 1. KMO and Bartlett measure

Kaiser-Meyer-Olkin measure of sampling adequacy        .598
Bartlett test of sphericity   Approximate chi-square   177.521
                              df                       36
                              Sig.                     .000
Source: Calculated based on survey data of sample companies
It is generally believed that if the KMO value is less than 0.5, the data are not suitable for factor analysis. The results in Table 1 show that the KMO value is 0.598, which is suitable for factor analysis, and the Bartlett test of sphericity is significant, indicating that the data can be subjected to factor analysis and that the validity test is passed.
3.2.2 Descriptive Statistics
The analyzed data are first standardized in SPSS, and the standardized data and descriptive statistics are then output via "Analyze" → "Descriptive Statistics" → "Descriptives", as shown in Table 2.
3.2.3 Factor Analysis
First, through the above modeling process, the correlation matrix of the 9 indicators is obtained. The 9 indicators show a certain degree of correlation, indicating that the data and information they contain overlap; therefore, the main influencing factors can be obtained through factor analysis for dimensionality reduction. Second, the main components are selected. In the common factor variance table, the communality of the variables is relatively high, indicating that most of the information in the variables is effective for factor analysis. Table 3 explains the total variance. It lists 9 components, 3 of whose eigenvalues exceed 1. The variance column is the percentage of each factor's eigenvalue in the total, and the cumulative column is the cumulative percentage of the variance of the factors relative to the total variance.
Table 2. Descriptive statistics

                               N    Mean value    Standard deviation   Minimum      Maximum
Liquidity ratio                40   68.183        14.218               39.258       96.625
Gross profit margin            40   30.938        22.934               -73.060      70.850
Sales margin                   40   -8.098        38.646               -165.440     35.820
Return on total assets         40   0.458         13.411               -53.424      23.522
Return on equity               40   5.316         29.695               -69.620      108.510
Asset-liability ratio          40   39.850        21.891               3.800        89.330
Total assets                   40   176,250,193   146,787,416          11,052,100   528,126,600
Turnover rate of total assets  40   69.458        45.480               10.200       197.109
Operating income growth rate   40   16.657        144.195              -67.820      887.988
Table 3. Total variance explained

Initial eigenvalues
Component   Total    % of variance   Cumulative %
1           3.348    37.205          37.205
2           1.821    20.238          57.443
3           1.199    13.325          70.767
4           .989     10.987          81.755
5           .677     7.525           89.279
6           .482     5.351           94.630
7           .247     2.748           97.378
8           .172     1.910           99.289
9           .064     .711            100.000

Extraction and rotation sums of squared loadings (first three components)
Component   Extraction: Total / % of variance / Cumulative %   Rotation: Total / % of variance / Cumulative %
1           3.348 / 37.205 / 37.205                            2.982 / 33.136 / 33.136
2           1.821 / 20.238 / 57.443                            2.163 / 24.033 / 57.170
3           1.199 / 13.325 / 70.767                            1.224 / 13.597 / 70.767
The first three common factors together explain 70.767% of the total variance; that is, the variance of these three common factors accounts for 70.767% of the variance of all factors, so the first three factors are extracted as the main factors. The principal component analysis method is used to calculate the initial factor loading matrix; the factors are then rotated with the varimax orthogonal rotation method, and Table 4 is obtained. It can be seen from the rotated factor loading matrix that the high-loading indicators on the first principal component include net sales margin, gross sales margin, return on total assets, and return on net assets, so it can be named the total asset profit factor. The high-loading indicators on the second principal component include asset-liability ratio, total corporate assets, total asset turnover, and operating income growth rate, so it can be named the operating profit factor.
Table 4. Rotated component matrix

                               Components
                               1        2        3
Liquidity ratio                -.057    .168     .905
Gross profit margin            .560     -.482    .449
Sales margin                   .927     .010     -.047
Return on total assets         .933     .112     -.063
Return on equity               .791     .227     .039
Asset-liability ratio          -.185    .676     .180
Total assets                   .150     .529     -.272
Turnover rate of total assets  .432     .566     .003
Operating income growth rate   .258     .748     .297

Extraction method: principal components. Rotation method: orthogonal rotation with Kaiser normalization.
The third principal component has a single high-loading indicator, the current ratio, so it can be named the asset structure profit factor.
3.2.4 Calculate Factor Score Coefficient Matrix
The regression method is used to obtain the factor score coefficient matrix, see Table 5.

Table 5. Component score coefficient matrix

                               Components
                               1        2        3
Liquidity ratio                -.062    .044     .741
Gross profit margin            .239     -.320    .380
Sales margin                   .333     -.089    -.063
Return on total assets         .325     -.039    -.082
Return on equity               .228     .175     -.015
Asset-liability ratio          -.141    .346     .120
Total assets                   .006     .260     -.255
Turnover rate of total assets  .096     .236     -.037
Operating income growth rate   .007     .330     .201
Extraction method: principal components. Rotation method: orthogonal rotation with Kaiser normalization. Source: calculated based on survey data of sample companies.

3.2.5 Calculating Factor Composite Scores

Z1 = -0.062X1 + 0.239X2 + 0.333X3 + 0.325X4 + 0.228X5 - 0.141X6 + 0.006X7 + 0.096X8 + 0.007X9
Z2 = 0.044X1 - 0.320X2 - 0.089X3 - 0.039X4 + 0.175X5 + 0.346X6 + 0.260X7 + 0.236X8 + 0.330X9
Z3 = 0.741X1 + 0.380X2 - 0.063X3 - 0.082X4 - 0.015X5 + 0.120X6 - 0.255X7 - 0.037X8 + 0.201X9

Among them, X1, X2, ..., X9 represent the nine financial indicators, and Z1, Z2, Z3 represent the scores of the principal components. Taking the contribution rate of each principal component as its weight, the weighted average of the principal component scores gives the comprehensive performance evaluation function of the sample companies:

Z = (33.136% · Z1 + 24.033% · Z2 + 13.597% · Z3) / 70.767%
Integrating the weights of the Z1, Z2, and Z3 components on the profitability of industrial enterprises, a comprehensive ranking of the factors affecting the profitability of Chinese enterprises is obtained. According to the above empirical analysis results, Dong'an Hydraulic Machinery has the highest profitability among the 40 Chinese companies.
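Reproducing the ranking in Table 6 only requires the standardized indicator matrix, the Table 5 coefficients and the variance weights; the following is a minimal sketch under the assumption that the indicator matrix X is already standardized and column-ordered X1-X9.

```python
# Sketch: factor scores Z1-Z3 from the Table 5 coefficients and the weighted
# composite score Z, as defined in Sect. 3.2.5.
import numpy as np

COEF = np.array([        # component score coefficient matrix (Table 5)
    [-0.062,  0.044,  0.741],   # X1 liquidity ratio
    [ 0.239, -0.320,  0.380],   # X2 gross profit margin
    [ 0.333, -0.089, -0.063],   # X3 sales margin
    [ 0.325, -0.039, -0.082],   # X4 return on total assets
    [ 0.228,  0.175, -0.015],   # X5 return on equity
    [-0.141,  0.346,  0.120],   # X6 asset-liability ratio
    [ 0.006,  0.260, -0.255],   # X7 total assets
    [ 0.096,  0.236, -0.037],   # X8 total asset turnover
    [ 0.007,  0.330,  0.201],   # X9 operating income growth rate
])
WEIGHTS = np.array([33.136, 24.033, 13.597]) / 70.767   # variance contributions

def composite_scores(X: np.ndarray) -> np.ndarray:
    Z = X @ COEF            # factor scores Z1, Z2, Z3 for every company
    return Z @ WEIGHTS      # weighted average = comprehensive score Z
```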
4 Conclusion and Suggestion
According to the factor model, there are three main factors affecting the profitability of Chinese companies, and the most important are the first two: the total asset profitability factor (Z1) and the operating profitability factor (Z2). If Chinese companies want to improve their profitability, they mainly need to make improvements along these two factors. The Z3 component also has a certain impact on profitability, but its influence is relatively weak and it can be used as a reference indicator. According to the factor model analysis, the financial indicators included in the Z1 component are net sales margin, gross sales margin, return on total assets, and return on net assets. The Z1 component occupies the most important part of the final model and is the most influential factor. These indicators mainly represent the ability of a company to obtain income from all of its assets. If a Chinese company wants to improve its profitability, the first thing to do is to study how to improve the utilization efficiency of its total assets. By analyzing the gross and net profit margins of sales, an enterprise can understand whether it has created enough sales revenue or has failed to control costs, so as to increase revenue from the sales side, save on the use of capital, improve asset utilization efficiency, and thereby increase its profitability.
Table 6. Comprehensive scores and rankings of factors

Company                                       Z1      Z2      Z3      Z       Rank
Dong'an hydraulic machinery                   0.97    4.35    2.06    2.33    1
Xianglong drilling equipment                  0.86    1.10    0.16    0.81    2
Impact scientific instruments                 0.80    0.65    0.60    0.71    3
Central inspection                            1.45    -1.03   1.71    0.66    4
Shiny materials                               0.30    0.97    0.12    0.50    5
Zhongchao new material                        -0.03   1.50    -0.02   0.49    6
Deyun plastic technology                      0.27    1.41    -0.76   0.46    7
Tianle rubber & plastic                       1.47    -0.33   -0.68   0.44    8
Guangyuan precision machinery manufacturing   0.24    0.56    0.41    0.38    9
Heqiang machinery                             0.49    -0.23   0.82    0.31    10
Sinolion machinery                            0.68    0.02    -0.21   0.28    11
Golcom                                        0.10    -0.27   1.11    0.17    12
Dingsheng electromechanical equipment         0.42    -0.15   0.03    0.15    13
Geokon instruments                            0.36    -0.35   0.46    0.14    14
Sanlian pump                                  0.23    0.42    -1.01   0.06    15
Daming new material                           0.32    -0.30   -0.04   0.04    16
Huibo material                                0.04    0.12    -0.16   0.03    17
Hengcheng tools                               0.38    -0.66   0.31    0.01    18
Fengguang precision machinery                 0.76    -0.67   -0.69   0.00    19
Wuxi coal mining machinery                    0.21    -0.42   -0.04   -0.05   20
KYKY                                          0.21    0.10    -0.94   -0.05   21
Changxian new material                        0.31    -0.26   -0.58   -0.05   22
Longtec                                       0.45    -0.67   -0.19   -0.06   23
Shangzhen new material                        0.17    -0.53   -0.05   -0.11   24
ZHEDA JINGYI                                  -0.08   0.89    -2.09   -0.14   25
Boyang ultrasonic instrument                  0.17    -1.26   0.93    -0.17   26
Xianglou new material                         0.00    -0.15   -0.68   -0.18   27
Weite welding material                        -0.06   -0.66   0.21    -0.21   28
Lantian fine chemical                         -0.21   0.08    -0.86   -0.24   29
Yingke fine chemical                          -0.43   -1.16   1.78    -0.26   30
Dier chemical packing                         0.14    -0.07   -1.63   -0.27   31
Wanbang laser dimond tools                    0.44    -0.61   -1.53   -0.29   32
Gaoxin antiwear                               0.59    -1.37   -0.70   -0.32   33
Wantong hydraulic                             -0.31   -0.19   -0.95   -0.39   34
Sanying precision engineering                 -1.30   -0.53   1.82    -0.44   35
BOSTER                                        -1.69   -0.15   1.53    -0.55   36
Xianglong drilling equipment                  -0.92   -0.94   0.44    -0.67   37
Huayang sealing                               -1.16   -0.39   -0.06   -0.69   38
Ruiyuan instrument                            -2.87   -0.09   0.94    -1.19   39
Susheng automation equipment                  -3.74   1.23    -1.56   -1.63   40
It can be seen from Table 6 that Tianle Rubber & Plastic has the highest Z1 score, which shows that it uses its total assets efficiently. The company mainly produces automobile-related rubber and plastic products. In recent years, the automobile industry has been one of the important industries in China, and the automobile parts industry has benefited from its rapid development and has also grown rapidly. The company continues to focus on its main business and has generated benefits through the research and development of new products and the opening up of new users, thereby improving its profitability. The Z2 component is the second most important indicator of this model. The financial indicators it contains include asset-liability ratio, total corporate assets, total asset turnover, and operating income growth rate, which shows that the profitability factors of Chinese companies are related to their business capabilities, and that strengthening an enterprise's business capacity can improve its profitability. At present, the growth potential, capital structure, and operating capabilities of Chinese enterprises have a great impact on their profitability. Good growth means that an enterprise has a higher level of profitability, more prominent business performance and a more obvious competitive advantage, and therefore good potential for sustainable development. An unreasonable capital structure and unbalanced corporate debt will severely restrict the improvement of profitability, so reasonable arrangements for corporate financing and liabilities are crucial. At the same time, good operating capability is a prerequisite for profitability: rapid turnover of enterprise assets, reasonable allocation of the asset structure, and high production efficiency are conducive to higher sales profits and enhanced debt solvency. It can be seen from Table 6 that Dong'an Hydraulic Machinery has the highest score on the Z2 component, which shows that Dong'an has achieved certain results in improving its own business capabilities, and this strongly promotes its profitability.
Combining Z1, Z2, and Z3 with their proportions in the analysis of corporate profitability, a comprehensive ranking of the profitability of the 40 Chinese companies is obtained. The research results show that Dong'an Hydraulic Machinery has the highest profitability among them. From its 2019 annual report, it can be seen that its current ratio, return on equity, debt-to-asset ratio, total asset turnover, and operating income growth rate in 2019 were the highest, so its asset structure, capital structure, operational capability and growth are the strongest among the 40 companies. Although its gross sales margin, net sales margin and return on total assets are not the highest, they are also higher than those of most of the other companies. Therefore, it is the most profitable of the 40 companies. According to the above research conclusions, improving the profitability of Chinese companies should focus on three aspects. The first is to increase the utilization rate of enterprises' total assets. When making investment decisions, enterprises should try to choose projects with less investment, quick results, and high returns to ensure the rapid turnover of funds. The financing of funds should also be based on demand: excessive or premature borrowing leaves funds idle, increases the interest burden, and wastes money. Enterprise management should also make effective overall plans in terms of product structure, quality, operation, and work efficiency, so as to achieve the best state of enterprise management and improve capital utilization. The second is to adjust the capital structure of enterprises. Chinese enterprises should adopt diversified financing methods and match them in a portfolio in order to balance the ratio between equity and debt financing. If the structure and combination of capital and corporate liabilities are unbalanced, the development of corporate profitability may be curbed; therefore, adjusting the structural relationship between capital and liabilities is crucial to improving profitability. When an enterprise chooses sources of funds to open up new markets or expand the scale of its operation and development, it should also consider its own business model and capital structure, otherwise the result is likely to backfire and cause its operating level to retreat. The third is to enhance enterprises' capital operation capabilities. Chinese enterprises can speed up the turnover of working capital by improving production technology and increasing labor productivity. The faster the capital turnover, the higher the efficiency of capital utilization, which means that enterprises can produce more products, obtain more income and more profit, and therefore increase profitability. Chinese enterprises can also adjust the allocation of the asset structure, for example the ratio of current assets to fixed assets, to improve profitability; an enterprise can allocate a larger share of capital to fixed assets with higher profitability, thereby increasing overall profitability.
References 1. Zhou, S.: Analysis and research of corporate profitability. Bus. Econ. 07, 45–46 (2013) 2. Cheng, P.: Several issues that should be paid attention to the analysis of enterprise profitability. Market Modernization. 03, 97 (2007)
3. Li, J.: Analysis of factors affecting corporate profitability. Bus. Econ. 07, 57–58 (2009) 4. Yang, X., Bing, H.: Analysis of factors affecting the profitability of small and medium enterprises. Technol. Dev. Enterp. 34(08), 102–104 (2015) 5. Sun, Y.: Analysis of factors affecting corporate profitability. J. Shanxi Univ. Finance Econ. 02, 62 (2011) 6. Guan, X.: Research on the Profitability of Chinese Steel Companies. Tianjin University, Tianjin (2012) 7. Ni, Y.: Enterprise profitability analysis. Co-Oper. Econ. Sci. 20, 41–42 (2007) 8. Hua, M.: Research on the Profitability of Listed Companies in China's Steel Industry. Hefei University of Technology, Hefei (2013) 9. Ren, X., Xie, Z., Shen, Y.: Analysis and Outlook: China's Small, Medium and Micro Enterprises Development and Survival Report, pp. 14–42. Economic Press China, Beijing (2017)
Cigarette Data Marketing Methods Based on Big Data Analysis Tinggui Li(B) Hunan Provincial Tobacco Company Huaihua City Company, Huaihua, Hunan, China
Abstract. The purpose of this paper is to study how to effectively collect, on the basis of existing data, the various kinds of consumption information that big data can detect, and to provide support for precision marketing in the tobacco industry's cigarette marketing. The paper studies cigarette data marketing methods and strategies based on big data analysis: it first elaborates the impact and value of big data on cigarette data, then analyzes the methods used in marketing strategies in the big data context, and finally analyzes the marketing strategy of the Huaihua company of Hunan tobacco company. The results show that the overall performance of Huaihua tobacco company's cigarette marketing strategy is good, with its brand marketing indicators remaining above 20%. Keywords: Strategic studies · Big data analysis · Cigarette data · Data marketing
1 Introduction
The tobacco industry is a special industry: it combines government and enterprise functions and operates as a monopoly. It can be said that the policy and legal environment have a vital impact on the tobacco industry, and as a special monopoly industry it is restricted by many policies and regulations. Based on extensive research on marketing theory and consumer behavior theory at home and abroad, this paper puts forward a marketing strategy suitable for the tobacco industry based on big data analysis and carries out in-depth research combined with practical work. Big data technology is an important basis for current cigarette data marketing, and relevant scholars have done a lot of research in this direction. With the rapid development of science and technology, computer technology has become an indispensable resource in people's daily life and has changed the way people work and live. With the advent of the era of big data, traditional marketing can no longer meet market demand; with the help of big data, enterprises can obtain accurate market and management information, derive important data analyses, and adjust their marketing strategies accordingly [1]. In order to evaluate the operation status of the cigarette market scientifically and effectively, Y. Xing, X. Huang, X. Dong and D. Wang established an intelligent evaluation model of the cigarette market operation status using big data technology and machine learning algorithms.
The development of that model can be divided into four stages: data cleaning, standardization and processing; selection of characteristic values by principal component analysis; modeling and training on a Spark-based distributed parallel computing architecture; and verification and optimization of the trained model. The model was applied to the "Furongwang (hard)" and "Baisha (hetianxia)" brands produced by Hunan Tobacco Co., Ltd. The results show that the model can accurately and timely reflect the market situation and development trend of a single cigarette brand: the prediction accuracy for the above two brands is more than 90%, and the predicted values are basically consistent with the actual values. This method provides technical support for cigarette market trend prediction and marketing decisions [2]. This paper takes Huaihua tobacco company as the research object, analyzes its tobacco marketing data in 2020, briefly reviews cigarette data marketing methods on the basis of big data, and gives the relevant analysis. The paper gives full play to the advantages of artificial intelligence technology in combination with big data, so as to improve the effect of cigarette data marketing [3].
2 Research on Cigarette Data Marketing Strategy Based on Big Data Analysis
2.1 Value of Big Data to Cigarette Data Marketing
The traditional marketing concept is a static marketing mode: consumers are grouped according to basic attributes such as gender, age, brand preference and purchase mode, and corresponding marketing strategies are formulated for each type. Big data marketing, by contrast, is characterized by dynamic marketing, which saves marketing cost and improves the targeting of marketing. Big data has brought new changes to the cigarette data market, mainly reflected in resource concentration, product innovation and effect inspection [4].
2.2 Development of Cigarette Data Marketing
On November 13, 2009, the "China" brand precision marketing working conference was held in Shanghai, marking the beginning of precision marketing in China's tobacco industry. Since its implementation, cigarette precision marketing has effectively promoted the growth of cigarette sales in the industry [5, 6].
2.3 The Marketing Change of Big Data on Cigarette Data
Big data is a powerful assistant for enterprise marketing activities: it helps enterprises analyze the needs of potential customers, saves marketing costs, and points out the marketing direction. In other words, the ability to master and track big data on the consumer market allows a company to reflect the market situation immediately, follow the latest behavior of consumers, provide them with up-to-date consumption suggestions and optimize the consumption experience.
The company can calibrate itself against the industry through its own data, adjust its marketing strategy dynamically in real time to ensure the best marketing effect, identify its development goals and formulate corresponding marketing strategies [7].
2.4 Cigarette Marketing Method Based on Big Data
2.4.1 FHIMA Algorithm
The implementation of the FHIMA algorithm is divided into two parts. First, the initial database is scanned and impossible items are deleted to reduce the search space, based on the property that not every superset of an item is frequent or of high utility. Second, a prefix-range algorithm is used to reduce the generation of infrequent candidates, and upper bounds on the utility value and the quality value are designed to further reduce the search space and improve the efficiency of the algorithm. The bound used to prune the search space is shown in formula (1):

\sum_{w \in YS} \left[ A(G, t) + riu(w, op) \right]   (1)

Assuming that the root node is x, the whole subtree is constrained by the upper bound of x, that is fre(F)/|Y|. According to the anti-monotone property, the upper bound of the utility value is calculated as in Eq. (1): the utility value of w is added to the utility values of all elements in the extension set of w to obtain the upper bound of the relative utility value. The quality value is the weighted sum of support and relative utility, so from the upper bounds of support and relative utility a relatively loose upper bound on the quality value is obtained:

fre(F) + \sum_{X \in T} \left[ A(F, X) + riu(w, op) \right]   (2)
The upper bound calculated by Eq. (2) is relatively loose, that is, many candidate nodes remain during mining, so a formula for a tighter (compact) upper bound is also needed.
2.4.2 MIFS Algorithm
The feature-subset selection algorithm based on mutual information is called the MIFS algorithm [8, 9]. In feature selection, an evaluation function based on mutual information is widely used to describe the information content of a feature. Mutual information based on information entropy is a nonparametric, nonlinear criterion for evaluating feature relevance. For a candidate feature h ∈ H, the criterion function is Eq. (3):

H(h) = I(h, O) - B \sum_{s \in K} I(h, s)   (3)

where H(h) evaluates the candidate feature h given the label set O and the set K of already selected features: the first term measures the correlation between h and O, and B is a factor that adjusts the weight of the redundancy terms I(h, s).
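A minimal sketch of an MIFS-style greedy selection built on scikit-learn's mutual_info_score is given below. It assumes discrete-valued features and follows the usual MIFS formulation (relevance minus weighted redundancy) rather than the exact notation of the original text.

```python
# Sketch of MIFS-style feature selection: greedily pick features that maximize
# mutual information with the label minus beta times redundancy with the
# already selected features. Assumes discrete (categorical/binned) columns.
import numpy as np
from sklearn.metrics import mutual_info_score

def mifs_select(X: np.ndarray, y: np.ndarray, k: int, beta: float = 0.5):
    """Return the indices of k selected columns of X for predicting y."""
    remaining = list(range(X.shape[1]))
    selected = []
    while remaining and len(selected) < k:
        def score(j):
            relevance = mutual_info_score(y, X[:, j])
            redundancy = sum(mutual_info_score(X[:, j], X[:, s]) for s in selected)
            return relevance - beta * redundancy
        best = max(remaining, key=score)
        selected.append(best)
        remaining.remove(best)
    return selected
```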
3 Research on the Strategy Direction of Cigarette Data Marketing Based on Big Data Analysis
3.1 Overview of the Marketing Model of Cigarette Companies Under Big Data
The marketing center of the Huaihua company of Hunan tobacco company is responsible for the sales and service management of cigarettes within Huaihua City. With changes in the cigarette management mode, the level of control over cigarette supply and sales is also changing [10, 11]. At present, there is a good balance between market demand and operability, but the Huaihua market covers a large area, including both urban and rural markets: the urban market is socially and economically well developed, while the rural market develops relatively slowly. For such a complex and changeable regional market environment, it is necessary to analyze the demand scale dynamically, grasp the consumption structure, brand preference and consumption laws of the regional markets, ensure the scientific distribution of cigarette products, and meet the needs of consumers at all levels of the market.
3.2 Cigarette Consumer Population Positioning Under Big Data
The majority of smokers are between 18 and 30 years old; these are young people who have just entered society, and their consumption potential will increase with age and working time [12, 13]. Most consumers with more than 10 years of smoking history are between 31 and 50 years old, and their smoking age keeps increasing. Consumers aged 31-40 account for 36.89%; they are relatively active in work, economic activity, social interaction and health status and are currently the largest consumer group in the cigarette market, while consumers aged 18-30 account for 31.91%.
4 Current Situation of Cigarette Data Marketing Strategy Based on Big Data Analysis
4.1 The Marketing Strategy Analysis of the Cigarette Company
4.1.1 Brand Strategy Analysis
The goal of the brand strategy is to better realize the sales target of cigarettes, enrich the product specification range of the various cigarette brands, and guide the cigarette consumption market. The first step of the brand marketing strategy is to determine an accurate scope. From the 2020 cigarette brand catalog of Huaihua tobacco company, 10 specifications were selected to determine the scope of precision marketing brands: Baisha (hetianxia), Furongwang (hard), Yunyan (soft big heavy nine), Yuxi (soft), Liqun (new version), Zhonghua (shuangzhongzhi), Zhonghua (hard), Guiyan (kuayue), Nanjing (xuanhemen) and Huanghelou (hard wonder). In the brand marketing operation, the scope of cigarette brand precision marketing is then gradually expanded on this basis. The following table analyzes brand specifications, quantity, price and inventory in 2020 (Table 1). Through the analysis of commercial sales, commercial inventory, social inventory, market price and retail sales, the annual health status of the brands was evaluated (Fig. 1);
Table 1. Analysis of brand specifications

Origin (province)   Brand (specification)           Commercial sales in 2020 (box)   Retail price (yuan/package)   Commercial inventory in 2020 (box)
Hunan               White Sand (Tianxia)            1325                             100                           1
Hunan               Hibiscus King (Hard)            28989                            25                            1
Yunnan              Cloud Smoke (soft nine)         135                              100                           0
Yunnan              Yuxi (Soft)                     656                              23                            1
Zhejiang            Liqun (New Edition)             6093                             15                            6
Shanghai            China (Double Central Branch)   275                              55                            11
Shanghai            Zhonghua (Hard)                 1339                             45                            11
Guizhou             Guiyan Kuayue                   956                              25                            6
Jiangsu             Nanjing (xuanhemen)             1434                             17                            118
Hubei               Huanghelou (hard wonder)        450                              30                            0
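The health evaluation summarized in Fig. 1 combines sales, inventory and price information. The snippet below is a purely illustrative proxy computed from the Table 1 columns, not the company's actual evaluation model; the one-month inventory threshold is an assumption.

```python
# Illustrative sketch: a simple sales-to-inventory health proxy per brand from
# the Table 1 columns (hypothetical simplification of the evaluation in the text).
import pandas as pd

brands = pd.DataFrame({
    "brand": ["White Sand (Tianxia)", "Hibiscus King (Hard)", "Liqun (New Edition)"],
    "sales_2020_box": [1325, 28989, 6093],
    "inventory_2020_box": [1, 1, 6],
})

# Months of inventory on hand: commercial inventory relative to average monthly sales.
brands["months_of_inventory"] = brands["inventory_2020_box"] / (brands["sales_2020_box"] / 12)
# Flag a brand as "healthy" here when stock covers less than one month of sales.
brands["healthy"] = brands["months_of_inventory"] < 1.0
print(brands)
```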
Fig. 1. Analysis of marketing cigarette brands (coverage percentage by trade name)
From Fig. 1, it can be concluded that the brand specifications of the Huaihua company in 2020 remained basically good; moreover, in terms of brand coverage, brand
repurchase rate and sales growth rate, the overall annual brand marketing stays above 20% and is on the rise.
4.1.2 The Strategic Analysis of the Consumer Groups
The consumption trend of cigarettes can be analyzed according to consumers' smoking amount in 2020, as shown in Fig. 2:
Fig. 2. Consumption groups by smoking amount (percentage of consumers smoking 1-5, 6-10, 10-20 and more than 20 cigarettes, grouped by smoking age of 1 year, 2-5 years, 6-10 years and over 10 years)
By comparing the smoking amounts of consumers with different smoking ages, it can be seen that consumers with a smoking age of less than one year mostly smoke up to five cigarettes; consumers with a smoking age of 2-5 years mostly smoke 6-10 cigarettes; consumers with a smoking age of 6-10 years mostly smoke 10-20 cigarettes; and consumers who have smoked for more than 10 years mostly smoke more than 20 cigarettes. Combined with the above analysis, it can be concluded that the brand marketing strategy in the Huaihua market is basically mature, and that the tobacco company's brand acquisition and supply strategy in this market basically meets market demand.
5 Conclusion
Based on big data analysis, this paper studies the methods and strategies of cigarette data marketing. It first expounds the influence and value of big data for cigarette data, then analyzes the marketing methods used in the big data environment, and finally analyzes the marketing strategy of Huaihua Tobacco Company. The results show that the overall cigarette marketing strategy of Huaihua Tobacco Company is good, with brand marketing remaining above 20%.
References 1. Zhao, S., Ma, J.: Research on precision marketing data source system based on big data. Int. J. Adv. Med. Commun. 7(2), 93–100 (2017)
2. Xing, Y., Huang, X., Dong, X., et al.: Research and application of intelligent assessment model for operation status of cigarette market. Tob. Sci. Technol. 51(7), 96–102 (2018) 3. Liu, Y.: Research on the marketing strategy of rural characteristic tourism based on the analysis of big data. J. Phys. Conf. Ser. 1744(4), 042081 (2021) 4. Du, L., Yang, X.: Research on "Marathon + Tourism" industry integration strategy based on big data analysis from the perspective of global tourism. J. Phys. Conf. Ser. 1648(2), 022148 (2020) 5. Tawalbeh, L.A., Mehmood, R., Benkhelifa, E., et al.: Mobile cloud computing model and big data analysis for healthcare applications. IEEE Access 4(99), 6171–6180 (2017) 6. Bo, T., Zhen, C., Hefferman, G., et al.: Incorporating intelligence in fog computing for big data analysis in smart cities. IEEE Trans. Industr. Inf. 13(5), 2140–2150 (2017) 7. Kamilaris, A., Kartakoullis, A., Prenafeta-Boldú, F.X.: A review on the practice of big data analysis in agriculture. Comput. Electr. Agric. 143, 23–37 (2017) 8. Bostean, G., Crespi, C.M., Vorapharuek, P., et al.: E-cigarette specialty retailers: data to assess the association between retail environment and student e-cigarette use. Data Brief 11, 32–38 (2017) 9. Liu, M., Bin, L.I., Yin, D., et al.: Temperature data pretreatment of cigarette static burning by parallel displacement of Fermat point. Tob. Sci. Technol. 50(3), 73–79 (2017) 10. Fearon, I.M., Eldridge, A., Gale, N., et al.: E-cigarette nicotine delivery: data and learnings from pharmacokinetic studies. Am. J. Health Behav. 41(1), 16–32 (2017) 11. Drovandi, A., Teague, P.A., Glass, B., et al.: A systematic review of smoker and non-smoker perceptions of visually unappealing cigarette sticks. Tob. Induc. Dis. 16(January), 1–11 (2018) 12. Jawad, M., Lee, J.T., Glantz, S., et al.: Price elasticity of demand of non-cigarette tobacco products: a systematic review and meta-analysis. Tob. Control 27(6), 689–695 (2018) 13. Mitik, M., Korkmaz, O., Karagoz, P., Toroslu, I.H., Yucel, F.: Data mining approach for direct marketing of banking products with profit/cost analysis. Rev. Socionetwork Strat. 11(1), 17–31 (2017). https://doi.org/10.1007/s12626-017-0002-5
Development of an Information Platform for Integration of Industry-Education Based on Big Data Analysis Technology Songfei Li, Shuang Liang(B) , and Xinyu Cao School of Economics and Management, Shenyang Institute of Technology, Shenyang 113122, Liaoning, China
Abstract. The integration of industry-education is an important link for higher education to adapt to the development of the times and improve the quality of talent training. Vigorously developing the integration of industry-education is an inevitable trend of higher education and fundamental to the development of the times. In developing the integration of industry-education, how to standardize its process, objectively evaluate its effectiveness, and visualize the data analysis is an urgent problem to be solved. This paper builds an industry-education integration information platform based on big data analysis technology, and uses information technology to realize agreement signing, process management, and evaluation of results for industry-education integration, together with big data analysis and statistics, providing a solution for the informatization development of industry-education integration. Keywords: Big data · Integration of industry-education · Information platform
1 Introduction
Accelerating the quality of talent training is an important instruction of the country to higher education, and whether talent training is closely integrated with the country's development situation is an important criterion for assessing the quality of higher education teaching. Deepening the connotation of talent training is an urgent task for higher education [1] and is of great significance for cultivating new drivers of economic development. In the process of cultivating talents, schools have gradually realized the importance of school-enterprise cooperation and the integration of industry-education, and vigorously promote this integration to ensure that graduates can meet the needs of social development. At present, the integration of industry and education in most schools is mainly handled offline [2–5], and the process is not transparent. Schools have not yet formed a portrait of industry-education integration at the whole-school level and the enterprise level in terms of talent training, teacher construction, and scientific research cooperation [6], and a scientific evaluation of its effectiveness is lacking. Based on this, it is imperative
to build an industry-education integration platform based on big data and informatization technology. Colleges and universities can use the data assets accumulated in the platform to construct portraits of industry-education integration for the whole school and for each enterprise, so that the integration process becomes standardized and transparent, data analysis is visualized, and effectiveness evaluation is comprehensive.
2 Design Ideas of the Integration of Industry-Education Information Platform Through the study of national policies and research on the integration of industry-education in schools, the integration of industry-education should meet three core requirements: the signing of agreements, process management, and effectiveness analysis. The functional architecture of the integration of industry-education information platform is shown in Fig. 1.
[Figure 1 depicts a layered architecture: an application layer (green agriculture, telemedicine, smart home, smart transportation, etc.), a cloud computing platform and network layer (mobile communication network, Internet, network management center), and a perception layer (sensors, GPS, cameras, RFID readers, intelligent terminals).]
Fig. 1. The functional architecture of the integration of industry-education information platform
Taking into account that the system needs to meet different user permissions, different data permissions, good scalability, dynamic configuration, and global connectivity in the school, the system adopts a microservice architecture with separation of front and back ends, and supports both Web and mobile access. In the business application layer, the agreement management service mainly provides template type configuration for school-enterprise industry-education agreements, such as the school-enterprise cooperation framework agreement and the school-enterprise order training agreement; it supports the online signing of various industry-education agreements through
the process engine, supports online editing of agreements, provides renewal reminders for signed agreements, and supports online query of agreements, with authorized users having the right to download them. The industry-education integration process management service covers the various levels of cooperative business in depth, including 10 dimensions: joint construction of industry colleges, joint construction of teaching resources, construction of training/experiment/innovation bases, collaborative education, dual-teacher training, scientific research cooperation, student employment, scholarship/bursary support, student business practice, and social services. It provides data import and export services at each level of cooperation and co-construction, and supports the addition, deletion, modification, and multi-level review of industry-education performance records. The process management of industry-education integration standardizes the integration process and makes it transparent, and accumulates industry-education integration data assets that can support subsequent big data analysis. Industry-education integration evaluation management supports different schools in setting up evaluation models according to their own needs. It can quickly build in-depth cooperative enterprise evaluation models that meet school requirements and quantify the integration of industry-education in each college, department, and major. The evaluation model realizes scientific evaluation based on big data analysis and statistics. The big data service can provide the effect portrait of the school's integration of industry-education and the portrait of each enterprise's integration of industry-education, supports two-level drill-down analysis of the data, and supports viewing permissions for portraits at different levels. Big data services can show the development of the school's integration of industry-education at all levels, and provide data-based decision support for the healthy and high-quality development of the integration of industry-education. The process center service supports online setting of processes, configuration of process attributes, and services such as reminding and copying, so that the management of industry-education business can be standardized.
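The paragraph above enumerates the ten cooperation dimensions tracked by the process management service and the review workflow attached to each record. As a rough illustration of how such records might be structured for later statistics, a minimal Python sketch follows; the class, field names, and example values are hypothetical and are not taken from the platform described in the paper.

```python
from dataclasses import dataclass
from datetime import date

# The ten cooperation dimensions named in the text, used here as a fixed vocabulary.
DIMENSIONS = [
    "industry_college", "teaching_resources", "training_base", "collaborative_education",
    "dual_teacher_training", "research_cooperation", "student_employment",
    "scholarship_support", "business_practice", "social_service",
]

@dataclass
class CooperationRecord:
    """One reviewed item of school-enterprise cooperation (hypothetical schema)."""
    enterprise: str       # cooperating enterprise
    college: str          # secondary college responsible for the record
    dimension: str        # one of DIMENSIONS
    amount: float         # funding involved, if any (CNY)
    signed_on: date       # date the underlying agreement was signed
    review_passed: bool   # result of the multi-level review described in the text

# Example record, as it might later be exported for big data analysis.
record = CooperationRecord("Example Co., Ltd.", "School of Economics and Management",
                           "research_cooperation", 150000.0, date(2021, 3, 15), True)
print(record)
```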
3 Performance Evaluation of the Integration of Industry-Education and Big Data Analysis 3.1 Assessment of the Effectiveness of Integration of Industry-Education The performance evaluation of industry-education integration requires the establishment of an evaluation system. Using the evaluation model service provided by the system, the school can construct, according to its actual needs, the annual industry-education integration effectiveness evaluation of the secondary colleges, the industry-education integration evaluation of each major, and the industry-education integration evaluation of each enterprise. The core process of the evaluation model is shown in Fig. 2.
[Figure 2 outlines the evaluation model process: set conditions, set indicators, set fields and field conditions, manage rules and variables, set rules, and run the rule/algorithm calculation.]
Fig. 2. Production-education integration evaluation model process
This platform has prefabricated data embedding points for effectiveness evaluation in the 10 dimensions of industry-education integration management, such as the amount of money received in scientific research cooperation and the level of the research subject. According to the results, it is possible to clarify the
effectiveness of the integration of industry-education in each secondary college and each major, and to clarify the depth of each enterprise's integration of industry-education, such as general cooperation, close cooperation, and in-depth cooperation. Through the evaluation of effectiveness, a benchmark can be set so that the integration of industry-education develops soundly. 3.2 Big Data Analysis of Integration of Industry-Education The big data statistics of the integration of industry-education mainly cover two levels. First, from the perspective of the school, analyze the number of school-enterprise cooperations, the number of industrial colleges, the number of new partner companies in the year, the types of companies, their geographical distribution, and their industries; drill-down analysis can produce the corresponding statistics for each secondary college and each major. Second, build a portrait of each enterprise's integration of industry-education. When constructing the corporate portrait, the basic information of the company, such as its name and date of establishment, is embedded in the agreement signing management service; the company name, cooperation time, and related content are prefabricated in the management of the integration of industry-education; and the evaluation batches of enterprises are prefabricated in the evaluation module. Through the above-mentioned data source embedding points and evaluation model calculation, big data technology can, through collection, cleaning, processing, and analysis, finally form a multi-dimensional portrait [7]. This company portrait can clearly show the basic information of the company, such as the start time of school-enterprise cooperation, the school-enterprise cooperation items completed each year, and the annual school-enterprise cooperation evaluation
grades. Through enterprise cooperation portraits, the direction, key points and shortcomings of school-enterprise cooperation can be fully and objectively displayed, laying the foundation for subsequent development.
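The portrait and cooperation-depth grading described above (general, close, and in-depth cooperation) can be thought of as a weighted aggregation over the embedded data points. The sketch below only illustrates that idea; the dimension weights, thresholds, and field names are assumptions for demonstration and are not the paper's evaluation model.

```python
from collections import defaultdict

# Illustrative weights per cooperation dimension; the paper's evaluation model is
# configurable per school, so these numbers are assumptions, not its values.
WEIGHTS = {"research_cooperation": 0.3, "collaborative_education": 0.25,
           "student_employment": 0.2, "teaching_resources": 0.15, "social_service": 0.1}

def depth_label(score: float) -> str:
    """Map a weighted score to the cooperation-depth labels mentioned in the text
    (thresholds are illustrative)."""
    if score >= 0.7:
        return "in-depth cooperation"
    if score >= 0.4:
        return "close cooperation"
    return "general cooperation"

def enterprise_portrait(records):
    """records: iterable of (enterprise, dimension, normalized_value in [0, 1])."""
    per_enterprise = defaultdict(float)
    for enterprise, dimension, value in records:
        per_enterprise[enterprise] += WEIGHTS.get(dimension, 0.0) * value
    return {e: (round(s, 3), depth_label(s)) for e, s in per_enterprise.items()}

print(enterprise_portrait([
    ("Example Co., Ltd.", "research_cooperation", 0.9),
    ("Example Co., Ltd.", "student_employment", 0.6),
    ("Other Co., Ltd.", "teaching_resources", 0.5),
]))
```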
4 Application Value of the Integration of Industry-Education Information Platform In terms of technology, cloud computing and big data professional groups are gradually emerging in schools, and the emerging technologies applied in schools are constantly being improved and upgraded [8]. In terms of concept, the challenge of applying big data technology in colleges and universities is a transformation of thinking, involving both big data thinking and big data concepts. The development of big data technology in universities must combine the three elements of data, technology, and thinking [9]. The development of big data in university education management depends on the expansion of big data resources, the development of big data technology, and the formation of big data thinking and concepts [10]. Therefore, establishing the concepts of data openness, data sharing, cross-boundary data use, and data cooperation is both the prerequisite for and the difficulty of the healthy development of big data education management in Chinese universities. The application value of the industry-education integration information platform is as follows. First, a standardized agreement process and standard templates for various types of agreements have been formulated, which is conducive to standardized management of the signing of industry-education integration agreements. Second, in process management, the efficiency of data collection and data review has been improved, and standardized, usable data assets have been formed. Third, the one-click quantitative scientific evaluation model provides accurate evaluation services for schools, school-enterprise cooperation offices, and secondary colleges, so that each department can fully understand the effectiveness of its integration of industry-education. Fourth, big data statistical services can provide multi-dimensional analysis services and portraits at different levels, accurately evaluate the development results of industry-education cooperation, and help departments optimize their own integration of industry-education.
5 Conclusion The information platform for the integration of industry-education based on big data technology uses information technology to fully realize the online processing, circulation, and review of what was previously offline industry-education integration business. While accumulating performance data, the dynamic industry-education integration evaluation model and big data statistical analysis make the integration data valuable, and big data analysis will further enable the high-quality development of the integration of industry-education. For schools with an existing construction foundation, part of the industry-education integration data may already have been accumulated in various systems; such data can be connected through the data center to complete the collection of industry-education integration data.
References 1. Hadiana, A., Ginanjar, A.: Designing interface of mobile parenting information system based on users’ perception using Kansei engineering. J. Data Sci. Appl. 1(1), 10–19 (2018) 2. Wang, Y., Min, S., Wang, X., et al.: Mobile-edge computing: partial computation offloading using dynamic voltage scaling. IEEE Trans. Commun. 64(10), 4268–4282 (2016) 3. Sabella, D., Vaillant, A., et al.: Mobile-edge computing architecture: the role of MEC in the Internet of Things. IEEE Consum. Electr. Mag. 5(4), 84–91 (2016) 4. Mao, Y., Zhang, J., Song, S.H., et al.: Stochastic joint radio and computational resource management for multi-user mobile-edge computing systems. IEEE Trans. Wireless Commun. 16(9), 5994–6009 (2017) 5. Kazimierski, W., Wlodarczyk-Sielicka, M.: Technology of spatial data geometrical simplification in maritime mobile information system for coastal waters. Pol. Marit. Res. 23(3), 3–12 (2016) 6. Shi, W., Jie, C., Quan, Z., et al.: Edge computing: vision and challenges. Internet of Things J. IEEE 3(5), 637–646 (2016) 7. Ke, Z., Mao, Y., Leng, S., et al.: Mobile-edge computing for vehicular networks: a promising network paradigm with predictive off-loading. IEEE Veh. Technol. Mag. 12(2), 36–44 (2017) 8. Ananthanarayanan, G., Bahl, P., Bodík, P., et al.: Real-time video analytics: the killer app for edge computing. Computer 50(10), 58–67 (2017) 9. Liu, G., Fei, S., Yan, Z., et al.: An empirical study on response to online customer reviews and e-commerce sales: from the mobile information system perspective. Mob. Inf. Syst. 2020(83), 1–12 (2020) 10. Ahmed, E., Ahmed, A., Yaqoob, I., et al.: Bringing computation closer toward the user network: is edge computing the solution? IEEE Commun. Mag. 55(11), 138–144 (2017)
Reform of Student Information Management Thinking and Methods Supported by Big Data Technology Zhentao Zhao(B) Liaoning Jianzhu Vocational College, Liaoyang, Liaoning, China
Abstract. With the reform and merger of colleges and universities and their development toward large scale and comprehensiveness, student information management has become basic management content for every school. Traditional student information management methods can no longer meet the increasing needs of students. There are many mature general-purpose systems on the market, but each school has its own unique management style. Therefore, designing and completing student information and curriculum management according to the characteristics of the school is an urgent problem. The purpose of this article is to study the reform of student information management thinking and methods supported by big data technology. This article first summarizes the basic theories of big data technology, then extends to its core technologies, and, in view of the problems and shortcomings of contemporary college student information management methods in China, uses big data technology to reform student information management methods. This article systematically expounds the database design, requirement design, and function design of the student information management system under big data technology. Experimental research on the theme of this article was carried out through questionnaire surveys, field surveys, and other research methods. The research shows that, compared with traditional student information management thinking and methods, the student information management system based on big data technology has better performance and has been recognized by most teachers and students, which fully reflects the feasibility of the system designed in this article. Keywords: Big data technology · Student information management · System design · Change research
1 Introduction With the merger and expansion of colleges and universities, the number of students in various colleges and universities has greatly increased. Traditional student management, teaching management, and employment management have encountered new problems and challenges [1, 2]. The student information management system is an auxiliary management software developed for a large number of school business processing tasks, © The Author(s), under exclusive license to Springer Nature Singapore Pte Ltd. 2022 B. J. Jansen et al. (Eds.): International Conference on Cognitive based Information Processing and Applications (CIPA 2021), LNDECT 84, pp. 451–459, 2022. https://doi.org/10.1007/978-981-16-5857-0_57
and is mainly used for the management of school student-related information [3, 4]. The overall task is to realize the systematization, scientification, standardization, and automation of student information management [5, 6]. Foreign universities generally have advanced technical means to develop systems and provide corresponding services and technical support [7, 8]. The informatization construction of domestic colleges and universities started relatively late, but in recent years, in the process of gradually realizing informatization, colleges and universities have continuously developed and implemented various teaching and daily management information systems, forming informatization systems of a certain scale [9, 10]. The current domestic research, however, cannot fully meet all the needs of users. Therefore, student daily management software should fully realize the security and sharing of students' daily information, so that traditional student daily management work develops in the direction of digitization and intelligence, laying a good foundation for the further realization of a complete student information system [11, 12]. The purpose of this paper is to improve the information management capability for college students and to study the reform of student information management thinking and methods supported by big data technology. Traditional student information management is compared with the university student information management system based on big data technology to analyze the feasibility of the system studied in this article.
2 Applied Research on the Reform of Student Information Management Thinking and Methods Supported by Big Data Technology 2.1 Big Data Technology In the context of the information age, with the rise of new media such as digital terminals, cloud services, Weibo, and WeChat, information data is also showing an exponential growth trend. At present, there is no unified definition of big data, but the academic circles basically believe that it mainly refers to software and hardware tools, and data related to the perception, collection, processing, management, and service of the assembly. It has the characteristics of low value density, diversity, scale and high speed. Can be divided into structured data, semi-structured data and unstructured data. 2.2 Design and Analysis of Student Information Management System Based on Big Data Technology (1) Analysis of overall user needs The functions that need to be completed are mainly reflected in: 1) System user management and authority Different types of users are required to log in, and log in to different operation pages according to different permissions. Students log in initially according to their student ID, and teachers and administrators also deal with it accordingly.
2) Student status information Able to enter, modify, and delete basic information of students, and students themselves can also query and modify secondary information. 3) Management of course content You can add student information according to the student’s course selection, and associate the course with the student. 4) Performance management You can enter the student scores of different courses, and students can also check their own scores for each semester and previous semesters. 5) Other related content Course-related, grade-related information update, background information management, etc. Student management is mainly embodied in the management of student enrollment information and course scores after enrollment. The main function is to complete data storage, retrieval, and entry. (2) Database design The operation of data mainly includes query and maintenance. Compared with the user, the query data can be searched for a certain keyword to obtain the desired result, and the data can also be statistically sorted to meet the user’s own needs. Maintaining data refers to deleting unnecessary data, adding some new data, modifying the changed data, and then improving the database. The main purpose of controlling data is to ensure the safety and integrity of the data. The main operations include data storage control and data concurrency control. Commonly used relational operations are combined operations based on data relationships and conditions: including query operations, modification, deletion, and addition. The diversity of queries and the ability to express conditions are the most important part. (3) Modular design 1) Authority management subsystem The system is divided into three types of user and authority interfaces: super management user interface; ordinary administrator interface; student user interface. 2) Student management subsystem The student management subsystem includes five functions: student change, student change information management, financially difficult student management, scholarship management, student loan management, etc. Student change: realize the function of student change. The administrator first enters the student’s student ID for query, if the student does not exist, then return to continue to enter the query, otherwise enter the next step, you need to select the semester, type (withdrawal, suspension, resumption), remarks, and then click the “submit” button. Student change information management: Collect student change information, and realize basic functions such as adding, modifying, and deleting. Management of Students in Financial Difficulties: Realize the management of financially difficult students’ financial aid. Including: key information such as
semester, poverty level, funding status, etc. Award and bursary management: realize the management of student scholarships and bursaries. Student loan management: realize the management of student loans.
3) Student work team management subsystem The student work team management subsystem includes two functions: management regulations and activity management. The management regulations mainly realize the management of the relevant regulations of the student work team, and activity management mainly realizes the management of the team's usual activities.
4) Mail management subsystem The mail management subsystem includes four functions: view messages, create new messages, outbox, and trash can. It is similar to a web mailbox and can meet the basic needs of users for communication on the site.
(4) Analysis of background management functions
1) Basic information management Background management offers not only the query function but also greater authority: source data can be directly added, deleted, and modified, so misoperation must be avoided.
2) Course management The content of course management is more complicated, including the association between courses and students. After students select courses based on time, teachers, classrooms, and other elements, the information needs to be added to the student's course-related table in time.
3) Course inquiry Students can view the list of courses they have taken, and the addition of compulsory course information requires the administrator or teacher to operate.
2.3 Application of Clustering Algorithm in Student Performance Evaluation
Student performance is the most important part of the student information management system, an important part of teaching quality evaluation, and an important measure of whether a student has a good grasp of professional knowledge. In view of this, this section uses the principle of multivariate statistical analysis to comprehensively evaluate student performance through the method of cluster analysis. Suppose we divide the n-course data of m students into t classes, and the score of the η-th student (η = 1, 2, …, m) in the j-th course is $x_{\eta j}$. Then the average grade of the j-th course is
$\bar{x}_j = \frac{1}{m}\sum_{\eta=1}^{m} x_{\eta j}$   (1)
The sample range is
$R_j = \max_{\eta} x_{\eta j} - \min_{\eta} x_{\eta j}$   (2)
The standardized result is
$x'_{\eta j} = (x_{\eta j} - \bar{x}_j)/R_j$   (3)
When a student's grades are too high or too low, the range will increase significantly, and the weight of the class's grades will decrease. At this time, the influence of individual and accidental factors is too great. For this reason, the sample standard deviation $S_j$ is used instead of the range $R_j$, namely
$S_j = \left[\frac{1}{m-1}\sum_{\eta=1}^{m}\left(x_{\eta j} - \bar{x}_j\right)^2\right]^{1/2}$   (4)
Then the standardized score is
$x''_{\eta j} = W_j \cdot x'_{\eta j}$   (5)
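A compact implementation of Eqs. (1)–(5) might look as follows. This is a sketch under the assumption that the standardized, weighted scores are then fed to an off-the-shelf clustering routine; k-means is used here only as an example, since the text does not name a specific clustering algorithm.

```python
import numpy as np

def standardize_scores(X, weights=None):
    """X: (m students, n courses) score matrix.

    Uses the sample standard deviation (Eq. 4) rather than the range, as the
    text recommends when extreme grades would distort the range.
    """
    X = np.asarray(X, dtype=float)
    mean = X.mean(axis=0)                 # Eq. (1): per-course average grade
    std = X.std(axis=0, ddof=1)           # Eq. (4): sample standard deviation
    Z = (X - mean) / std                  # standardized scores (Eq. 3 with S_j)
    if weights is not None:               # Eq. (5): optional course weights W_j
        Z = Z * np.asarray(weights, dtype=float)
    return Z

# Example: 4 students, 3 courses, clustered into 2 groups.
scores = [[82, 90, 75], [60, 72, 68], [95, 88, 91], [58, 65, 70]]
Z = standardize_scores(scores)

from sklearn.cluster import KMeans        # assumed available; any clustering method works
labels = KMeans(n_clusters=2, n_init=10, random_state=0).fit_predict(Z)
print(labels)
```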
3 Experimental Research on the Reform of Student Information Management Thinking and Methods Supported by Big Data Technology 3.1 Subjects (1) In order to make the experiment in this article more scientific and effective, this experiment conducted a survey of students by going deep into a local university and issuing questionnaires. This time, it will focus on traditional student information management and student information management based on big data technology. The system conducts comparative analysis, and analyzes the obtained data using the analytic hierarchy process. (2) In order to further analyze the university student information management system supported by big data, this experiment conducted face-to-face interviews with teachers. This time, 30 teachers were interviewed, and the teaching age was more than three years. Ensure the scientificity of the experimental data. The gender ratio of teachers in this interview is equal. 3.2 Research Methods (1) Questionnaire survey method In this experiment, a targeted questionnaire was set up by asking relevant experts, and the questionnaire was conducted in a semi-closed manner, the purpose of which is to promote students to fill in correctly. (2) Field research method In this study, by going to a local university, observing the status quo of student information management, and recording data, these data provide a reliable reference for the final research results of this article. (3) Interview method In this experiment, we conducted face-to-face interviews with relevant teachers and students and recorded the results of the interviews. The results of the interviews were sorted and analyzed. These data not only provided theoretical support for the topic selection of this article, but also provided data support for the final research results of this article. (4) Mathematical Statistics Use relevant software to make statistics and analysis on the research results of this article.
4 Experimental Analysis of the Reform of Student Information Management Thinking and Methods Supported by Big Data Technology 4.1 Comparative Analysis of Student Information Management Methods This experiment surveyed students at a local university by questionnaire and conducted a comparative study of the traditional student information management method and the student information management system supported by big data technology. The data obtained are shown in Table 1.
Table 1. Comparative analysis of student information management methods
              Student information   Teaching information   Employment information   Admission information
Big data      68.1%                 65.3%                  63.7%                    62.1%
Traditional   49.3%                 53.4%                  55.7%                    56.3%
[Figure 1 presents the Table 1 percentages as a bar chart, comparing the big data and traditional approaches across the four information categories.]
Fig. 1. Comparative analysis of student information management methods
It can be seen from Fig. 1 that compared to the traditional student information management thinking and methods, the student information management system supported by big data technology has better performance in multiple information management aspects, especially in student-based information management. It is nearly 20% more
efficient than the traditional student information management thinking mode, which fully reflects the excellent performance of the student information management system based on big data technology designed in this article and shows that the traditional student information management thinking mode urgently needs to be reformed.
4.2 Performance Analysis of Student Information Management System Based on Big Data Technology
In order to further study the changes in student information management thinking and methods supported by big data technology, this experiment conducted face-to-face interviews with teachers about the student information management system based on big data technology. A ten-point scale was used to express each teacher's degree of recognition, where 1 means disagreement and 10 means agreement. The data obtained were analyzed using the analytic hierarchy process and are shown in Table 2.
Table 2. Performance analysis of student information management system based on big data technology
            Convenience   Accuracy   Robustness   Safety
Teacher1    8             7          5            5
Teacher2    7             6          7            6
Teacher3    9             7          8            6
Teacher4    9             5          5            5
Teacher5    8             4          5            8
Teacher6    7             8          6            8
Teacher7    7             7          5            4
Teacher8    9             7          7            8
…
Teacher30   9             6          8            5
It can be seen from Fig. 2 that most teachers agree with the student information management system based on big data technology, especially in the convenience of viewing information. The unanimous approval of teachers fully demonstrates the feasibility of a student information management system based on big data technology.
[Figure 2 plots each interviewed teacher's ten-point ratings of the system's convenience, accuracy, robustness, and safety, i.e., the data of Table 2.]
Fig. 2. Performance analysis of student information management system based on big data technology
5 Conclusion College education is gradually networked and informationized, and management methods are gradually being replaced by information management systems supported by big data technology. The management style of each school is different, and the realization of the function of the student information management system is also very different. This article has developed and completed a college student information management system to manage student information and course results. The system is based on big data mining technology, realizes the application of various types of service platforms and development platforms, and greatly improves the functions of student campus information.
References 1. Alameri, I.A., Radchenko, G.: Development of student information management system based on cloud computing platform. J. Appl. Comput. Sci. Math. 11(2), 9–14 (2017) 2. Yang, P., Sun, G., He, J.: A student information management system based on fingerprint identification and data security transmission. J. Electr. Comput. Eng. 2, 1–6 (2017) 3. Xiuming, L., Huaisheng, W.: Design and implementation of university student information management system of three tier architecture. Comput. Times 000(010), 95–98 (2018) 4. Oguguo, B., Nannim, F.A., Agah, J.J.: Effect of learning management system on student’s performance in educational measurement and evaluation. Educ. Inf. Technol. 26(2), 1–13 (2021) 5. Mohammad, R., Mazaheri: Bed information management system think aloud usability testing. J. Adv. Pharm. Technol. Res. 9(4), 153–157 (2019)
6. Studiyanti, S., Azmi, S.: Usability Evaluation and design of student information system prototype to increase student’s satisfaction (Case Study: X University). Ind. Eng. Manage. Syst. 18(4), 676–684 (2019) 7. Shi, Q.: Design and implementation of student information management system based on B / S and C / S. C e Ca 42(3), 1054–1058 (2017) 8. Sarhan, L.I., Atroshi, A.M., Ahmed, N.S.: A strategic planning of developing student information management system using SWOT technique. J. Univ. Hum. Dev. 2(3), 515 (2016) 9. Akomolafe, D., Olanipekun, K., Bello, O.: A Multi-channel cloud based student information management system. Br. J. Math. Comput. Sci. 14(4), 1–17 (2016) 10. Oyebola, B.O., Olabisi, K.O., Adewale, O.S.: Fingerprint for personal identification: a developed system for students attendance information management. Am. J. Embed. Syst. Appl. 6(1), 1 (2018) 11. Humadi, N., Jaafar, M.S., Shahrom, M.: Managerial implications of student activity information system implementation at faculty of business and management. Int. J. Serv. Manage. Sustain. 5(1), 41 (2020) 12. Alkhateeb, M.A., Abdalla, R.A.: Factors influencing student satisfaction towards using learning management system moodle. Int. J. Inf. Commun. Technol. Educ. Off. Publ. Inf. Res. Manage. Assoc. 17(1), 138–153 (2021)
Application of Big Data Technology in Marketing Practice Under the Background of Innovation and Entrepreneurship Xia Hua(B) , Jia Liu, and Hongzhen Zhang Haojing College of Shaanxi University of Science and Technology, Xi’an, Shaanxi, China
Abstract. In the context of the current education reform, improving students' innovation and entrepreneurship ability has become one of the ultimate goals of cultivating talents in universities. In today's data-driven environment, information is closely related to the development of many enterprises and also determines their future growth rate and business planning. Big data (BD) plays an increasingly important role in analyzing consumer behavior and planning strategies. However, the gap between reality and the ideal is obvious: the current education practice system is not perfect and its implementation is insufficient. Therefore, we should attach importance to cultivating students' innovation and entrepreneurship ability, and comprehensively upgrade and reform marketing practice activities by using BD technology. By reading and analyzing large amounts of data, BD management is realized, and the Internet is used for marketing practice to ensure its sound development. Experiments show that people are very satisfied with the practical application of BD technology, and satisfaction with the BD marketing practice strategy under the background of innovation and entrepreneurship is as high as 76.9%. In the context of innovation and entrepreneurship, this article analyzes the fundamental difference between new marketing and traditional marketing under the influence of the BD era, so as to provide a reference for academics and practitioners and apply it to practice. Keywords: Innovation and entrepreneurship · Big data technology · Marketing · Innovation and development
1 Introduction Innovation and entrepreneurship education and learning is the inevitable requirement of the development of socialist market economy, and is also the focus of current development. It is one of the important goals of education innovation learning to improve the innovation and entrepreneurship ability of college students. However, it is difficult to achieve this goal. It is very difficult for us to improve students’ entrepreneurial ability from theoretical knowledge. We need to cultivate innovation consciousness and entrepreneurship concept in practice, integrate resources with knowledge, material objects, funds and talents, innovate ideas, transform ideas into productivity, integrate © The Author(s), under exclusive license to Springer Nature Singapore Pte Ltd. 2022 B. J. Jansen et al. (Eds.): International Conference on Cognitive based Information Processing and Applications (CIPA 2021), LNDECT 84, pp. 460–467, 2022. https://doi.org/10.1007/978-981-16-5857-0_58
innovation into products and services, realize its value, and promote social and economic development. Therefore, innovation and entrepreneurship cannot do without practice; the problems encountered in practice can improve students’ ability to deal with emergencies. At present, the practice of education is stagnant due to the imperfect system and implementation problems, and it is in urgent need of comprehensive reform [1–3]. Since China seized the third wave of information technology reform, China’s comprehensive strength is constantly improving, and its international status is also gradually improving. Nowadays, most small and medium-sized enterprises seize the Internet technology and gradually complete the development and transformation. Therefore, in the development of modern small and medium-sized enterprises, it has long been inseparable from BD management. Using the Internet to carry out marketing practice activities is of great help to the development of small and medium-sized enterprises. Therefore, how to grasp the Internet technology for marketing practice has become the most important problem [4–6]. The amount of information of BD is rich. By reading and analyzing a large number of data, we can find out the useful information contained in it. BD technology has both advantages and risks for marketing companies. Enterprises must seize this opportunity and actively face the possible problems. In order to optimize the company’s marketing methods, effective measures must be implemented to protect consumers’ privacy information, improve the adaptability of marketing practice activities, and formulate marketing practice strategies in a planned way. The application of BD analysis technology will help to improve the scientificity and efficiency of marketing practice under the background of innovation and entrepreneurship [7–10].
2 Overview of Big Data Technology and Marketing Practice 2.1 Big Data Technology BD technology is a process based on data acquisition, data analysis, and real-time data presentation, and it is applied to data analysis in related fields. The application of BD technology provides intuitive data display, which can reflect changes in the data in a timely manner; corresponding adjustments can then be made according to the degree of change reflected by the data to ensure the accuracy of the whole data structure and data analysis. The concept of BD itself is relatively abstract. It mainly concerns effectively collecting basic data and unit data, then carrying out the periodic work of data upload, data integration, and data analysis, and analyzing the effectiveness of the entire body of data. Today, the application of Internet of Things technology in the United States, Europe, and other regions is relatively mature and widespread. China has also gradually attached importance to its role in the field of product production, and invested funds to support the development of Internet of Things technology. 2.2 Overview of Marketing Practice BD is an abstract concept and its definition has not been unified. BD refers to massive data that cannot be extracted, processed, mined, and managed
by ordinary software tools within a reasonable time and turned into information that helps enterprises plan. Wikipedia describes it as data whose scale is so large that it cannot be managed manually yet can form valuable information interpretable by human beings. BD is both a way of thinking and a method. The marketing model has roughly experienced three stages: traditional marketing, network marketing, and BD marketing. In traditional marketing, marketing practice activities take place in physical space, consumers can experience the products directly, and the market environment has regional differences and variability. Network marketing enables consumers to purchase goods worldwide through digital products on the Internet, and the consumption concept changes from being product-centered to serving consumers' personalized and diversified needs. Since 2000, with the improvement of computer information exchange, storage, and processing capacity, as well as the development of BD technology, a new marketing method has emerged as the times require. It enables enterprises to deeply mine the large amount of information consumers leave behind when shopping online. Enterprises can predict consumer behavior through BD analysis, customize products for consumers, and obtain an efficient return on investment. The cost-plus pricing method or the target profit pricing method is commonly used in marketing: an enterprise determines a target profit rate according to the expected total sales volume and total cost. The specific calculation formulas are as follows:
(1)
P = (C + G)/S
(2)
D = T + Fw + Vw + S
(3)
3 Experimental Ideas and Design At present, many enterprises have not realized that the era of BD has arrived, and their marketing practices lag behind. In their development, these enterprises have focused on how to improve productivity and production quality, while their marketing practice still relies on TV advertising and newspapers, so that even when products are manufactured, no one wants to buy them. This limitation frustrates and greatly restricts the development of enterprises. In the era of BD, the most important thing is the application of technology and data. In China, many enterprises have not realized the importance of data. Compared with some large foreign companies, they have started late, and their reserves of technology and talent are weaker than those abroad, which also makes their competitiveness lower than that of some developed countries. In the era of BD, data collection is the most important task: enough data can improve the competitiveness and development of enterprises. At present, however, many enterprises still do not pay enough attention to this aspect; in market research, researchers are often perfunctory and do not understand customer information well enough. As a result, the market competitiveness of many enterprises is shrinking, and they have gradually lost the ability to
innovate. In the study design, this paper adopts two forms: questionnaire survey and field interview. A total of 996 citizens were selected as the survey sample; the interviewees include students, ordinary people, and professionals. The aim of this study is to explore the application of BD technology in marketing practice in the context of innovation and entrepreneurship.
4 Discussion 4.1 Current Situation Analysis of Big Data Technology in Marketing Practice Under the Background of Innovation and Entrepreneurship The experimental results were investigated and analyzed, and the results are shown in Table 1. In the management of enterprises, planning marketing practice activities is very important preparatory work. Many enterprises therefore plan marketing practice over long time horizons, with a single planning period spanning many years of development. However, in the era of BD, such long-term planning is no longer applicable. The economic market is changing rapidly, and in the BD environment many unexpected situations appear in marketing, which causes plans to deviate from reality. As modern enterprises develop, they face more and more forks in the road and much uncertainty in choosing their future direction, which requires that the planning period of marketing practice be shortened to keep pace with the BD era.
Table 1. Problems in enterprise marketing practice under the background of the big data era
Investigation factors                                                           Recognition ratio
Not enough attention to marketing practice                                      75.6
Lack of new technology                                                          74.1
Ignoring the importance of market research                                      82.9
No integration of the innovation and entrepreneurship education concept         90.4
In addition, this paper investigates the public’s satisfaction with the practical application of BD technology in marketing under the background of innovation and entrepreneurship, as shown in Fig. 1. It can be seen from Fig. 1 that people are very satisfied with the practical application of BD technology in marketing under the background of innovation and entrepreneurship. The traditional means of marketing promotion mostly rely on multimedia, that is, through television advertising and elevator advertising to convey information and promote goods. However, with the coverage of BD, the marketing practice pays more and more attention to the two-way interactive mode, that is, in the process of promotion; it is not only the unilateral output information of enterprises, but also the response of consumers. The booming Internet and smart phones are high-quality choices for two-way interaction. Through the reform and
improvement of promotion strategies, enterprises can continuously use the advantages brought by BD to increase brand benefits and develop greater advertising value for themselves. Therefore, for marketing practice, it is a correct choice to innovate the traditional promotion strategy in combination with the era of BD.
[Figure 1 is a chart of the percentage of satisfaction degree, with the response categories very satisfied, quite satisfied, average, and dissatisfied.]
Fig. 1. People's satisfaction survey on the practical application of big data technology in marketing under the background of innovation and entrepreneurship
This paper further investigates the basic framework of BD marketing practice and obtains the framework diagram shown in Fig. 2. Marketing encompasses knowledge of the market, of selling, and of buying; people sell and buy through marketing activities. Nowadays, the traditional marketing concept is no longer applicable, and a new marketing model has emerged. This paves the way for a great change in the marketing concept and for the application of BD. It promotes market development and marketing practice, and improves the operating efficiency of enterprises. Companies can improve the corresponding information, improve marketing efficiency, and create more profits. Enterprises must further mine and analyze their sales data, take the most influential factors as the classification criteria, and achieve more detailed grouping. For different categories and companies, different marketing practice strategies can be formulated to maintain and manage customer relationships.
[Figure 2 shows big data marketing practice at the center, linked to: optimization of advertising monitoring and creativity; drawing user profiles and locating target groups; precise push and programmatic purchase; improving user experience and customer relationship management; channel optimization, opening up online and offline marketing; and insight and research to develop new markets.]
Fig. 2. Basic framework of big data marketing practice
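The discussion above notes that enterprises must mine their sales data, take the most influential factors as classification criteria, and form customer groups that each receive a different marketing practice strategy. The sketch below illustrates one common way to do such grouping; the customer features, values, and cluster count are invented for illustration and are not drawn from the paper's survey.

```python
import numpy as np
from sklearn.cluster import KMeans
from sklearn.preprocessing import StandardScaler

# Hypothetical per-customer features derived from sales records: annual spend (CNY),
# purchase frequency, and days since last purchase. The choice of fields is illustrative.
customers = np.array([
    [12000, 24, 10],
    [800,    3, 120],
    [9500,  18, 25],
    [400,    2, 200],
    [15000, 30, 5],
])

# Standardize so no single factor dominates, then group customers into segments
# that can each be targeted with a different marketing practice strategy.
X = StandardScaler().fit_transform(customers)
segments = KMeans(n_clusters=2, n_init=10, random_state=0).fit_predict(X)
print(segments)  # e.g. high-value frequent buyers vs. occasional buyers
```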
(2) Increase the relationship between customers and enterprises With the Internet, people take note of purchase goods on the Internet. In the context of BD, enterprises can make the best of the Internet, develop marketing tools suitable for online sales, increase customers and ameliorate the total profit margin. Before making marketing strategy, we should do a good job in market research, and also need to fully understand the needs of consumers. In the era of BD, we should bring into full play to the advantages of BD and understand the consumption level of consumers. Through BD technology, formulate marketing strategies suitable for regional consumption. (3) Strengthen the teaching of practical training Lead students to the front line of business sales, experience the status and role of BD in marketing, and deeply understand the guiding role of BD in marketing activities. Regular guidance unit training, professional teachers should also go to more companies to understand the company’s marketing department’s application of BD, so as to spread the latest information to students in the classroom learning process. We should also try to build a three-dimensional learning interaction between the company and the school, so that students can apply the knowledge learned in the theoretical class to the practice of the company. (4) The creation of “smart” marketing in the process of marketing In the marketing process, it is essential to set up a specific objective production service standard, combine the key experience in the current production process, effectively analyze the whole production process, effectively integrate important production risk factors, and fully analyze the idea of maintaining production stability in the production process. For instance, through the extensive control of production data, Internet of things technology, cloud computing and other technical content analysis, to form a specific automatic production system with production response
capacity, so as to ameliorate the technical requirements of automatic analysis of product production data.
5 Conclusions Innovation and entrepreneurship education mode can not only educate students, but also cultivate their practical skills and professional quality, and improve their work adaptability. So as to ameliorate the students’ ability of entrepreneurship and innovation, it is also to cultivate their good work quality. In a word, with the era of BD, the economic system of the market is also constantly changing, and most enterprises are paying attention to the economic system of the market in the new era in the future. Therefore, enterprises must be properly transformed and adjusted to meet these challenges. The traditional marketing model is greatly affected, and enterprises must change and adjust, which has an impact on the enterprises themselves. The company must adjust its marketing practice plan according to its own situation. BD technology not only attacks the market, but also brings a lot of opportunities. It is necessary for enterprises to grasp the opportunity and formulate marketing strategies in the era of BD. We can also introduce new talents, improve the marketing practice level of enterprises at this stage, and then improve the operation efficiency of enterprises by means of improving new marketing practice strategies and increasing the degree of attention to customers. Acknowledgments. Teaching reform project of education department of Shaanxi province: research on the teaching reform of international trade based on PBL model -- taking the course of export commodity exhibition and negotiation as an example, project No.:19BY142. School-level Teaching reform project: research on the optimization of practical teaching system and teaching content reform of marketing major independent colleges from the perspective of innovation and entrepreneurship, project No.:19JGY02.
References 1. Li, X.: Research on innovation and entrepreneurship education under the background of big data. Value Eng. 036(026), 159–160 (2017) 2. Xiumei, L.: Approaches to innovation and entrepreneurship education in universities under the background of big data innovation and entrepreneurship education in colleges and universities under the background of big data. J. Qiqihar Univ. (Philos. Soc. Sci. Ed.) 000(012), 149–151 (2018) 3. Liu, J., Liao, C.X., Zhang, L.S.: Research on the incentive mechanism of innovation and entrepreneurship in higher vocational colleges. DEStech Transactions on Social ence Education and Human ence, (icesd) (2020) 4. Park, C.W.: A study on the effect of entrepreneurship and self-efficacy on knowledge management: focusing on female CEO. Asia-Pac. J. Bus. Ventur. Entrepreneurship 11(6), 11–26 (2016) 5. Yancheng, Q., Mengyun, M.: Research on teaching mode of college students’ innovation and entrepreneurship in environmental design under the background of “Internet+.” Design 000(011), 71–72 (2018)
6. Huali, F., Shunyu, G., Ya, W.: Research on promoting college students’ innovation and entrepreneurship ability under the background of mass entrepreneurship and innovation research on the path of improving college students’ innovation and entrepreneurship ability under the background of “mass entrepreneurship and innovation.” Res. Pract. Innovation Entrepreneurship Theor. 001(008), 119–120 (2018) 7. Gaofeng, C.: Research on enterprise innovation management mode under the background of big data. Theor. Pract. Innovation Entrepreneurship (2018) 8. Dong-Shui, J., Chao, L., Li-Li, L.I., et al.: A research on the application strategies of new media in university entrepreneurship education and practice. J. Heb Univ. Eng. (Soc. Ed.) (2018) 9. Liang, W., Chen, L., Rui, G.: Review on the development and research of marketing practice teaching of China. J. Anhui Univ. Technol. (Soc. Ences) (2016) 10. Na, L., Hongxia, L.: University P. research on the maker culture in the “Internet+” age. Theor. Pract. Innovation Entrepreneurship (2018)
Computer Aided Design and Optimization of Adsorbent for Printing and Dyeing Wastewater Jia Lin(B) Henan Polytechnic University, Jiaozuo 454000, Henan, China
Abstract. Adsorption technology is widely used in the removal of dyes in printing and dyeing wastewater. At present, commercial activated carbon is widely used in the research of computer aided design of printing and dyeing wastewater adsorbent. Due to the high cost and difficult regeneration, its application and development are limited. Therefore, it is necessary to study new adsorbents with low cost to replace activated carbon. This paper briefly introduces the research of industrial waste, natural materials and biosorbent in CAD. Keywords: Printing and dyeing wastewater · Computer aided · Adsorbent · Dye
1 Introduction Textile industry wastewater is one of the sources of water pollution which poses a threat to the environment. The most serious pollution of printing and dyeing wastewater comes from the processes of desizing, scouring and bleaching, dyeing, printing and finishing. Due to the continuous renewal of treatment process and the increase of dye varieties, it is more difficult to treat printing and dyeing wastewater. In recent years, many literatures have reported the treatment of dye wastewater. The treatment technology can be divided into chemical method, biological method and physical method. Chemical method is generally coagulation or flocculation combined with filtration or flotation method, the treatment cost of this method is high, the sludge after treatment can not be effectively treated, excessive use of chemicals will also cause secondary environmental pollution. Biological methods include fungal decolorization, microbial degradation, etc., but due to the limitations of biological methods, they can not remove dyes very well, especially the degradation rate of azo dyes is very low. Physical methods such as membrane separation and adsorption are widely used. The main disadvantages of membrane separation technology are that the membrane is easy to be polluted, the service life is short, it needs to be replaced regularly, and the use cost is high. The adsorption method is low cost, flexible, easy to operate, and is not sensitive to toxic pollutants in the treatment process, so the treatment effect is better [1]. The adsorption method is more effective than other methods.
© The Author(s), under exclusive license to Springer Nature Singapore Pte Ltd. 2022 B. J. Jansen et al. (Eds.): International Conference on Cognitive based Information Processing and Applications (CIPA 2021), LNDECT 84, pp. 468–473, 2022. https://doi.org/10.1007/978-981-16-5857-0_59
Activated carbon is a common adsorbent, but the use cost is high, so we need to find a low-cost adsorbent with good adsorption effect to replace activated carbon. Some industrial and agricultural wastes, natural materials and biosorbents are potential economic substitutes, and most of them have been tested to remove dyes from wastewater.
2 Agricultural and Industrial Waste 2.1 Agricultural Solid Waste Agricultural and forestry solid wastes (such as sawdust and bark) from the forestry industry are easily available and have great potential as adsorbents. Sawdust is cheap and contains a variety of organic compounds, which combine with dye molecules through different mechanisms to achieve dye removal. Recently, researchers have studied the effect of sawdust on the removal of pollutants in water and confirmed that sawdust can remove dyes from wastewater. The tannin content of bark is very high, and its polyhydroxy polyphenols are considered the active substances in the adsorption process. Morais et al. [10] studied the adsorption of dyes on eucalyptus bark; under the conditions of pH = 2.5 and 18 °C, the adsorption capacity of 1 g of bark was 90 mg. Under the same conditions, experiments were carried out with commercial activated carbon and eucalyptus bark, and the results showed that the adsorption capacity of eucalyptus bark was about 12 times that of activated carbon. Therefore, there is potential to use eucalyptus bark on an industrial scale. 2.2 Industrial By-Products Although fly ash may contain some harmful substances such as heavy metals, it has been widely used in many countries. The sugar residue fly ash produced by the sugar industry, however, does not contain toxic metals and can be widely used in dye adsorption. Another abundant industrial by-product is red mud, the bauxite processing residue discarded in alumina production. Namasivayam et al. [1] suggested that red mud be used as an adsorbent to remove Congo red, and the maximum adsorption capacity was 405 mg/g.
3 Natural Materials 3.1 Clay Minerals Natural clay minerals have low cost, abundant reserves, strong adsorption capacity and ion exchange potential, so they can be used as adsorbents. According to different hierarchies, clay minerals can be divided into montmorillonite, mica, kaolin, serpentine, Paishi, vermiculite and sepiolite. Their adsorption capacity is generated by the net negative charge of mineral structure, which can absorb positively charged materials. In addition, their adsorption characteristics also come from their high specific surface area and high porosity. In recent years, the interaction between dyes and clay minerals such as bentonite, kaolin and diatomite has been extensively studied, which can adsorb not only
inorganic molecules but also organic molecules. Clay minerals have a strong affinity for both cationic and anionic dyes; however, their adsorption capacity for basic dyes is much higher than for acid dyes because of the ionic charge of the dyes and the properties of the clays. The adsorption of dyes on clay minerals depends mainly on ion exchange, which indicates that pH has a great influence on the adsorption capacity. A-shout et al. found that dye adsorbs on the diatomite surface through physical adsorption (depending on particle size) and electrostatic interaction (depending on pH) [2]. It has been demonstrated that clay minerals have a good ability to remove dyes. Espantaleon et al. reported a bentonite material with an adsorption capacity of 360.5 mg/g; owing to its high specific surface area it removes basic dyes well and is a good adsorbent. 3.2 Siliceous Material Some natural siliceous materials (such as silica spheres, alumina, perlite and dolomite) are increasingly used in the treatment of dyeing wastewater. Among inorganic materials, the surface of silica spheres is hydrophilic because of its silanol groups. Their porous structure, high specific surface area and mechanical stability also make them suitable adsorbents for purification applications. However, because siliceous materials have low resistance to alkaline solutions, they are only used at pH below 8; in addition, the acidic silanol groups on their surface cause strong and irreversible non-specific adsorption, so these negative characteristics of the adsorbents must be eliminated. To promote the interaction between siliceous materials and dyes, their surface can be modified with silane coupling agents and amino compounds. The research of Phan et al. shows that modified silica spheres have a better ability to remove acid dyes from colored wastewater. Alunite is a mineral found with pyrite and contains about 50% silica. Unprocessed alunite shows no adsorption. Ozawa et al. used modified alunite to remove acid dyes from wastewater; under the same experimental conditions, 1 g of commercial activated carbon adsorbed 57.47 mg of Acid Blue 40, while the modified alunite had a stronger adsorption capacity.
4 Development Status of Dyestuff Production In recent years, the output of dyestuffs in China ranks first in the world. About 10%–20% of dyes are discharged with wastewater in the process of dye production and processing, which has become one of the main pollution sources of water. Therefore, the treatment of dye wastewater is the focus of the chemical environmental protection industry in recent years, and the treatment of dye wastewater is urgent. Dye wastewater has the characteristics of high content of organic pollutants, drastic changes in water quality, various dyes and large changes in pH value. At present, physical adsorption, biological, chemical oxidation, flocculation and membrane separation are mostly used in the treatment of dye wastewater at home and abroad. The adsorption method has a special position in the field of wastewater treatment because it can selectively enrich some compounds. The commonly used adsorbents are activated carbon, molecular sieve, adsorption resin and some other adsorption materials. Adsorption method has the characteristics of good adsorption effect, simple operation and wide application range, which has been widely
used in the practical work of dye wastewater pollution treatment. The author focuses on the comprehensive description of the technology of dye wastewater treatment by adsorption method.
5 Main Adsorption Methods Used 5.1 Activated Carbon Adsorption Activated carbon is generally made from charcoal and other materials through carbonization and activation at high temperature. It has a highly developed pore structure on its surface and inside and a large specific surface area (up to 3000 m²/g), which gives activated carbon great value as an adsorbent and catalyst carrier. Physical adsorption and chemical adsorption are generally combined. In recent years, scholars at home and abroad have done a great deal of research on treating dye wastewater with activated carbon adsorption. Using triblock copolymer F108 as the template and phenol/formaldehyde as the carbon source, Xu Enbing prepared mesoporous carbon with a maximum adsorption capacity of 421 mg/g for methylene blue; compared with conventional activated carbon, the mesoporous carbon adsorbed methylene blue better. Zhang Jinfeng et al. used phosphoric acid treatment with microwave irradiation to prepare peanut shell activated carbon and used crystal violet solution as simulated dye wastewater; the experimental results showed that peanut shell activated carbon is a cheap adsorbent with a high removal rate of crystal violet. Dan Xuanxuan et al. used Xinjiang thin-shelled walnut shell as the raw material to prepare activated carbon and used microwave, light wave, light wave combined with C1, and light wave combined with C2 to assist the activated carbon in adsorbing malachite green dye solution.

$i_t = \sigma(W_i \cdot X_t + U_i \cdot h_{t-1} + b_i)$ (1)

$a_t = \tanh(W_a \cdot X_t + U_a \cdot h_{t-1} + b_a)$ (2)

$i_t = \mathrm{Sigmoid}\big(\mathrm{Conv}(x) + \mathrm{Conv}(h_{t-1})\big)$ (3)
5.2 Resin Adsorption Method The porous polymer resins developed in recent years have the advantages of high adsorption efficiency, easy regeneration and stable performance; they can be packed into stationary-phase columns for continuous treatment of dye wastewater and are easy to operate. Yu Xianglin et al. studied the adsorption of the cationic dyes malachite green, methylene blue and neutral red on acrylic superabsorbent resin, and investigated the effects of resin composition, adsorption time, pH value, resin amount and dye concentration on the adsorption of the cationic dyes [3]. The experimental results show that superabsorbent resin containing carboxyl and sulfonic groups adsorbs the three cationic dyes well, with a high adsorption rate. Reis et al. used sodium dodecyl sulfate (SDS) to modify a cationic polymer resin to treat methyl green and malachite green in dye wastewater.
6 Architecture of CoopCAD 6.1 The Basic Model of the CSCW System CSCW can be understood as follows: in a computer-supported (CS) environment, a group works together to complete a common task (W), so the "common task" and the computer-supported "common environment" are the most critical parts of the CSCW concept. The so-called "common task" is the task that the cooperators complete together. In a traditional time-sharing system, by contrast, multiple users concurrently execute relatively separate and independent tasks, which is not collaborative work toward a common task; a collaborative design or co-editing system, on the other hand, is a multi-user cooperative system that enables group members to operate on an entity together. The system must therefore ensure that all relevant personnel can cooperate and communicate closely while working together to accomplish the same task. The so-called "common environment" refers to the shared environment in which the cooperators work. This environment transmits all kinds of on-site information to all (or a group of) participants, so that they know the situations of others in time and can work together. It therefore lies at the high end of the shared-environment dimension within the scope of CSCW. The basic model for realizing a CSCW system is shown in Fig. 1.
Fig. 1. Basic model of CSCW system
6.2 Working Model of the CoopCAD System CoopCAD is a collaborative computer-aided system based on replicated application-sharing technology. It can be divided into a network information service and a collaborative work service. The information service provides information query and resource sharing for Internet users; the collaborative work service is divided into inter-group and intra-group parts, and information interaction between groups occurs mainly through asynchronous means (e-mail, WWW home pages). CoopCAD therefore combines document MIS and CSCW technology around the design scheme and is centered on the corresponding network information servers (database server, FTP server, web server, email server) and collaborative work servers (collaboration management server and collaboration server) [4]. To serve its users, CoopCAD should first provide a rich multimedia human-computer interaction interface (mainly real-time video and audio, text-based messaging and other services) so that participants can communicate fully; second, it should provide sufficient support for information exchange and sharing (mainly network transmission, notification filtering, access control, concurrency control, etc.); finally, it needs to provide a multi-level, multi-group collaborative design mode to meet the needs of actual design work.
7 Concluding Remarks This paper discusses the principles and key technologies of collaborative CAD systems. Collaborative design is a knowledge-based computing process: it requires not only knowledge from different fields and the experience of experts, but, more importantly, an effective mechanism to integrate the design activities of different experts. The CoopCAD system is a basic platform for collaborative CAD development. To support better design, more agent modules need to be added to CoopCAD, for example for structural analysis, stress calculation and strength checking during the design process; this is the next goal for CoopCAD.
References
1. Lu, Y., Wang, L., Deng, J., Yu, Z., Wang, M.: Surface carboxylated mesoporous TiO2: efficient treatment of low-concentration yanglihong GTL printing and dyeing wastewater. Green Technol. (24), 65–67 (2020)
2. Zhi, S.: Preparation and application of organic modified diatomite adsorbent. Sichuan Normal Univ. (2020)
3. Yanying, W., Xinxin, L., Jinli, W., Haidong, W.: Research progress of dyeing wastewater treatment with modified activated carbon. Dyes Dyeing 57(04), 58–61 (2020)
4. Xuemin, D., Ying, L., Guoying, N.: Research progress on adsorption effect of modified agricultural and forestry wastes on printing and dyeing wastewater. Dyeing Finish. Technol. 42(08), 1–4 (2020)
Research on the Development Path of Digital Inclusive Finance Based on Convolutional Neural Network WenHua Li(B) Shandong Xiehe University, Jinan 250109, China
Abstract. Along a financial development path informed by convolutional neural networks, "digital Inclusive Finance" in the Internet era can remedy the defects of the traditional financial system, make up for the shortcomings of traditional finance, reduce the cost of financial services, and better promote the implementation of inclusive finance. Keywords: Convolutional neural network · Digital Inclusive Finance · Mobile payment · E-commerce platform · P2P network lending
1 Convolution Neural Network 1.1 The Concept of Convolution Neural Network The convolutional neural network (CNN) model is inspired by the natural visual cognitive mechanism of biology and is widely used in image recognition, classification and other fields. Its advantage is that it accurately extracts the local correlations of data features, improving the accuracy of feature extraction; it is an excellent deep learning classification algorithm [1]. It is also designed to process and classify high-dimensional data, and it has become one of the research hotspots in many scientific fields. 1.2 Main Components As a deep learning model, a CNN realizes supervised learning through a multilayer back-propagation neural network. Its structure usually includes an input layer, convolutional layers, ReLU layers, pooling layers, fully connected layers and an output layer; the convolutional, pooling and fully connected layers are called hidden layers, as shown in Fig. 1. The convolution layer can reduce the memory occupied by a deep network: in a CNN convolution layer, the input feature vector from the previous layer is convolved with the convolution kernel to obtain the output vector. The connection between the convolution layer and the pooling layer is local, that is, each neuron connects only to several nearby nodes of the input layer. The fully connected layer is usually a traditional BP neural network that adopts full connectivity.
Fig. 1. CNN convolution neural network structure
After the computation of the fully connected layer, the results are transferred to the output layer. The output layer connects the classifiers and uses a logistic function or the normalized exponential function softmax to compute and output the final classification results. In the forward propagation stage, the feature map of the j-th output feature of convolution layer l of the CNN is set as $X_j^l$:

$X_j^l = f\Big(\sum_{i \in M_j} X_i^{l-1} * K_{ij}^l + b_j^l\Big)$ (1)
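As an illustration of the layer stack described above (input, convolution, ReLU, pooling, fully connected layer and a softmax output), the following minimal PyTorch sketch assembles such a network; the channel counts, input size and two-class output are illustrative assumptions rather than values taken from the paper.

```python
import torch
import torch.nn as nn

class SimpleCNN(nn.Module):
    """Minimal CNN: convolution -> ReLU -> pooling -> fully connected -> softmax scores."""
    def __init__(self, num_classes: int = 2):              # two classes is an assumption
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(1, 8, kernel_size=3, padding=1),      # convolution layer
            nn.ReLU(),                                       # ReLU layer
            nn.MaxPool2d(2),                                 # pooling layer
        )
        self.classifier = nn.Linear(8 * 14 * 14, num_classes)  # fully connected layer

    def forward(self, x):
        x = self.features(x)             # local connections share the convolution kernel
        x = x.flatten(1)
        logits = self.classifier(x)
        return torch.softmax(logits, dim=1)   # output layer applies softmax

# Example: a batch of four 28x28 single-channel inputs (illustrative shape).
scores = SimpleCNN()(torch.randn(4, 1, 28, 28))
print(scores.shape)  # torch.Size([4, 2])
```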
2 Development and Application of Digital Technology in Inclusive Finance 2.1 Mobile Payment Technology Helps to form Inclusive Financial Payment Settlement System Infrastructure construction is the foundation of inclusive financial development, and also the most intuitive performance of inclusive financial development. Payment and settlement system is an important part of financial infrastructure. In recent years, on the whole, the scale of China’s financial infrastructure system has been expanding, the number has gradually increased, and the constituent elements have also tended to be diversified [2]. However, the increasing availability of traditional financial services has not completely solved the problems of high cost and unsustainability in the field of inclusive financial services. The financial products provided by some remote areas are limited to traditional financial services such as deposit, loan and foreign exchange. The convenience, diversity and price rationality of financial payment and settlement system can not meet the financial needs of some regions. The wide application of digital technology represented by mobile payment can effectively solve the above problems and help the formation of inclusive financial payment and settlement system in China. Mobile payment provides basic payment and settlement services, mainly involving bank users, third-party payment platforms and communication operators.
2.2 E-commerce Platforms Have Been Deployed in the Field of "Agriculture, Rural Areas and Farmers" to Further Optimize the Inclusive Financial Service Model Many large-scale e-commerce platforms use their own cash flow, consumption flow and information flow to organically integrate online and offline resources and solve a package of financial service problems in the field of "agriculture, rural areas and farmers", such as transfer payments, commodity sales, micro loans, financial investment and living expenses, under the "e-commerce platform whole-industry-chain rural finance" mode, which has further optimized the service mode of inclusive finance. In this mode, the e-commerce platform is the core of the whole rural e-commerce financial platform, and big data is the cornerstone of its operation. The e-commerce platform provides financial services for customers by integrating online resources: it constructs big data from historical e-commerce transaction data and other external data collected by rural service points, and then analyses the data with Internet technologies such as cloud computing to perform credit evaluation and risk control for customers. Using this information, the e-commerce platform either provides credit loans for customers trading on the platform or seeks cooperation with rural financial institutions, with the financial institutions providing the loans and the e-commerce platform providing other financial services.
3 P2P Network Lending Realizes a Win-Win Situation for Inclusive Financial Service Objects On the capital side, P2P network lending has broadened the investment channels for people with low income and brought them investment returns. With the gradual awakening of national awareness of investment, of risk and return, and of credit, China is entering a new period of asset explosion and wealth appreciation. Moving from a heavy dependence on bank savings toward the capital market, national financial investment, entrepreneurship and wealth management will gradually become the mainstream of the current financial market. However, most traditional financial institutions set thresholds for customers, and people below the average income line are excluded from the financial market. The emergence of P2P network lending further broadens residents' investment channels, and the aristocratic attribute of wealth-management business no longer exists. On the asset side, P2P network lending plays an irreplaceable role in the financing of small and micro enterprises, the key service objects of inclusive finance [3]. The P2P online lending industry makes full use of the advantages of the Internet and information, and provides efficient and convenient loan services for borrowers with its "small, fast and flexible" characteristics. According to the industry monthly report released by Online Lending House, the P2P online lending industry achieved an overall turnover of 171.371 billion yuan in June 2016, and by the end of June 2016 the cumulative trading volume of the industry had reached 2,207.506 billion yuan.
4 Coupling Innovation of Digital Technology and Inclusive Finance 4.1 Digital Technology Reduces the Cost of Inclusive Finance The extensive application of digital technology in inclusive finance has weakened the role of financial intermediaries, and financial disintermediation has become commonplace. Taking P2P network lending as an example, the demanders and suppliers of funds exchange information through the network and use the lending platform and payment system to complete the matching, pricing and trading of funds, without banks, securities companies or exchanges as intermediaries. Digital technologies represented by mobile payment, social networks and search engines are widely used, which greatly reduces the cost of information collection and gradually forms a "full transaction possibility set" between capital supply and demand. In addition, like P2P network lending, the "we-media" mode of financial services does not need physical premises, which avoids the construction costs and staff salaries of business outlets and further reduces the operating costs of inclusive finance. 4.2 Digital Technology Expands the Service Boundary of Inclusive Finance The wide application of digital technology makes convenience, efficiency and timeliness the embedded label of Internet finance, and the service boundary of inclusive finance is further expanded. When the deep integration of Internet technology and financial services effectively increases the supply of financial services, the supply curve shifts to the right, the price of financial services declines and the radius of financial services expands. The in-depth application of Internet technology will promote the innovation of inclusive financial products and services [4]; on the basis of traditional deposit, loan and foreign exchange business, more financial products and services will be derived to meet the personalized and diversified needs of inclusive financial customers. The talents needed by Internet finance enterprises generally fall into four categories: "typical" traditional financial talents, financial product R&D talents, Internet technology talents, and Internet operation and promotion talents. At present, the three categories most urgently needed in the industry are technical, financial and operations personnel. Technical staff should understand PC development, mobile development and product development; financial staff should understand financial product design, financial modeling, risk control and big data analysis. For platforms carrying out offline business, there is also an urgent shortage of people with local contacts and of experienced account managers. Operations staff should not only know how to follow hot topics but also know something about finance and be familiar with the ways of Internet communication; people like Lei Jun and Diaoye, who can play with concepts and packaging, can successfully attract public attention.
4.3 Digital Technology Improves the Capacity of Inclusive Finance Inclusive finance capacity means that financial institutions oriented toward inclusive finance minimize their risk exposure while maximizing profits; it is an important guarantee for inclusive finance to achieve both commercial and policy objectives. First, digital technology can maximize profits by reducing costs: information storage technology can hold massive customer data, and data mining and processing technology can filter out effective information by structuring and standardizing that data, improving the accuracy of data analysis. Second, digital technology can improve the risk control ability of inclusive financial institutions. As Internet technology permeates people's lives, customers leave more and more traces online. Although this information cannot directly reflect a person's credit attributes, analysis by information mining and processing technology can play a useful auxiliary role in the loan decision-making of financial institutions.
5 Conclusion In essence, a financial risk early-warning algorithm based on CNN deep learning designs a highly accurate intelligent classifier that, once trained, identifies the level of financial risk and thereby realizes financial risk early warning. In this method, the data set is assembled according to the risk indices of the financial system and preprocessed so that the corresponding experiments can be carried out; combined with a CNN, a financial system risk early-warning model is established. Experimental results show that this method has higher classification accuracy and a lower false alarm rate; compared with other neural network models, it has better classification performance and higher accuracy of financial system risk early warning. Trust and securities firms have a relatively high entry threshold and generally prefer experienced job seekers, although some companies open a certain number of positions for new graduates every year. For example, Zhongrong Trust recruited 140 new graduates last year, with positions concentrated in trust manager assistant roles in front-desk business departments and in back-office risk control, legal and product support. Trust staff generally start as assistants and are promoted along the path of trust manager and senior trust manager before taking charge of business. In recent years, regulation of trust companies has become more and more standardized; the earlier situation in which staff could be promoted after working for half a year to one year will be gradually curbed, and after standardization the promotion interval at each level will be about two to three years. Securities firms are similar. Acknowledgements. 2019 Jinan City Philosophy and Social Science Planning Project: Research on the Development Path of Digital Inclusive Finance in Jinan City, Item number: JNSK19C64.
References
1. Yin, H.: Research on the development status and path of digital inclusive finance in Guangdong province. Econ. Manage. Digest (21), 1–2+4 (2020)
2. Wei, L.: Anchoring rural revitalization and optimizing the development path of digital inclusive finance. China Rural Finance 16, 52 (2020)
3. Yanqing, G.: Ecological framework and implementation path of rural digital inclusive finance development. Financ. Theor. Pract. 03, 32–39 (2020)
4. Haijing, W., Xiaoping, Z.: Innovation path of digital inclusive finance development. Nat. Circ. Econ. 06, 162–163 (2020)
Construction and Application of Virtual Simulation Platform for Medical Education Based on Big Data Xiafukaiti Alifu(B) , Nuerbiya Wusuyin, and Maimaiti Yasen School of Public Health, Xinjiang Medical University, Urumqi 830011, Xinjiang Uygur Autonomous Region, China
Abstract. Using the experience of flipped classroom practitioners at home and abroad for reference and making full use of modern information means, a medical virtual simulation experiment platform was constructed. Reasonable teaching design and application of flipped classroom in medical experiment teaching solve the learning requirements of students at different levels, enhance students’ autonomous learning ability, exercise students’ independent thinking and innovative thinking ability, and realize personalized learning. The platform not only provides a new method for basic medical experiment teaching, which makes it feasible to introduce flipped classroom into basic medical experiment teaching, but also promotes the implementation of information education, and improves the teaching quality of medical students’ experiment teaching and professional subjects. Keywords: Virtual reality · Clinical skills · Simulation training
1 Introduction We urgently need to make scalable, efficient and high-quality medical and health professional education the focus of the strategic program. Fully trained health care workers are essential to ensure high-quality health care and achieve universal health coverage. With the continuous progress and wide application of digital technology, digital technology is regarded as the hope for efficient medical education and training [1]. VR technology was born in the 1960s and is a comprehensive application of sensor, human-computer interaction, computer graphics, multimedia, network, artificial intelligence and other technologies. It includes a head-mounted display (HMD), a head-tracking system, headphones and operation/navigation equipment, providing a multi-sensory, three-dimensional (3D) environment that allows users to immerse themselves completely in a virtual world. Recently, low-cost virtual reality (VR) devices such as the Oculus Rift have become available.
2 Implementation of the Training Evaluation Module Based on a Decision Tree This section describes the implementation of the training evaluation module, from the initial model construction, to the associations between the evaluation indices, to the specific data processing; through design optimization with different weights, the optimal case-detection evaluation decision tree model is obtained. 2.1 Decision Tree The decision tree is a classic machine learning algorithm. It is a classification and prediction model based on a tree structure and is highly practical among classification algorithms [2]. The core idea of the decision tree algorithm is the greedy strategy: by making the locally best choice at each step, the training data are processed recursively and the corresponding rule decision tree is generated. The generated decision tree model is then used to process the preprocessed data, and classification evaluation is carried out according to the rules it encodes. Figure 1 shows a decision tree in which A, B, C and D are attribute names and A is the root node; A1, A2, A3, B1, B2, C1, D1, D2 and D3 represent the values of attributes A, B, C and D respectively. When attribute A takes A1 and attribute B takes B1, the sample belongs to class 1; when attribute A takes A3 and attribute D takes D3, it also belongs to class 1.
Fig. 1. General structure of decision tree
2.2 Establishment of the Decision Tree Model The rules for constructing the decision tree are obtained by training on the data set; different training samples yield different decision tree models, so the selected root node and internal nodes differ, resulting in large differences in the resulting trees. Only the attribute with the largest information gain rate is selected as the root node when constructing the decision tree model, and the process is repeated recursively until the tree is complete. The expected information after dividing the data samples by attribute A is

$E(A) = \sum_{j=1}^{v} \frac{s_{1j} + \cdots + s_{mj}}{s}\, I(s_{1j}, \cdots, s_{mj})$ (1)
When attribute A divides the sample set of the current node, the information entropy is as follows:

$I(s_1, s_2, \cdots, s_m) = -\sum_{i=1}^{m} p_i \log p_i$ (2)
Building the decision tree model is the process of producing the decision tree: first, analyze the training data set and obtain the information gain rate of each attribute according to the formulas above; select the attribute with the largest information gain rate as the root node; then analyze the remaining data recursively to find the next node, until the construction of the decision tree is complete.
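The following short Python sketch mirrors formulas (1) and (2) and the root-node selection rule described above; the toy attribute values and labels are hypothetical, and the gain-ratio function shown is one common (C4.5-style) way of computing the "information gain rate".

```python
import math
from collections import Counter

def entropy(labels):
    """I(s1, ..., sm) = -sum_i p_i * log(p_i), as in formula (2)."""
    total = len(labels)
    return -sum((c / total) * math.log2(c / total) for c in Counter(labels).values())

def expected_entropy(values, labels):
    """E(A): weighted entropy after splitting the samples by attribute A, as in formula (1)."""
    total = len(labels)
    result = 0.0
    for v in set(values):
        subset = [lab for val, lab in zip(values, labels) if val == v]
        result += len(subset) / total * entropy(subset)
    return result

def gain_ratio(values, labels):
    """Information gain divided by the split information (one reading of 'information gain rate')."""
    gain = entropy(labels) - expected_entropy(values, labels)
    split_info = entropy(values)
    return gain / split_info if split_info else 0.0

# Toy training-evaluation records (hypothetical): choose the root attribute by largest gain ratio.
data = {
    "A": ["A1", "A1", "A2", "A2", "A3", "A3"],
    "B": ["B1", "B2", "B1", "B2", "B1", "B2"],
}
label = ["pass", "pass", "fail", "pass", "fail", "fail"]
root = max(data, key=lambda attr: gain_ratio(data[attr], label))
print(root, {a: round(gain_ratio(v, label), 3) for a, v in data.items()})
```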
3 Components of the Virtual Simulation Teaching Platform for Clinical Skills 3.1 Intelligent Simulator and Simulation Equipment Laboratory In recent years, the national clinical skills experimental teaching demonstration center has invested heavily in equipment purchase and software construction. By purchasing and independently developing advanced virtual simulation teaching instruments, optimizing teaching contents, and reforming teaching methods and assessment modes, we have established an intelligent simulator and simulation equipment laboratory covering 19 clinical courses and 104 experimental projects. It is open all day to medical undergraduates, postgraduates, international students, residents and research physicians in Northwest China; the equipment intact rate is 98% and the opening rate is 95%. The laboratory meets the needs of multi-disciplinary clinical experiment teaching and plays an important role in the training of medical talents [3]. For example, the SimMan advanced intelligent human simulator is a comprehensive simulation system driven by advanced computer technology that can simulate the real pathological and physiological characteristics of the human body as well as the cases and treatment scenarios often encountered in clinical practice, with a highly realistic effect. The simulated human has physical signs such as blinking, speaking, breathing, heartbeat and pulse, and pathological and physiological functions of the cardiovascular, respiratory, nervous and urogenital systems. It can simulate in real time the various symptoms, signs and reactions of a real human body to various operations, creating a fully functional clinical simulation teaching environment.
The Da Vinci robot simulation surgery system is simulation training equipment designed specifically for Da Vinci robotic surgery. It is a robotic surgery training system whose main features are three-dimensional virtual reality and standard surgical procedures. The system provides a progressive curriculum from easy to difficult, including grasping and replacing the surgical needle, removing the surgical needle, electrocautery, tissue cutting, tissue retraction, blunt tissue dissection, vascular cutting, knotting and advanced procedure-specific operation modules, in order to cultivate both the manual and the cognitive skills needed to operate a surgical robot in practice. The simulation system reduces the cost of training with the Da Vinci system: it consumes no equipment or materials, does not occupy an operating room, and allows repeated training, so that doctors can pass through the learning cycle of robotic surgery as soon as possible while improving the safety factor of robotic surgery and the cooperation of the surgical team. The virtual operating table system simulates the surgeon's working scene, including a virtual operating table and lights, virtual surgical tools (scalpels, syringes, surgical forceps, etc.), and virtual human models and organs on which users can operate. In the computer-generated virtual surgery environment, trainees can experience and learn how to perform various operations and cultivate the ability to cope with emergencies. The virtual laparoscopic surgery training system is likewise a virtual reality training system that can simulate the basic skills of laparoscopic surgery, such as object transfer, cutting, electrocoagulation, eye-hand coordination and two-handed cooperation; it can also be used for training in laparoscopic cholecystectomy and other minimally invasive operations, and has significant training value for mastering basic laparoscopic techniques.
4 Construction of Three Big Data Platforms Inspired by the construction of big data platforms at other colleges and universities, and considering the actual state of education informatization at our school, this paper constructs an education big data platform from four parts (data platform, data warehouse, data analysis and algorithm recommendation) that fits our school's personnel structure, management mode and application practice. 4.1 Data Platform Java technology is used to ETL the data collected from various data sources, and the processed data are stored in the Hadoop distributed file system (HDFS). The part of the data that needs to be displayed in reports can be stored in MongoDB, as shown in Fig. 2. Python can be used to query and display the report pages quickly, and Storm can be used to process data streams quickly.
Fig. 2. Fast processing data flow
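As a concrete sketch of the report-storage step described in the data platform above, the snippet below writes a few ETL-cleaned records into MongoDB and performs a paged query with Python; the connection string, database, collection and field names are all illustrative assumptions, not details taken from the platform, and a local MongoDB instance is assumed to be running.

```python
from pymongo import MongoClient

# Connection details and collection names are illustrative assumptions.
client = MongoClient("mongodb://localhost:27017")
reports = client["edu_bigdata"]["report_pages"]

# A few ETL-cleaned records destined for report display (hypothetical fields).
records = [
    {"student_id": "2021001", "course": "Anatomy", "score": 88},
    {"student_id": "2021002", "course": "Anatomy", "score": 76},
]
reports.insert_many(records)

# Paged query for the report page: 20 records per page, sorted by score.
page = list(reports.find({"course": "Anatomy"}).sort("score", -1).skip(0).limit(20))
print(len(page))
```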
so as to meet the personalized needs of users. Data warehouse integrates the original scattered data into a new data source according to different topics, and generates a new data interface for platform and business system to call. For example: after entering the data warehouse, the business system data of each department (Department) of a university can integrate data subsets such as school management, student management, teaching management, staff management, scientific research management, asset and equipment management, office management and financial management according to different topics. Students’ scholarship evaluation, teachers’ performance evaluation and year-end evaluation of departments can call these data subsets.
5 Conclusion The virtual simulation teaching platform is an important measure of clinical medicine teaching reform. It can be used not only for the daily teaching and assessment of clinical medicine, but also for the training and simulation exercises of the National College Students' Clinical Skills Competition. It is a beneficial exploration of the application of virtual simulation technology in clinical experiment teaching.
References
1. Hu, X., Pan, K., Zhang, X., et al.: Promote the practice and innovation ability of medical students relying on virtual simulation experiment teaching platform. China Mod. Educ. Equip. (1), 6–8 (2015)
2. Zhang, Y., Tai, W., Shang, B., et al.: Application status and analysis of virtual anatomy laboratory in medical colleges and universities. Anat. Res. 33(4), 310–313 (2011)
3. Chen, H., Shi, S., Wang, J.: Implementation of ideological and political education for medical students. Health News (006), 18 June 2021
4. Wang, Y., Wang, T., Li, X., Jiang, S.: The impact of data security and privacy protection on teaching and learning in the intelligent age: interpretation and mirror of "information security" and "teaching and learning" editions of Horizon Report 2021
Deep Learning Method for Human Emotion Detection and Text Analysis Based on Big Data Shu-yue Zhang(B) Guangzhou College of Technology and Business, Guangzhou, Guangdong, China
Abstract. Affective analysis methods are constantly evolving from shallow learning to deep learning. In deep learning, as the number of layers of a recurrent neural network (RNN) increases, the problem of gradient dispersion arises; the LSTM network model was proposed to solve this problem. This paper proposes a neural network architecture, MultiGRU, constructed from an important variant of LSTM, the GRU (gated recurrent unit). MultiGRU stacks multiple GRU layers to reduce information loss. The experimental results show that the performance of this model is better than that of LSTM and other models. Keywords: Affective analysis · Deep learning · LSTM · MultiGRU
1 Introduction Personalization is the need to add unique and own characteristics on the basis of popularization. It emphasizes the service and demand with individual interest characteristics. Personalized recommendation system is proposed to solve these problems and has been developed rapidly. A complete recommendation system consists of user behavior recording module, user preference analysis module and recommendation algorithm module [1]. Now is an era of information explosion, how to use these massive and complex data has gradually become a hot topic of current research. With the in-depth research and the improvement of computer hardware equipment in recent years, the model based on neural network is more and more popular, and the processing ability of deep learning in artificial intelligence has also been initially improved, which has attracted more and more attention. Deep learning is a branch of machine learning. The concept of deep learning was first put forward by G.E. Hinton of the University of Toronto in 2006. It refers to the process of machine learning based on sample data to obtain multi-level deep network structure through certain training methods.
2 A Review of Text Sentiment Analysis 2.1 An Overview of Affective Analysis Sentiment analysis, also known as sentiment orientation judgment, refers to the analysis and induction of the comment text, inferring the subjective information with emotional
color in the text, and then judging whether the emotion in the information is positive or negative, for example whether people approve of or oppose a hot event. Sentiment analysis technology is widely used in business, public opinion analysis and government decision-making [2]. According to the size of the text, sentiment analysis research is generally divided into three levels: phrase level, sentence level and text level. The research methods can be roughly divided into three categories. The first is based on a sentiment dictionary: the score of a comment is calculated from the sentiment dictionary, negation dictionary and attribute dictionary, and the emotional tendency is judged from the final score. The second is based on syntactic rules: dependency parsing and syntactic structural relations are applied on top of the first approach, and the text to be classified is matched against patterns in a grammar-structure pattern library to judge the emotional polarity of the comments. The third is based on machine learning: sentiment analysis is treated as text classification, the data set is preprocessed, features are extracted according to preset rules, a classifier is trained, and the emotional polarity of the test set is finally obtained. 2.2 Introduction of Machine Learning Models The machine learning approach can be regarded as a three-class classification problem; this paper studies two-class classification, that is, positive versus negative. Classifiers such as maximum entropy, KNN, naive Bayes, conditional random fields and support vector machines are generally used to judge the emotional polarity of text. This paper uses three text classifiers: support vector machine, naive Bayes and logistic regression. The best classification line is not only the one that correctly separates the two classes of data, but also the one that maximizes the classification margin. The equation of the classification line is shown in Formula 1:

$w \cdot x + b = 0$ (1)

When Formula 1 is satisfied, Formula 2 also holds:

$y_i (w \cdot x_i + b) - 1 \ge 0, \quad i = 1, \cdots, n$ (2)

The linear classification problem after a nonlinear transformation is then solved, and the classification decision function is shown in Formula 3:

$f(x) = \operatorname{sgn}\Big(\sum_{i=1}^{n} \alpha_i^{*} y_i K(x_i, x) + b^{*}\Big)$ (3)

Support vector machines implement different algorithms according to the inner-product kernel function used. In existing research, the polynomial kernel function, defined in Formula 4, is commonly used:

$K(x, x_i) = [(x \cdot x_i) + 1]^{q}$ (4)

where q is an integer in the range [1, N].
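A minimal scikit-learn sketch of the classifier described by Formulas (1) through (4) is given below; the tiny corpus and its labels are hypothetical, and scikit-learn's polynomial kernel only corresponds to Formula 4 up to its gamma and coef0 parameterisation.

```python
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.svm import SVC

# Hypothetical comments with positive (1) / negative (0) labels.
texts = [
    "great service very happy",
    "terrible quality very disappointed",
    "happy with the product",
    "disappointed and angry",
]
labels = [1, 0, 1, 0]

vec = CountVectorizer()
X = vec.fit_transform(texts)

# Polynomial kernel: (gamma * <x, x'> + coef0) ** degree, close to Formula 4 with q = degree.
clf = SVC(kernel="poly", degree=2, coef0=1.0)
clf.fit(X, labels)

print(clf.predict(vec.transform(["very happy service"])))
```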
3 Emotion Classification Based on SVM and Complex Sentence Patterns 3.1 Statement of the Problem According to text granularity, sentiment analysis can be divided into text level, sentence level and word level. Sina Weibo is a popular social media and information release platform [3]: it is convenient to post anytime and anywhere, and more and more people like it. Sina's latest data show that the number of registered Weibo users has exceeded 500 million, with monthly active users approaching 200 million and daily active users as high as 100 million. Weibo is constantly permeating the lives of Internet users, and browsing it has become a daily habit. Because of its huge user base, information spreads extremely fast on Weibo and exerts a great influence on public opinion, commercial activities and other areas, so it is worth studying. 3.2 Related Work There are two approaches to sentiment analysis: one based on a sentiment dictionary and one based on machine learning. The dictionary-and-rule approach requires manually collecting emotional words, dividing them into positive and negative evaluation words, and finally building a sentiment dictionary. Chen Xiaodong takes the emotional words in a microblog as features and calculates the emotional value of the whole microblog to obtain its emotional tendency. This rule-and-dictionary approach has great limitations: first, it is difficult to collect a complete sentiment dictionary; second, new popular words keep emerging on the Internet; in addition, some words are ambiguous, showing opposite emotional polarity in different contexts or fields, so they cannot simply be assigned a fixed polarity. Machine learning methods have been widely used because of their excellent performance. How to extract complex rather than simple features, and how to determine which types of features are more valuable, are two key problems for machine learning methods. A large number of feature extraction methods have been proposed, including lexical-syntactic models and many other new models. However, semantic features are seldom considered in sentiment analysis. In fact, syntactic rules and syntactic structural features can explain deep and implicit semantic relations, which is very helpful for sentiment analysis; therefore, sentiment analysis with complex sentence features can obtain better results. 3.3 Construction of Emotional Resources The quality of the data set is directly related to the performance of emotion classification, so the data set must be preprocessed. The website addresses, topics and user mentions in the comments do not contain the user's views and are likely to be noise for word segmentation and part-of-speech tagging, affecting the classification effect, so this information is filtered out of the comments before word segmentation. The jieba word segmentation tool is used to segment the comments and tag parts of speech,
and finally remove stop words. The Harbin Institute of Technology stopword list is used, with conditional words, transitional conjunctions and the like removed from it. This paper puts forward feature extraction rules for conditional and transitional sentence patterns [4], as illustrated in the sketch after this list.
(1) Part-of-speech features, defined as the numbers of nouns, verbs, adjectives and adverbs.
(2) Negation features: count the negation words appearing before an emotional word within a window of 10. If the total count is more than 2, the feature value is 1 and the polarity of the emotional word is reversed; otherwise the value is 0 and the polarity is unchanged.
(3) Degree-adverb features: when degree adverbs appear in front of an emotional word, the emotional intensity of the comment changes accordingly by an intensity-change coefficient.
(4) Conditional and transitional sentence features: when conditional words appear in front of an emotional word, the conditional-sentence feature is set. When only the first kind of transitional conjunction appears in front of the emotional word, the transitional feature value is 1; otherwise it is 0. Count the transitional conjunctions and judge their type: if only the first kind of transition appears within the window of 10 before the emotional word, its polarity is reversed; if only the second kind appears within the window, its polarity remains unchanged; if both kinds appear within the window, the polarity is reversed by the first kind and left unchanged by the second.
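A small Python sketch of rule (2), the negation-window feature, is given below; the lexicons and example sentence are placeholders, and the window size and threshold follow the rule as stated above, which is one reading of the garbled original rather than a definitive specification.

```python
NEGATIONS = {"not", "no", "never", "hardly"}        # illustrative negation lexicon
SENTIMENT = {"good": 1, "bad": -1, "happy": 1}      # illustrative sentiment lexicon (polarity)
WINDOW = 10                                          # window size from rule (2)
THRESHOLD = 2                                        # "more than 2" negations flips polarity

def negation_features(tokens):
    """For each sentiment word, count negations in the WINDOW tokens before it;
    the feature value is 1 (polarity reversed) when the count exceeds THRESHOLD, else 0."""
    features = []
    for i, tok in enumerate(tokens):
        if tok in SENTIMENT:
            window = tokens[max(0, i - WINDOW):i]
            n_neg = sum(1 for w in window if w in NEGATIONS)
            value = 1 if n_neg > THRESHOLD else 0
            polarity = -SENTIMENT[tok] if value else SENTIMENT[tok]
            features.append((tok, n_neg, value, polarity))
    return features

print(negation_features("no never not good but the staff are happy".split()))
```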
4 Deep Learning Deep learning is a class of machine learning algorithms, mainly multilayer neural networks, with the following structural features:
(1) Hierarchy. The first layer is the input layer, the last layer is the output layer, and all intermediate layers are collectively called hidden layers. This hierarchical structure lets the network extract features based on the features extracted by the previous layer; through multi-layer feature extraction, the network can learn high-level abstract features.
(2) Multiple neurons. Multiple neurons in the input layer allow the network to take multiple input variables, one neuron per input variable; multiple neurons in the output layer allow multiple predicted values, one neuron per output variable, so the structure can be applied to a wide range of problems. Multiple neurons in the hidden layers make it possible for the network to learn multi-dimensional features, which further improves the feature expression ability of the model.
(3) Combination of linear and nonlinear operations. Each neuron in layer L + 1 is fully connected with layer L; together with all neurons of the preceding layer it forms a perceptron structure, whose value is the weighted sum of all neurons in layer L passed through a nonlinear activation function. This combination of linear and nonlinear operations gives the neural network strong expressive ability.
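The following PyTorch sketch shows the stacked-GRU idea behind the MultiGRU model mentioned in the abstract: an embedding layer feeding several stacked GRU layers and a linear output layer for binary sentiment classification. The vocabulary size, layer sizes and number of stacked layers are illustrative assumptions, not the paper's settings.

```python
import torch
import torch.nn as nn

class MultiGRUClassifier(nn.Module):
    """Embedding -> multi-layer (stacked) GRU -> linear output: a MultiGRU-style sketch."""
    def __init__(self, vocab_size=5000, embed_dim=128, hidden=64, layers=3, classes=2):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, embed_dim)
        self.gru = nn.GRU(embed_dim, hidden, num_layers=layers, batch_first=True)
        self.out = nn.Linear(hidden, classes)

    def forward(self, token_ids):
        h, _ = self.gru(self.embed(token_ids))
        return self.out(h[:, -1, :])     # use the last time step for classification

# Illustrative batch: four comments of 20 token ids each (values are placeholders).
logits = MultiGRUClassifier()(torch.randint(0, 5000, (4, 20)))
print(logits.shape)  # torch.Size([4, 2])
```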
5 Conclusion Sentiment classification is the basic research underlying text sentiment analysis technology and plays a large role in the upper-level applications of sentiment analysis. This paper studies emotion classification methods based on machine learning in depth: on the one hand, it starts from syntax rules, fully analyses complex sentence patterns, puts forward corresponding rules, and converts conditional and transitional sentence patterns into machine learning features for training; on the other hand, it uses word2vec, a deep learning tool, to train word vectors and combines the word vectors with statistical features to improve the accuracy of emotion classification. Acknowledgements. Young innovative talents project in Guangdong Province in 2020 (No. 2020KQNCX109).
References
1. Zhu, Y., Min, J., Zhou, Y., et al.: Calculation of lexical semantic tendency based on HowNet. J. Chin. Inf. Process. 20(1), 14–20 (2006)
2. Taboada, M., Brooke, J., Tofiloski, M., et al.: Lexicon-based methods for sentiment analysis. Comput. Linguist. 37(2), 267–307 (2011)
3. Zhou, Y., Yang, A., Lin, J.: Construction method of Chinese microblog emotion dictionary. J. Shandong Univ. (Eng. Ed.) 44(3), 36–40 (2014)
4. You, J.: Research on microblog sentiment orientation based on semantic sentiment space model. Jinan University, Guangzhou (2012)
Research on the Whole Process Management System Design of Big Data Construction Project Cost Based on Cognitive Inspiration Li Wang(B) TaiShan University, Tai’an 271000, China
Abstract. In the process of social development, with the rapid development and progress of information technology, it has a direct impact on all fields. In the process of carrying out the project, we can make good use of the information technology to control the cost reasonably and improve the rationality of the project. In the analysis of this paper, based on the current big data environment, the design of the whole process management system of construction cost is analyzed in detail. Keywords: Big data · Construction engineering · Whole process management system design · Information platform
1 Introduction In the process of engineering project development, the application of big data can provide very comprehensive information data for project cost, so it can carry out scientific and reasonable cost work well. For example, based on big data information, we can collect some information data related to project cost, and provide more comprehensive and specific information content for its cost work.
2 Project Cost Information Management Platform In the process of project cost, we can make good use of the technical method of big data to construct the corresponding information platform, and with the progress of the project, make the information platform perfect gradually [1]. In this way, the whole process information data of project cost can be collected and processed comprehensively. The construction of this platform mainly uses computer technology effectively to analyze and process the engineering cost information. Therefore, in the process of building its system, based on the principles of expansibility, compatibility and high performance, it is necessary to ensure that the platform has high reliability and can deal with various engineering projects. Secondly, it is also necessary to analyze and deal with the cost consultation, project cost compilation, budget estimate and so on in the construction process of the platform, so as to ensure that the corresponding function design can be done well in the actual calculation and analysis process.
At the same time, in project cost work, the investment and cost indices are often the key economic and technical indicators of the whole project and are central to project cost analysis. It is therefore necessary to build and operate the engineering cost information platform and, drawing on project information dynamics, industry data, cost data and other aspects, carry out reasonable analysis and estimation so as to achieve effective cost control.
3 Functions of the Project Cost Information Management Platform 3.1 Cost Information Data Collection Cost information is collected mainly on the basis of the actual situation of the project, with statistics and analysis of the basic cost data so that its content is well defined [2]; for example, cost quotas, machinery information and material supplier information can be determined. In addition, during cost analysis it must be ensured that the collected information includes dynamic information, engineering cost consultation and employee information, so that the management of the information data can be completed well during collection and control. It is also necessary to ensure that the project cost information management platform is used for internal collection and combined with external collection, so as to form comprehensive, integrated processing of cost information. When basic data are entered, attention should be paid to the truthfulness and reliability of the information. For external collection, the platform must also link up effectively with external business software so that information sharing and exchange can be realized, and data can be analysed and processed in time during data mapping and storage. Finally, during the exchange of different data, the data must be collected and processed comprehensively. 3.2 Cost Information Data Release In the construction of the engineering cost information platform, the information data must be analysed on the basis of price information so that publication and control can be carried out throughout the construction process; in this way, reliable data can be provided for project cost control [3]. The platform's data and information should also be analysed actively, which can improve the current information release process and ensure the timeliness and authenticity of released information, meeting the actual information needs of construction operation management.
4 Structure of the Whole Process Management System of Construction Project Cost
The whole-process cost management system must be designed rationally according to actual demands and functions. Its architecture basically comprises the server, the data warehouse, data table storage, Sqoop-based data integration and related components. To achieve a reasonable design, the system can be centered on a data integration layer, a data storage layer and a data processing and analysis layer, which allows the platform functions to be optimized comprehensively and the parameters to be configured sensibly. When constructing the engineering cost information management platform, the data integration layer should be guaranteed first: it covers the design of the basic information base, cost indexes, industry data and so on, and realizes the integration of project cost information. The data storage layer uses distributed file storage to classify and process the data effectively. After the information has been classified, selected cost information is analyzed and an information data warehouse is built on top of the existing data; under this warehouse design, the data can be integrated comprehensively. Figure 1 below shows the organizational structure of the platform, and a simplified sketch of the three layers follows the figure.
Fig. 1. Platform organization framework
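The following is a minimal, purely illustrative sketch of the three-layer flow behind Fig. 1 in plain Python. In a real deployment the integration layer would typically be a tool such as Sqoop and the storage layer a distributed file system; the class names, record fields and the simple cost index used here are assumptions, not the paper's implementation.

```python
from collections import defaultdict

class IntegrationLayer:
    """Pulls cost records from several sources (stand-in for Sqoop-style import)."""
    def pull(self, sources):
        for name, rows in sources.items():
            for row in rows:
                yield {"source": name, **row}

class StorageLayer:
    """Stores records grouped by project (stand-in for distributed file storage)."""
    def __init__(self):
        self.warehouse = defaultdict(list)
    def save(self, record):
        self.warehouse[record["project"]].append(record)

class AnalysisLayer:
    """Computes a simple total-cost index per project from the warehouse."""
    def cost_index(self, warehouse):
        return {p: sum(r["cost"] for r in rows) for p, rows in warehouse.items()}

if __name__ == "__main__":
    sources = {
        "cost_quota_db": [{"project": "P001", "cost": 120.5}],
        "supplier_feed": [{"project": "P001", "cost": 80.0}, {"project": "P002", "cost": 64.2}],
    }
    storage = StorageLayer()
    for rec in IntegrationLayer().pull(sources):
        storage.save(rec)
    print(AnalysisLayer().cost_index(storage.warehouse))
```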
4.1 Engineering Data Standards In order to ensure the stable operation of the project cost information management platform, it is necessary to ensure the rationality of the engineering data standard. In the
process of overall management, the most prominent problem is the information island, which makes it necessary to distinguish project cost data during sub-project measurement and in the related calculation and control processes. With a unified standard, the data can be managed well and the time and cost of information processing can be reduced as far as possible [4]. For example, the graphics and text describing engineering materials can be represented in a common form, so that the unified standard can be expressed through engineering data matrices such as

$$W = \begin{bmatrix} w_{11} & w_{12} & \cdots & w_{14} \\ w_{21} & w_{22} & \cdots & w_{24} \\ \vdots & \vdots & \ddots & \vdots \\ w_{81} & w_{82} & \cdots & w_{84} \end{bmatrix} \quad (1)$$
4.2 Build a Distributed Database
When constructing the project cost information management platform, a distributed database is usually required so that different kinds of data can be handled and analyzed well, and an SQL database should be used actively to organize the project cost information. Particular attention must also be paid to the supervision of the data during platform construction. The selection rule in Eq. (2) replaces a stored value $x_i$ with a candidate $u_i$ only when the candidate has a better (lower) fitness value:

$$x_i = \begin{cases} u_i, & \text{if } fit(u_i) < fit(x_i) \\ x_i, & \text{otherwise} \end{cases} \quad (2)$$
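A minimal sketch of the greedy replacement rule in Eq. (2) is given below; the fitness function is a made-up example (the paper does not specify one), so both it and the sample values are assumptions.

```python
def fit(x):
    # Hypothetical fitness: distance of a cost estimate from a target value of 100.
    return abs(x - 100.0)

def greedy_update(x_i, u_i):
    """Apply Eq. (2): keep the candidate only if it improves (lowers) the fitness."""
    return u_i if fit(u_i) < fit(x_i) else x_i

if __name__ == "__main__":
    print(greedy_update(x_i=120.0, u_i=105.0))  # candidate 105.0 replaces 120.0
    print(greedy_update(x_i=101.0, u_i=140.0))  # stored value 101.0 is kept
```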
4.3 Mining of Project Cost Data
To give full play to the whole-process management platform for engineering cost information, its data analysis capability must be improved so that the intrinsic value of the information can be mined. In operation, the focus is on extracting and refining this value, so that data mining can be carried out as part of data analysis and the value of the information is fully exploited during processing.
4.4 Visual Information Mining Processing
To further improve analysis capability, scientific and reasonable data analysis can be carried out on the distributed database with the support of visual analysis. This provides relevant staff with more comprehensive information and supplies reliable data for important cost decisions, and such use of information technology meets current project cost needs well.
5 Conclusion
To sum up, in order to improve the rationality of project cost management in future development, active use should be made of the information management platform described above, so that all kinds of data generated during construction can be handled in a targeted and effective way.
References
1. Wu, J.: A study on the evaluation method of environmental pollution prevention and control project cost in construction. Constr. Environ. Sci. Manag. 45(03), 190–194 (2020)
2. Li, C.: Analysis on countermeasures of construction cost audit in market economy. Environ. China Constr. (02), 64–65 (2020)
3. Yang, T.: Analysis on the countermeasures of construction cost audit in market economy environment. Technol. Econ. Market (01), 32–34 (2019)
4. Liu, F.: Analysis on audit problems and countermeasures of construction engineering cost in market economy environment. Hous. Real Estate (05), 23–24 (2019)
Research on Rural Health Care Industry Based on Big Data Computing
Mengmeng Sun(B) and Xiuxia Wang
Shandong Institute of Commerce and Technology, Shandong 250013, China
Abstract. The development of the rural health care industry is an important pillar for achieving the strategic goal of rural revitalization. It is part of the sustainable development strategy of rural areas and has gradually become a hot issue in recent years. This paper uses econometric methods and the CiteSpace software to visually analyze 214 articles on agricultural enterprises in the CSSCI database, in order to show the development context and future trends of rural health care research. It is found that, within research on rural industry, most papers come from the field of economics and management and focus on the operating performance of agricultural enterprises, mainly exploring how internal and external factors affect performance, whereas few papers approach the topic from sociology, political science and other disciplines. In addition, research on agricultural enterprises lacks studies that differentiate the types of rural health care industry. Therefore, segmentation of the rural health care industry and cross-disciplinary research will be future research directions. Keywords: CiteSpace · Rural health care industry · Visual analysis
1 Introduction
With the wide spread of the green health concept, the rural tourism industry has developed rapidly; in 2018 more than 1.6 billion people participated in rural experience and health tourism. The rural health care industry provides products and services based mainly on healthy agricultural products and agricultural scenery, or on forestry, animal husbandry, fishery and other integrated industries that have health care properties or supply raw materials to the health care industry, for example agricultural sightseeing, rural leisure and fruit tree planting. In the context of the Healthy China strategy, the rural health care industry, as an important part of ecological civilization construction, has attracted much attention. In 2017 the central "No. 1 Document" explicitly proposed promoting the integration of agriculture, forestry, health care, tourism and other industries. In academia, rural health care has also become a research hotspot in recent years, and many scholars have carried out related research and obtained useful results [1]. The CiteSpace software focuses on mining key nodes in the development of a discipline, tracking research hotspots and exploring new research trends. It
can provide multi-functional network models, including cooperation network analysis, keyword co-occurrence and clustering maps. Its convenient, advanced functions and clear analysis results have attracted much attention in bibliometrics. In recent years many scholars have applied it to tourism research, ranging from bibliometric analysis of macro topics such as tourism management and the tourism value chain to visual analysis of specific tourism types such as high-speed rail tourism and rural tourism. With the help of CiteSpace, this study systematically reviews the current situation, hot topics and future directions of domestic rural health care research, aiming to promote the steady development of research on the domestic rural health care industry [2].
2 Data Sources and Research Methods
2.1 Data Sources
To comprehensively analyze the research trends and frontiers of the rural health care industry in China, the literature data are taken from the core journal collection of the CNKI database; the retrieval was performed on April 1, 2019. To ensure the integrity and authority of the data, SCI-source journals, EI-source journals, core journals, and CSSCI and CSCD journals were selected. A total of 990 articles were retrieved; after excluding meeting abstracts, reports, news items and similar records, 901 related articles were retained and saved in RefWorks format.
2.2 Research Ideas and Methods
CiteSpace V5.2 is used for knowledge-mapping analysis, with frequency and centrality as the main discrimination indicators. It is generally accepted that a node with centrality greater than 0.1 and a relatively high citation frequency is a key node. The paper first analyzes the temporal distribution of the literature, the publishing institutions, the change in the number of papers per author and the cooperation relationships, and visualizes them. It then uses keyword clustering analysis with LLR labeling to explore the hot spots in this field. Finally, keywords such as beautiful countryside, urban–rural integration, rural tourism and rural industry are selected for a second co-occurrence analysis with rural health care, and a series of visual knowledge maps are drawn to explore the evolution of the rural revitalization frontier. The small sketch below illustrates the frequency and centrality criterion on toy data.
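The sketch only mimics the idea behind the keyword co-occurrence and centrality analysis that CiteSpace performs; it is not the CiteSpace algorithm itself, and the toy keyword sets and the off-the-shelf networkx library used for betweenness centrality are assumptions made for illustration.

```python
from collections import Counter
from itertools import combinations
import networkx as nx

# Toy data: keyword sets of four hypothetical papers.
papers = [
    {"rural health care", "rural tourism", "farmhouse"},
    {"rural health care", "forest health care"},
    {"rural tourism", "farmhouse", "rural environment"},
    {"rural health care", "rural environment"},
]

freq = Counter(k for kws in papers for k in kws)  # keyword frequency

# Build the co-occurrence network: an edge for every keyword pair in the same paper.
G = nx.Graph()
for kws in papers:
    for a, b in combinations(sorted(kws), 2):
        w = G.get_edge_data(a, b, {}).get("weight", 0) + 1
        G.add_edge(a, b, weight=w)

centrality = nx.betweenness_centrality(G)
for kw in freq:
    tag = "key node" if centrality.get(kw, 0) >= 0.1 else ""
    print(f"{kw}: freq={freq[kw]}, centrality={centrality.get(kw, 0):.2f} {tag}")
```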
3 Literature Characteristics of Rural Health Care Research
3.1 Time Distribution and Subject Distribution of the Literature
The change in the number of papers (Fig. 1) reflects, to a certain extent, the development of rural health care research in China. In the 1980s forest bathing emerged in Germany, Japan, South Korea and other countries, and some domestic scholars began to recognize its importance. Limited by the level of agricultural economic development and theoretical understanding at the time, however, the relevant research was intermittent, produced few solid results, and progressed slowly for a long period. After the turn of the century the rural health care industry received more attention: forest recuperation and forest health care gradually broadened into rural health care, and domestic interest kept increasing. In 2015 the national 13th Five-Year Plan outlined the development of eco health care services and products, and in 2016 General Secretary Xi Jinping addressed the national health conference and urged accelerating the construction of a healthy China and promoting the rural health care industry at the national level. Since then rural health care has been widely discussed, research on its connotation and development modes has grown rapidly, and it has become a hot topic in domestic academic research. According to the statistical analysis of discipline distribution (based on the CNKI subject classification), the related literature comes mainly from 15 disciplines, including forestry and tourism, together with interdisciplinary fields such as agricultural economics, the service economy and medicine; however, the amount of interdisciplinary literature is relatively small and cross-disciplinary exchange is not deep enough. Because the development of the rural health care industry involves the integration of agriculture, forestry, culture, medicine and tourism, more attention should be paid to interaction between disciplines when building the theoretical system [3].
Fig. 1. Time distribution of research literature on rural health care industry
3.2 Visual Analysis
Keywords are a precise summary of the literature, and high-frequency keyword analysis can directly reflect the subject content and hot issues of an academic field.
The importance of a keyword is measured by its frequency and centrality: the higher the frequency, the more attention the keyword has received, and the centrality value is positively correlated with node importance, a node with centrality of at least 0.1 playing an important role in the field. With the network type set to "keyword", the data selection set to the Top 50, the threshold values set to (2, 1, 15), and the time sliced into one-year intervals, a co-occurrence map of domestic rural health care keywords was built. To make the results clearer, Fig. 3 was produced from the original output with Photoshop CS5. The tree-ring color of each node corresponds to the year in which the keyword appeared; the darker the color, the more recent the year. The node for "rural health care" is the largest, indicating that this emerging industry has attracted substantial attention from domestic scholars in recent years, while the clear color transition of the "farmhouse" node indicates that this theme appeared earlier. Keywords with notable betweenness centrality also include "rural pastoral", "rural environment" and "rural tourism" [4]. To show the branch composition of the field more clearly, a keyword cluster analysis was carried out and the cluster view selected, producing the domestic rural health care keyword cluster map in Fig. 2 (again processed with Photoshop CS5). The modularity value Q = 0.7028 indicates that the clustering structure is significant, and the mean silhouette value S = 0.8747 indicates that the clustering map is reliable; six clusters are generated, namely farmhouse entertainment, agricultural tourism, rural leisure, fruit tree planting, rural pastoral and rural tourism.
Fig. 2. Key word clustering map
Fig. 3. Photoshop CS5 software processing
4 Hot Spot Analysis
Domestic practice and exploratory research: relevant studies show that Sichuan, Hunan and other provinces took the lead in the practice of the rural health care industry in China by relying on their forest resources. Some scholars take farmhouse entertainment in these provinces as examples, explore the practical path of the rural health care industry and put forward development suggestions. Among them, He Binsheng, He Wei and Zhang Wei propose accurate positioning, careful base construction, improved supporting facilities, mechanism innovation and stronger marketing, and stress that development should highlight parks with local characteristics. Li Ziwen and Peng Luming, after analyzing the resources of a scenic area and the current state of its tourism development, likewise argue that local landform and resource characteristics should be combined to create health care tourism products consistent with rural character. These case studies play a demonstrative and promotional role for other scenic areas with potential for rural health care development. However, most current cases concern forest parks and nature reserves, with little attention paid to suburban rural health care; the existing cases are not yet iconic enough, and the relevant experience still needs to accumulate as the industry develops. In view of these shortcomings of domestic practice, some scholars argue that, while drawing on international experience, China should strengthen guidance on industry classification and project zoning, improve the relevant standard system and establish an institutional framework. Others propose forming a policy support system covering planning, science and technology support, and loans and financing, encouraging public–private cooperation and franchised investment and financing to promote joint construction, and improving community cooperation and sharing mechanisms.
5 Conclusion
Based on 255 CNKI articles analyzed with CiteSpace 5.1.R4, this study finds the following. First, in terms of literature characteristics, the earliest forest bathing literature appeared in 1984, after which research stagnated for a period; after 2016 the rural health care literature grew explosively, and related research is expected to keep growing. Rural health care integrates agriculture, forestry, culture, medicine and tourism, so exchange and interaction between disciplines needs to be strengthened, and since most papers appear in general journals, the quality of the literature also needs to be improved. Second, from the visual analysis, keywords such as farmhouse entertainment, rural environment, rural tourism and industrial development are important nodes connecting different research hotspots and represent the key themes of each stage of rural health care industry research. The hotspots focus on the exploration of rural health care practice, the connotation of rural health care, rural health care tourism and the development of the industry. At the same time, although many scholars have engaged in rural health care research, no representative core author group has yet formed.
References
1. Wu, H.J., Dan, X.Q., Liu, S.H., et al.: Forest rehabilitation: concept connotation, product type and development path. J. Ecol. 37(7), 2159–2169 (2018)
2. Cheng, Q., Zhou, L.: The research status and path of microblog information aggregation in China – based on the visual perspective of CiteSpace. Mod. Intell. 37(3), 153–160 (2017)
3. He, B., He, W., Zhang, W., et al.: Discussion on the development of forest health industry relying on National Forest Park – Taking Sichuan Kongshan National Forest Park as an example. Sichuan For. Sci. Technol. 37(1), 81–87 (2016)
4. Chen, Y., Chen, C., Liu, Z., et al.: The methodological function of CiteSpace knowledge map. Sci. Res. 33(2), 242–253 (2015)
Product Packaging Design Based on Cognitive Big Data Analysis
Li Yaxin(B), Lu Zheng, and Zhang Fan
China University of Geosciences, Hongshan District, Wuhan 430070, China
Abstract. With the rapid development of the Internet and big data technology, online shopping platforms have become collections of big data. Competition in the online shopping market is fierce, and packaging design is one of the important competitive advantages of online goods, so big data also plays an important role in commodity packaging design. Studying commodity packaging design under big data can inspire new design methods and ideas for online shopping packaging. On the one hand, this paper emphasizes the importance of big data in online shopping commodity packaging design; on the other hand, it shows how the use of big data in packaging design can lead to new ideas and new methods. Keywords: Big data · Online shopping goods · Packaging design
1 Introduction
With the rapid development of the Internet, cloud computing and the Internet of Things, mobile devices, computers and wireless sensors are generating and recording data all the time, and data volumes have begun to grow explosively. According to IDC monitoring statistics, the global data volume reached 1.8 ZB in 2011 (1 ZB equals one trillion GB), doubles roughly every two years, and was estimated to reach 40 ZB by 2020. Human society has thus entered a digital era, the "big data era". Big data is a disruptive technological revolution that will have a huge impact on enterprise decision-making, organization and business processes, as well as on personal lifestyles. Packaging is a kind of business culture: each business model produces its corresponding form of packaging. The development of the Internet has changed the traditional business model, and the emergence and development of online shopping has changed traditional product packaging design. In the era of big data, the mining, analysis and application of data will bring innovation to the design thinking for online shopping commodity packaging. In many enterprises, the use of big data has already, almost imperceptibly, penetrated every aspect of online shopping product packaging, and research on applying big data to packaging design is the future trend of this field. In order to
solve the current problems of online shopping product packaging design, scientific methods should be applied [1]. This paper elevates the practice of using big data to guide online shopping product packaging design to theoretical research, which has practical significance for promoting the development of such design.
2 The Concept and Characteristics of Big Data
Big data is an abstract concept: it refers to massive data sets whose value cannot be exploited with conventional software. The concept has two aspects. First, big data is a variable; with the passage of time and the development of technology, it will keep growing. Second, the value of big data lies not in the sheer volume of data but in its professional processing, which is what releases its great value. On this basis it is generally accepted that big data has four basic characteristics: large volume, variety, high velocity and low value density [2].
2.1 Large Scale of Data
Big data is a collection of massive data that is growing rapidly. Many factors drive this expansion, the biggest being the development of the Internet: the number of users keeps growing, Internet technology keeps improving, and computers can capture online information and record large amounts of data. In the past, data were obtained mainly through surveys and sampling, so the amount of data was limited by manpower, time, space, economics and other factors. Today, every keyword typed into a search engine can be recorded as data. By contrast, the amount of data in the Internet era is so large that the era can also be regarded as the era of data. In the big data era, data affect every aspect of our lives; people have begun to realize the importance of data, and the capabilities for data acquisition, processing and analysis have improved greatly.
2.2 Many Kinds of Data
There are many kinds of data; the most common distinction is between structured and unstructured data. Structured data is the traditional form: it is defined in advance and stored in tables, which makes it standardized and convenient for humans and computers to store, process and query. Unstructured data has arisen with the development of the Internet, as users and institutions produce large amounts of digital information such as video, audio, text and pictures. Because each uploading terminal is an independent individual, this information is not controlled centrally and is difficult to store and process in a unified way. At present, however, unstructured data accounts for more than 75% of all data, and its growth rate is roughly ten times that of structured data. Because of its variety and scale, unstructured data is a product of the times and provides great value for the development of society.
2.3 Fast Data Processing Speed
The key value of big data lies in the ability to process data quickly, which is also its biggest difference from traditional data. With the development of cloud computing, Internet of Things technology, sensors and radio-frequency technology, releasing, recording and generating data has become simpler and simpler, and the resulting mass of information poses challenges for processing. The explosive growth of big data requires a corresponding acceleration of processing speed so that useful information can be mined quickly and effectively and the value of big data realized. In addition, data on the network are updated dynamically, so processing must be timely; real-time processing is an unavoidable problem for big data technology, because otherwise the data lose their value. In e-commerce, for example, data are updated in real time every day, and enterprises must acquire and process the dynamic data stream quickly in order to obtain the most valuable commercial information, make business decisions promptly and seize market opportunities.
3 Analysis of the Current Situation and Problems of Packaging Design for Online Shopping Goods
3.1 Current Status of Packaging Design for Online Shopping Products
(1) The design tends to be concise. Facing fierce online competition, online shopping commodity packaging is developing rapidly, and the difference between its design and traditional packaging design is increasingly obvious. Online display has greater advantages than traditional shelf display: commodities and their packaging can be shown from every angle, and even presented with multimedia video and dynamic pictures, so consumers can easily choose the products they need from the product display.
(2) More emphasis on visual effects. Online shopping platforms carry many kinds of products whose arrangement and display effects are similar. Consumers' browsing time is often fragmented and short, so visual experience has become one of the decisive factors in purchase decisions across the full range of goods, and the visual display of packaging in online stores strongly influences consumers' willingness to buy. Because of the virtual nature of online sales, the visual display of packaging takes two forms: packaging design renderings and product packaging photography. For cost reasons, the display pictures are often photographs of the packaging design. As shown in Fig. 1, wine packaging needs to convey the texture of glass to raise the perceived quality of the product, and photography solves this problem; high-definition pictures also show packaging details fully and have strong visual impact.
(3) Emphasis on interaction and experience design. More and more attention is paid to the interactive experience with consumers in the sales packaging of online shopping
goods. Receiving the goods is the beginning of the interactive experience. With the rapid spread of network information, word-of-mouth marketing has always been an important factor in online sales, so a good packaging experience design is the key to word of mouth, and interaction with consumers is therefore emphasized in the sales packaging design of online goods.
Fig. 1. Red wine packaging design
3.2 Problems in Packaging Design of Online Shopping Products
(1) The display effect is poor. Because of the virtual nature of the online shopping environment, consumers cannot touch the real packaging and can only see it through the platform, and the display effect is often poor. First, differences between terminal display screens and the influence of shooting angle, background, content and method mean that the packaging shown on the platform can differ considerably from the real product, especially in color and size, so the real packaging is difficult to reproduce faithfully, which affects purchase decisions. Second, the packaging display carries a great deal of promotion and brand information, which weakens its effect. Finally, the differing display formats, image sizes and resolutions of the various shopping websites result in unclear packaging pictures that fail to show packaging details, affecting the overall display effect.
(2) Brand recognition is low. Packaging protects goods, promotes sales and facilitates circulation, and it also builds corporate brand culture. With the development of e-commerce, logistics packaging has become a powerful vehicle for the corporate brand image, easily forming consumers' first impression of a product and its brand. Fierce competition among online goods and the plagiarism and imitation of sales packaging make businesses pay more and more attention to
the brand protection of online shopping product packaging and to increase brand publicity. Even so, the brand publicity and brand recognition achieved through online shopping packaging remain low.
(3) The protection function is imperfect. Protection is the basic function of commodity packaging: it shields the goods from physical, chemical, mechanical, weather and regional environmental damage during circulation. In online shopping this function is magnified. From placing an order to receiving the parcel, what matters most to consumers is the integrity of the goods, which directly affects the credibility of the seller and consumer satisfaction; reliable logistics and distribution are thus a prerequisite for online shopping. The protection offered by online shopping packaging, however, is still imperfect, especially in logistics packaging. According to the big data survey shown in Fig. 2, damage in express delivery accounts for 25% of consumers' complaints, making it the second largest cause of complaints after service attitude [3].
(4) The packaging design is not standardized. Packaging design should be based on a full understanding of the attributes of the goods and on their characteristics and sales environment. Online shopping packaging often exhibits non-standard design: goods are not clearly classified and their characteristics are not understood, which leads to unreasonable choices of packaging materials. In addition, complicated and irregular structural designs increase the difficulty of transport and storage, and unclear instructions on the logistics packaging lead to damage during transportation.
Fig. 2. Investigation on the causes of consumers’ complaints against express companies
4 Research on Online Shopping Commodity Packaging Design Based on Big Data
4.1 Big Data Guides Online Shopping Commodity Packaging Design
According to the characteristics of online sales, online shopping commodity packaging can be divided into sales packaging and logistics packaging. Sales packaging serves the purpose of selling: in online shopping it directly affects consumers' purchase decisions, and good sales packaging not only helps consumers identify products and enhances the brand image but also attracts attention and stimulates consumption. Logistics packaging is designed for transportation and must consider processes such as transport, stacking, storage and sorting. It requires high safety performance, since it forms the physical link between consumers and businesses, and its design determines whether the goods reach consumers intact; it is therefore an important part of online shopping packaging design. Traditional packaging design schemes were based on market research together with the designer's subjective intent and personal experience, whereas the whole design process now receives real-time data feedback. Under the guidance of big data, packaging design becomes more scientific and accurate, and it comprises four parts: theoretical analysis, data collection, data analysis and design guidance [4].
4.2 Materials and Technology
Packaging material selection must meet packaging needs in terms of the container, decoration, printing, transportation and other aspects; for each product category, materials are chosen according to the characteristics of the product. At this stage, data research and analysis make it possible to grasp accurately how the market and consumers recognize and prefer different materials. The packaging materials most commonly used at present are paper, plastic, metal and glass. According to the big data sampling survey of online shopping packaging materials shown in Fig. 3, paper packaging accounts for 29%, plastic for 35%, metal for 15%, glass for 16%, wood for 3%, and other materials for 2%.
4.3 Modeling and Structure
Packaging form and structural design can also be optimized according to the data. The top 20 food sellers by sales volume on the Tmall platform were selected for study. As shown in Fig. 4, 43% of the food inner packaging uses double- or multi-layer packaging and 57% uses single-layer packaging; 77% of the packages are square (of varying sizes) and 23% are cylindrical (with widely varying diameters and heights).
Fig. 3. Sampling survey of packaging materials for online shopping
Fig. 4. Packaging modeling and structure pie chart
4.4 Visual Design
Visual design is one of the important components of commodity packaging design: it organizes the various design elements organically according to the laws of formal beauty so that the design elements and relevant information are presented to consumers in a reasonable way. It mainly includes graphic design, color design and text design. The visual design of online shopping packaging combines the characteristics of online shopping with the visual design of traditional offline packaging; the two have both similarities and obvious differences, and designers should study these differences specifically when designing packaging for online goods.
5 Conclusion
The mining, analysis and application of big data has become a source of strong competitiveness for enterprises. Big data not only has an important impact on network marketing but also affects the packaging design of online goods. In the competitive online shopping market, enterprises publicize their brand culture and enhance the competitiveness of their goods through packaging design. As a collection
of big data, the online shopping platform provides enterprises and designers with a large amount of information and is a rich treasure house of resources for the early market research behind online shopping packaging design.
References
1. Chen, L.: Packaging Design. China Youth Publishing House, Beijing (2006)
2. Tu, Z.: Big Data. Guangxi Normal University Press, Guangxi (2013)
3. Xuan, W.: Network marketing mode under the background of big data. China Sci. Technol. Inf. (17) (2015)
4. Wei, J.: Packaging Design. China Construction Industry Press, Beijing (2010)
AI-Assisted Cognitive Computing Approaches
Establishment of Economic Term Bank Under the Background of Artificial Intelligence
Lingzhi Hu(B)
Jiangxi University of Applied Science, Nanchang, Jiangxi, China
Abstract. Economic terminology is the most core and basic part of economics and the product of accumulated knowledge in the field. The construction and application of a term base is a necessary step in large-scale translation projects, and as an important, professional, systematic and complete knowledge platform, its establishment matters greatly. Against the background of artificial intelligence (AI), this paper studies the establishment of an economic terminology database. A controlled experiment is used: under identical conditions, one economic term base is built with AI technology and another with traditional methods, and the two are compared in terms of the construction process and the application effect. The results show that the economic term base built with AI technology obtains an average application score of 94.33, whereas the term base built with traditional methods obtains 81.67, a difference of 12.66. The overall application effect of the AI-based term base is therefore better than that of the traditionally built one, and the efficiency of establishing the term base with AI is also higher than that of traditional methods. Keywords: Artificial intelligence · Economic terms · Term base · Application effect
1 Introduction
Following the mobile Internet, emerging technologies such as big data, the Internet of Things, blockchain, cloud computing and artificial intelligence have developed rapidly and become a new focus of international competition [1, 2]. By simulating human thinking and behavior, artificial intelligence provides intelligent services for efficient production and a high quality of life [3]. With its rapid development, AI has gradually penetrated various fields and can be said to touch every aspect of our lives [4].
With the rapid development of the language service industry in the information age, computer-aided translation is used ever more widely in translation practice, so ensuring the accuracy and consistency of terminology translation, in order to improve translation quality and efficiency, has become an urgent and important problem [5, 6]. The establishment of a term bank plays an important role here that cannot be ignored. Terminology is the conceptual representation of the knowledge accumulated as a discipline's theory and practice develop; a system of subject terms presents the genealogy of disciplinary knowledge intuitively, and that genealogy both describes the framework of the discipline and promotes its knowledge discourse [7, 8]. Economic terms have many characteristics, including brevity, unity, topicality and translatability, and are influenced by many factors, so establishing their lexicon is a demanding task [9, 10]. Under the background of artificial intelligence, studying the establishment of an economic terminology database is of great significance for the rapid translation of economic terms. This paper first describes the characteristics of economic terms and the key points of constructing an economic term base, and then uses a controlled experiment in which an economic term base is built with AI methods and with traditional methods, comparing the construction process and the application effect of the two [11]. The results show that using AI technology to establish the economic term base is both more efficient and more effective in application [12].
2 Economic Terms and the Establishment of the Term Bank
2.1 Characteristics of Economic Terms
An economic term is a concept and linguistic expression with distinctive characteristics or meanings arising from how rational agents, within social relations, use scarce resources to produce goods and provide services and distribute them to members or groups of society for present or future consumption. Beyond their general properties, economic terms have the following characteristics.
Intuitiveness. Economics is closely related to mathematics, and many viewpoints are explained through mathematical concepts or coordinate axes to strengthen the credibility of the analysis, for example vertical intercept, horizontal intercept, demand curve, price floor, price ceiling, shifts in the demand curve and production possibility frontier. Such terms, expressed through mathematical concepts or coordinate graphs, let readers form intuitive pictures in their minds and grasp the economic concept quickly.
Naming. Economics is a social science introduced into China from the West, and Chinese translations of Western economics textbooks are the most frequently used. Many theories and concepts in economics were proposed by Western scholars, so many terms are named directly after their proposer. For
example: Giffen good, Coase theorem, Condorcet paradox, Nash equilibrium, Ricardian model and Taylor’s rule. Antithesis. “The meaning of words, simply speaking, means that the concepts of words contradict and contradict each other, that is, the concepts expressed in words contradict or contradict logically”. Therefore, in economic work, we usually need to compare economic relations with the help of a group of contradictory and opposite things, so terms often appear in pairs. For example: Law of demand and law of supply, explicit cost and implicit cost, loss of exporting country and loss of importing country). These terms have conflicting or opposite meanings. Seriality. Seriality is also relatively simple to understand. In short, a new word will have a chain reaction after entering the economic term, resulting in a series of terms related to the central word. For example, marginal and production bring out a series of economic terms, such as marginal benefit, marginal changes, marginal cost curve and marginal product. 2.2 Key Points of the Construction of Economic Terminology Database Term bank, one of the functions of CAT tools, is widely used in translation projects because of its time-saving, labor-saving and unified terminology. Term base is an efficient and convenient tool for project management, which plays an irreplaceable role in optimizing project management and promoting terminalization. The following points should be paid attention to in the establishment of the term bank of economics. It is very important to build a term bank and collect the materials. The material of terminology database plays a decisive role in the quality of terminology database. The collection of materials should pay attention to the following aspects. First of all, we should collect a certain amount of material as the basis for the establishment of terminology database. It is difficult to ensure the accuracy and completeness of the term library without enough language materials. Therefore, in the process of establishing the term base, we should divide the work of the project and collect the term in a targeted and planned way. The software proficiency of project members determines the efficiency of project implementation. Because the establishment of term bank needs many kinds of computer software. Therefore, the relevant project team members need to be proficient in using the relevant software. If the software is not used skillfully, it will affect the project progress, or improper software operation will cause the loss of data. Therefore, the project team leader should ensure that the selected project members have a certain degree of software operation ability at the initial stage of the project establishment, and should conduct pre project training for the project members to ensure the normal and smooth implementation of the project. Multiple proofreading is the key factor to ensure the quality of term base. The term bank will provide reference for translators, so the accuracy of term bank is very important. The inaccuracy of term base will directly lead to the inaccuracy of translation. Therefore, in order to ensure the accuracy of the term base, three reviews should be carried out. The first review should check whether there is any omission to ensure the integrity of the term. The second review ensures the accuracy of terminology translation, and the
The first review checks for omissions to ensure the completeness of the terms. The second review ensures the accuracy of the terminology translation, with reviewers checking the quality of each term. The third review ensures that there are no duplicate terms in the term base: the duplicate-finding function is used to display duplicate terms so that they can be deleted manually.
Force majeure is also an important factor. The term base depends on the software operating environment, and software crashes, system crashes and data loss cannot be ruled out, so such risks should be fully considered, and the data should be backed up regularly to avoid data loss and unnecessary damage.
Overall planning by the project manager is the effective guarantee of implementation. The term base should be built in an orderly way, which requires the project manager to plan the whole implementation. First, the division of labor matters: people good at finding resources, technical operation and proofreading should be allocated sensibly to achieve an optimal use of resources. Second, task deadlines should be defined and progress checked regularly. Finally, a project management system should be established, with contingency plans for emergencies such as software or computer crashes. In a word, establishing the economic term bank requires the guidance of the project manager and the active cooperation of all project members. A small sketch of the duplicate-term check mentioned above follows.
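The sketch below shows one minimal way such a check could work: it lists exact duplicate entries and source terms with conflicting translations, then keeps only the first occurrence of each pair. The bilingual entry format is an assumed example, not the format of any particular CAT tool.

```python
from collections import Counter

# Hypothetical (source term, target term) entries in an economics term base.
entries = [
    ("marginal cost", "边际成本"),
    ("demand curve", "需求曲线"),
    ("marginal cost", "边际成本"),   # exact duplicate entry
    ("marginal cost", "边际费用"),   # same source term, conflicting translation
]

def report_duplicates(entries):
    """List exact duplicates and source terms that have more than one translation."""
    exact = [e for e, n in Counter(entries).items() if n > 1]
    by_source = {}
    for src, tgt in entries:
        by_source.setdefault(src, set()).add(tgt)
    conflicts = {s: t for s, t in by_source.items() if len(t) > 1}
    return exact, conflicts

def deduplicate(entries):
    """Keep only the first occurrence of each (source, target) pair."""
    seen, cleaned = set(), []
    for e in entries:
        if e not in seen:
            seen.add(e)
            cleaned.append(e)
    return cleaned

if __name__ == "__main__":
    exact, conflicts = report_duplicates(entries)
    print("exact duplicates:", exact)
    print("conflicting translations:", conflicts)
    print("cleaned term base:", deduplicate(entries))
```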
3 Experimental Method and Evaluation Index
3.1 Experimental Methods
Under identical conditions, the construction process and the application effect of the two methods are compared.
3.2 Research Indicators
Establishment phase. The two stages of building a term base, term recognition and term extraction, are compared, and the efficiency of each stage is analyzed, highlighting the advantages of building the economic term base with AI technology.
Application effect. The two term bases are then applied under the same conditions and their application effect is evaluated, using translation accuracy, translation quality and translation speed as the indexes; each index is scored out of 100, with 85 or above considered excellent.
4 Results of the Establishment of the Economic Terminology Database Under the Background of Artificial Intelligence
4.1 Analysis of the Establishment of the Economic Terminology Database
The establishment of the term database can be divided into two steps, term recognition and term extraction. The efficiency of traditional term base establishment is then compared with
that of establishment using artificial intelligence technology. The results are shown in Table 1 and Fig. 1.

Table 1. Establishment efficiency of the economic term bank

Step of establishing the term base | Traditional method: time (s) | Traditional method: efficiency | AI-based method: time (s) | AI-based method: efficiency
Identification of terms            | 5                            | Slow                           | 0.06                      | High
Term extraction                    | 4                            | Slow                           | 0.05                      | High
Fig. 1. Comparison of the efficiency of establishing the economic term bank (bar chart of the times listed in Table 1)
Table 1 and Fig. 1 show that, in the term recognition stage, the AI-based approach is rated highly efficient and takes only 0.06 s, whereas the traditional method is rated slow and takes as long as 5 s. In the term extraction stage, the AI-based approach is again rated highly efficient at only 0.05 s, while the traditional method is comparatively slow at 4 s. The efficiency of using AI technology to establish the economic term database is therefore clearly higher.
4.2 Comparison of the Application Effect of the Two Term Bases
The economic term bases built with the two methods are applied and their effects compared by scoring. The results are shown in Table 2 and Fig. 2.

Table 2. Application effect of the established economic term bank

Application effect      | Traditional method | AI-based method
Accuracy of translation | 85                 | 93
Translation quality     | 82                 | 94
Translation speed       | 78                 | 96
Fig. 2. Comparison of the application effects of the two economic terminology databases (bar chart of the evaluation scores in Table 2)
Table 2 and Fig. 2 show that the application effect of the AI-built economic term database is better than that of the traditionally built one. The traditional term database scores 85 for translation accuracy, 82 for translation quality and 78 for translation speed, an average of 81.67. The economic term database established with artificial intelligence technology
scores 93 for translation accuracy, 94 for translation quality and 96 for translation speed, an average of 94.33. Overall, the average application score of the AI-built term database is 12.66 points higher than that of the traditionally built one, showing that the economic term database established with AI technology performs well in application.
5 Conclusions
Economic terminology is the carrier of economic theory and concepts, the crystallization of the subject's knowledge, and the basis for its development and deeper research. With the rapid development of science and technology, artificial intelligence has come to affect our lives widely. This paper studies the establishment of an economic term bank under the background of artificial intelligence: an economic term database is built with AI technology and compared with one built by traditional methods, in terms of both the construction process and the application effect. The results show that establishing the economic term database with AI technology is more efficient and performs better in application.
References
1. Lu, H., Li, Y., Chen, M., et al.: Brain intelligence: go beyond artificial intelligence. Mob. Netw. Appl. 23(7553), 368–375 (2017)
2. Moravík, M., Schmid, M., Burch, N., et al.: DeepStack: expert-level artificial intelligence in no-limit poker. Science 356(6337), 508 (2017)
3. Simon, D., Olga, S.E., Simon, R., et al.: Artificial intelligence-assisted online social therapy for youth mental health. Front. Psychol. 8(June), 796 (2017)
4. Spiro, R.J., Bruce, B.C., Brewer, W.F.: Theoretical issues in reading comprehension: perspectives from cognitive psychology, linguistics, artificial intelligence, and education. Reading Teach. 3, 368–373 (2017)
5. Semenova, S.N., Aksyutenkova, L.G.: Cognitive-pragmatic interpretation of linguistic personality on the example of market-economic terminology. RUDN J. Lang. Stud. Semiot. Semant. 11(4), 760–774 (2020)
6. Ahibalova, T., Miroshnychenko, V., Plotnikova, N.: Economic terminology: analysis of translation peculiarities. Hum. Sci. Current Issues 1(29), 10–15 (2020)
7. Husnutdinov, D.H., Sibgaeva, F.R., Sagdieva, R.K., et al.: Stages of formation and development of economic terminology in the conditions of integration into world economic community. Int. J. Civil Eng. Technol. 10(2), 1418–1424 (2019)
8. Silaki, N.: Towards the standardization of economic terminology. Bankarstvo 47(2), 100–107 (2018)
9. Fabijani, I.: English word-formation types in Croatian: the case of morphological adaptation of noun phrases in economic terminology. Engl. Lang. Overseas Perspect. Enquiries 14(2), 9 (2017)
520
L. Hu
10. Nikolayenko, V.: Interactive technologies in the process of learning Russian professional terminology by foreign students majoring in economics. Intercultural Commun. 5(2), 101–117 (2018) 11. Jing, M.: The establishment of the academic language database based on the computer aided translation technology and economy in the background of artificial intelligence. Comput. Knowl. Technol. 16(27), 4–6 (2020) 12. Xuan. J. Econ. (06), 9 (1992). (in Chinese)
Development and Construction of Traditional Apparel Customization App Under the Background of Artificial Intelligence

Li Wang(B)

National Experimental Teaching Demonstration Center for Fashion Design and Engineering, Dalian Polytechnic University, Dalian 116000, Liaoning, China
Abstract. With the development of artificial intelligence (AI) and the popularization of smartphones, mobile-client APP marketing has become a new way for apparel companies to carry out product promotion. An APP helps customers choose more suitable products and promotes consumption. With the continuous improvement of China's international status, traditional costumes appear more and more frequently in major events at home and abroad, showing the charm of traditional Chinese dress to people all over the world. With the development of the times, "customized" design has gradually become popular; among the "ten technologies that will change the future" predicted in American media reports, "personalized customization" ranks first, and its market position is increasingly recognized. The development and construction of traditional apparel customization apps under the background of AI is therefore of great significance, and studying it is the purpose of this article. Through market research, questionnaire surveys, and practical reference, this article finds that the target customer group hopes to use an APP to assist in the design of traditional clothing; it considers the quality of existing traditional clothing customization services to be poor, the range of styles to be small, and a customization platform to be lacking; and it hopes to design its ideal customized T-shirts through the Internet and a client application. The survey also collected from the public the design standards that a traditional apparel customization APP needs to satisfy. The results show that, in terms of occupational distribution, students account for the largest share of the surveyed population at 72.8%, related shop operators account for 15.4%, and art lovers account for 11.8%. Most respondents state that they have not used an app related to traditional clothing; only 20.69% claim to have used an app with related attributes. Keywords: Artificial intelligence · Traditional clothing · Customized app · Development and construction
1 Introduction

Nowadays, people are accustomed to working in Western-style clothes and holding wedding ceremonies in Western-style wedding dresses [1, 2]. But it can also be seen that more and
more people choose to wear traditional Chinese costumes for weddings and hold the ceremony in a traditional way. Some companies or organizations also use traditional Chinese methods to design meeting procedures and organize meetings [3, 4]. This is the powerful influence exuded by the precious cultural deposits accumulated over China's five thousand years of history. The development and construction of traditional apparel customization apps based on AI has therefore received more and more attention [5, 6]. In research on the development and construction of traditional apparel customization apps under the background of AI, many scholars have conducted studies and achieved good results. Maity S analyzed the characteristics and composition of the top shape elements in Baikuyao men's clothing, such as the "T" shape of the top, the "Y" shape of the neckline, the method of wrapping each part of the top, and the embroidered parts and patterns, as well as the modeling and decoration methods of women's clothes [7]. According to Niveditha AS, "Cheongsam, as the name suggests, refers to the clothes of the Eight Banners women before and after the Manchus entered the Pass in the Qing Dynasty. It was the daily clothing of women outside the Pass, with Manchus and Mongolians as the main body. This kind of cheongsam was mainly popular in the north, while most women in the south still followed the customs of the Ming Dynasty and wore a longer upper gown with a long skirt underneath [8]". This article mainly uses the questionnaire survey method to study Chinese traditional costume art and how to use an APP system with the characteristics of self-media communication as a carrier for applied research.
2 Development and Construction of Traditional Apparel Customization App Under the Background of AI

2.1 Product Positioning of Traditional Apparel Customization Apps Under the Background of AI

The interface design of a traditional clothing customization APP under the background of AI first determines the interaction goals and user experience based on the content of traditional Chinese clothing art, builds the information structure around the core functions, and then designs the visual content. In terms of design priority, an interesting user experience ranks above APP product functions, functions rank above interaction methods and information structure, and interaction methods and information structure rank above interface visual design. In the interface design process, the focus is on how to convey the connotation of traditional Chinese clothing art: instead of simply pasting in a list of traditional elements, the design tries to show the charm of traditional Chinese clothing art in full [9, 10]. The users who have a strong interest in traditional Chinese clothing art in this study are users of different ages, occupations, and genders in society. The final product is designed to serve them as a "culture and art circle" full of artistic atmosphere and fun. Users can be college students, elite white-collar workers, highly respected masters of traditional costume, or self-employed shopkeepers who sell derivative products related to traditional costume. The traditional clothing customization APP under the background of AI studied in this article is built as a communication platform. Each user
is the inheritor of traditional clothing art and is also a center for the communication and development of traditional clothing art. Every user can record the brilliant history of traditional costume art through text, photos, and videos. On the whole, spreading and developing traditional costume art culture in this way has positive significance [11, 12].

2.2 Design Elements of Traditional Apparel Customization App Under the Background of AI

The design elements of traditional apparel customization apps under the background of AI aim to make users more accustomed to using APP products on mobile devices in fragmented time. Mobile-device-based apparel art apps are easy to carry around and meet the need to understand and participate in the content of the We-media platform in real time. When constructing traditional apparel customization apps under the background of AI, it is necessary to consider the five levels of user experience requirements: presentation, framework, structure, scope, and strategy. The first level, the presentation layer, consists of web pages and application interfaces composed of pictures and text. The second level, the framework layer, covers the processing and placement of buttons, controls, pictures, and text areas. The third level, the structure layer, answers how these functions and contents are organized together and how they work. The fourth level, the scope layer, defines which functions and contents the product should have and their priority; together they constitute the scope of the site or application. The fifth level, the strategy layer, tells users what this product is and covers two aspects: "project goals" and "user needs".

2.3 "Self-media Communication" of the Functional Module Planning of Traditional Apparel Customization App Under the Background of AI

Prior to this, art APP design focused more on the presentation form, seeking breakthroughs and innovations, and spreading through the use of a large number of 3D synthesis technologies and interactive gesture control. Although this form can vividly and intuitively present related artworks to users and leave a deep impression, it can only record the status quo and cannot disseminate and develop the related art and culture. Therefore, this article uses the traditional clothing customization APP under the background of AI as the carrier and, combined with the characteristics of self-media communication, mobilizes users' interest through the four modules of "Traditional Costume Museum", "Forum", "Meichuang", and "Offline Activity Center", giving full play to their advantages and ultimately promoting the spread and development of Chinese traditional culture.

"Self-media Communication" in the Costume Museum Module. This section is designed so that users can touch and drag the screen content
to the left or right, and can at the same time touch the upper right corner of the screen to enter the "self-media comments" area, allowing users to express their opinions while appreciating traditional clothing art. Among the target users there are many researchers with professional knowledge of traditional clothing art who can publish high-level, in-depth professional views. This allows users to comment, forward, and like while browsing the museum of traditional costumes, and to gain a more in-depth and detailed understanding of the cultural accumulation behind what is shown on screen. This module integrates previous UI (User Interface) design experience: through the X and Y axes it is designed in the form of "world coordinates", while users browse the museum content on screen in "relative coordinates". It fully integrates the characteristics of APP interactive technology and shows the digital "self-media" design intuitively and simply in the form of an APP.

"Self-media Communication" in the Forum Module. The forum module contains the following types of content. Traditional clothing video live posts: related videos are accessed in the traditional clothing community, and users in the discussion community can interact through the video terminal. Traditional costume picture community: it aims to let users find pictures of interest faster, create a better environment and experience, and support better picture content generation and dissemination. The waterfall-flow form of information presentation is displayed to users more intuitively and better meets their need to view high-definition pictures. Traditional clothing text live posts: sharing, through text, of related activities, hot events, celebrity interviews, and real-time user discussions.

"Self-media Communication" in the Meichuang Module. The purpose of this section is to let users experience and use "self-media" to obtain high-quality content from practice. It guides users to interact with the "Meichuang" module in the interface and is the main link for showing "self-media" thinking in a more vivid and three-dimensional manner. Traditional clothing "Meichuang" module: "Meichuang" is different from traditional electronic magazines. The "Meichuang" module in the traditional clothing APP matches the dissemination characteristics of professionally produced content. The Meichuang platform is a gathering place for excellent authors in the traditional clothing art circle. As an ordinary user, you can view the best-quality content produced on Meichuang with the simplest operation in the "Meichuang" module. The formats include video, audio, long text, and light text, giving users a wealth of substantial content.

"Self-media Communication" in the Offline Activity Center Module. The "Offline Activity Center" module is an offline activity organization tool based on LBS geographic location. Through the "Offline Activity Center" module of the traditional clothing customization app under the background of AI, you can find nearby users who are also interested in traditional clothing art, and you can use the module to create and join nearby interest groups, messages, and neighborhood activities.
At the same time, the offline activity center module is also a platform for purchasing traditional clothing-related products online at a discount. Users who purchase their favorite traditional clothing-related products online through the "Offline Activity Center" module can not only get discounts, but also participate in related experience activities and promotions organized by offline stores, so that users are immersed in traditional clothing both online and offline.

2.4 Interactive Design of Traditional Apparel Customization App Under the Background of AI

Interactivity is the most important feature of traditional apparel customization apps under the background of AI; in interactive design, users need to swipe the screen frequently. The ultimate goal of interaction design is still to make self-media communication more convenient and faster. The size of smartphone screens has increased substantially in recent years. Originally, it was necessary to rotate the phone when viewing streaming video content; to watch video content continuously, users had to keep switching between holding the phone vertically and horizontally. This undoubtedly increases the number of operations and reduces usability.

Interaction Design in the Forum Module. When a user clicks to enter the APP, the first thing that catches the eye is the user page of the forum module, the core of the traditional apparel APP. It embodies the concept of self-media communication in the traditional apparel APP. The author believes that an overly complex interactive design will increase the user's cost and at the same time reduce usability. Therefore, the entire forum module only uses the three interactive methods of "click, swipe, and slide". This makes it much easier for users to obtain information and content in the forum module, and at the same time enables users to make full use of their scattered time. A self-media communication app has a large amount of content for users to read and watch. Therefore, the function of "reading tags" is deliberately added to the interactive design: it records the position of the last refreshed item so that, the next time the user opens the app, it jumps directly back to the previous reading position (a minimal sketch of this idea appears at the end of this section). The "Refresh" button, marked with the refresh symbol, tells the user that clicking it refreshes the entire forum module and returns the page to the latest updated content. Users no longer need to stay at the top of the feed to see the latest news, which improves user experience and usability at the same time.

Interactive Design of the Traditional Costume Museum Module. The zoom gesture is used in the traditional costume museum module. When the user clicks the museum button in the module-selection navigation bar at the bottom of the screen, the main interface of the traditional costume museum module opens. The traditional costume museum module is designed with the time axis as its prototype: a time axis runs from left to right as the main line, with the relevant keyword entries as branch lines. The user can switch between the global view and the partial view by zooming
gestures. When the user clicks on a specific entry, the detailed content of that entry opens, with the specific content arranged in order. At this time, a "discussion" icon appears in the upper right corner. Users can post their personal opinions under the discussion icon, and can also forward, share, and like the content, and even publish videos they have shot themselves. None of these operations requires rotating the phone: when a video is playing, the areas above and below the video playback window temporarily fade to dark black to reduce visual interference. The originally hidden progress bar below is then displayed, and the user can adjust it to select the playback position of the video. If it is not operated for a long time, the progress bar is hidden again, in order to highlight the characteristics of self-media communication.

Offline Activity Center Module. The offline activity center module is designed with the map as its prototype. It displays user-centric offline activities; clicking the corresponding icon on the map displays the specific information of the corresponding event. So that users can use the offline activity center module more intuitively, it only contains zoom gestures and tap gestures. Clicking the option symbol above displays an overview of the activity being held there, and clicking again enters the activity details page, where there is a button to join. After the user chooses to participate in an activity, the app automatically plans the route so that the customer can take part in offline activities more easily and conveniently.

Meichuang Module. The Meichuang platform module is an important module for the self-media communication of the traditional apparel APP. It is a self-media communication module with PGC (professionally generated content) as the core concept, aimed at users with a certain professional foundation. Users can not only create their own channels in this module, but also watch other users' Meichuang channels, and they can share Meichuang platform content with the community module. A user who publishes professional content on the Meichuang platform module is called a "Meichuangjia". Works released by other users can be liked, commented on, forwarded, and shared, and the larger screen content attracts users' attention. The button design of the Meichuang platform module page is also very user-friendly and well suited to touch-screen operation. Quick function buttons for comment, forward, and like in the lower right corner encourage users to participate in related topics.
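The "reading tag" behaviour described above for the forum module (remember the last refreshed item and jump back to it on the next launch) can be sketched in a platform-agnostic way. The snippet below is only an illustration of that idea under our own assumptions; the file name and field names are hypothetical and not taken from the APP.

```python
import json
from pathlib import Path

STATE_FILE = Path("reading_tag.json")  # hypothetical local storage location

def save_reading_tag(post_id: str, scroll_offset: int) -> None:
    """Record the last item the user was reading in the forum feed."""
    STATE_FILE.write_text(json.dumps({"post_id": post_id, "offset": scroll_offset}))

def restore_reading_tag():
    """Return the saved position, or None on first launch."""
    if not STATE_FILE.exists():
        return None
    return json.loads(STATE_FILE.read_text())

# On exit:        save_reading_tag("post_1024", 37)
# On next launch: tag = restore_reading_tag(); scroll the feed to tag["post_id"] if tag is not None.
```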
3 Experiments on the Development and Construction of Traditional Apparel Customization App Under the Background of AI

3.1 Questionnaire Distribution and Collection

Online Questionnaire Survey. A questionnaire was initiated on the Internet and forwarded to major online platforms such as WeChat/Weibo/Tieba for large-scale research.
Distribute Paper Questionnaires Offline. Paper questionnaires were distributed in densely populated squares, campuses, and communities to survey user needs. Finally, a total of 279 valid questionnaires were collected.

3.2 Statistics

This article uses SPSS 22.0 software to count and analyze the results of the questionnaire and to conduct t tests. The t-test formulas used in this article are as follows:

t = \frac{\bar{X} - \mu}{\sigma_X / \sqrt{n}}  (1)

t = \frac{\bar{X}_1 - \bar{X}_2}{\sqrt{\dfrac{(n_1 - 1)S_1^2 + (n_2 - 1)S_2^2}{n_1 + n_2 - 2}\left(\dfrac{1}{n_1} + \dfrac{1}{n_2}\right)}}  (2)

t = \frac{\bar{d} - \mu_0}{s_d / \sqrt{n}}  (3)
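For reference, the three statistics above are the one-sample, independent two-sample (pooled variance), and paired t-tests. The sketch below shows how such tests could be computed with standard Python tooling; SciPy is our choice for illustration (the paper itself reports using SPSS 22.0), and the example responses are hypothetical.

```python
import numpy as np
from scipy import stats

# Hypothetical Likert-scale answers (1-5) from two groups of respondents.
group_a = np.array([4, 5, 3, 4, 4, 5, 3, 4])
group_b = np.array([3, 3, 4, 2, 3, 4, 3, 3])

# (1) One-sample t-test against a hypothesised mean of 3.
t1, p1 = stats.ttest_1samp(group_a, popmean=3.0)

# (2) Independent two-sample t-test with pooled variance (equal_var=True).
t2, p2 = stats.ttest_ind(group_a, group_b, equal_var=True)

# (3) Paired t-test (e.g. the same respondents rated before/after seeing the prototype).
t3, p3 = stats.ttest_rel(group_a, group_b)

print(f"one-sample: t={t1:.3f}, p={p1:.3f}")
print(f"two-sample: t={t2:.3f}, p={p2:.3f}")
print(f"paired:     t={t3:.3f}, p={p3:.3f}")
```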
4 Experimental Analysis of the Development and Construction of Traditional Apparel Customization Apps Under the Background of AI

4.1 Age Distribution

Statistics on the collected data: the age distribution is shown in Table 1.

Table 1. Age distribution

                  Number of people   Proportion (%)
Under 18          5                  1.8
18–25 years old   168                60.2
26–50 years old   79                 28.3
51–70 years old   27                 9.7
It can be seen from Fig. 1 that in the returned questionnaire, the number of surveyed persons under the age of 18 was 5, accounting for 1.8%. The number of respondents aged 18 to 25 accounted for 60.2%, the number of respondents aged 26–50 accounted for 28.3%, and those aged 51–70 accounted for 9.7%. The target population is young people aged 18–25 and middle-aged people aged 26–50. There are relatively few young people under 18 and middle-aged and elderly people aged 51–70.
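As a small consistency check, the proportions in Table 1 follow directly from the counts and the 279 valid questionnaires mentioned in Sect. 3; the snippet below only restates that arithmetic and is not part of the original analysis.

```python
# Respondent counts per age group, taken from Table 1.
age_counts = {"Under 18": 5, "18-25": 168, "26-50": 79, "51-70": 27}

total = sum(age_counts.values())
assert total == 279  # matches the number of valid questionnaires collected

for group, count in age_counts.items():
    print(f"{group}: {count} people, {100 * count / total:.1f}%")
# Under 18: 1.8%, 18-25: 60.2%, 26-50: 28.3%, 51-70: 9.7%
```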
Fig. 1. Age distribution (bar chart of the number of people and proportion (%) for each age group)

Fig. 2. Occupation distribution (bar chart of the number of people and proportion (%) for students, lovers, and related store operators)
4.2 Occupation Distribution

The occupational distribution is shown in Fig. 2. In terms of occupational distribution, students account for the largest group, making up 72.8% of the surveyed population; related shop operators account for 15.4%, and art lovers account for 11.8%. Most people state that they have not used apps similar to traditional clothing apps; only 20.69% claim to have used apps with related attributes.
5 Conclusions

Every technological revolution promotes the development of human civilization, so all walks of life, especially emerging industries, will have great room for development and a bright future. At the same time, they are also full of challenges: to stand firm in an unpredictable market, a product must have core competitiveness. The traditional clothing customization APP constructed in this article is a powerful carrier for inheriting and carrying forward the "traditional clothing culture" of China's long history. At present, although there are still some problems in the traditional clothing
customization APP, with the advancement of science and technology and people's continuous pursuit of a higher spiritual and cultural life, traditional apparel customization apps under the background of AI will be further improved.
References 1. Tien, J.M.: Toward the fourth industrial revolution on real-time customization. J. Syst. Sci. Syst. Eng. 29(2), 127–142 (2019). https://doi.org/10.1007/s11518-019-5433-9 2. Leng, H., Ni, S., et al.: Protection, development and utilization of industrial heritage under the background of transformation of exhausted cities: a case study of construction planning of Anyuan national mine park. J. Landscape Res. 11(05), 12–15 (2019) 3. Man, S.S., Chan, A., Alabdulkarim, S.: Quantification of risk perception: development and validation of the construction worker risk perception (CoWoRP) scale. J. Saf. Res. 71(Dec), 25–39 (2019) 4. Jensen, K.N., Pero, M., Nielsen, K., et al.: Applying and developing mass customization in construction industries – a multi case study. Int. J. Constr. Supply Chain Manag. 10(3), 141–171 (2020) 5. Oliveira, N., Cunha, J.: Co-design and footwear: breaking boundaries with online customization interfaces. Int. J. Vis. Des. 13(1), 1–26 (2019) 6. Sharma, S.A.: AI design suggestions for apparel retail counters. Int. J. Mod. Trends Sci. Technol. 06(9S), 242–244 (2020) 7. Maity, S.: Identifying opportunities for AI in the evolution of training and development practices. J. Manag. Dev. 38(8), 651–663 (2019) 8. Niveditha, A.S.: A study on optimistic need of AI in fashion retail industry. Int. J. Mod. Trends Sci. Technol. 06(9), 211–214 (2020) 9. Zhang, L., Ni, Q., Zhang, G., et al.: Random forests-enabled context detections for long-term evolution network for railway. IET Microw. Antennas Propag. 13(8), 1080–1086 (2019) 10. Man, P.: Custom tunic. Threads (201), 68–73 (2019) 11. Sewaka, J., Penelitian, B.L., Dan, P., et al.: Jurnal Etika Berbusana (April 2020). Jurnal Sewaka Bhakti 2(2), 41–51 (2020) 12. Jiang, Z., Guo, J., Zhang, X.: Fast custom apparel design and simulation for future demanddriven manufacturing. Int. J. Clothing Sci. Technol. 32(2), 255–270 (2019)
Application of AI Technology in Modern Dental Equipment

Zongyuan Ji, Zhaohua Song, Zheng Lu, and Jianyou Zeng(B)

School of Arts and Media, China University of Geosciences (Wuhan), Wuhan, Hubei, China
Abstract. Big data, deep learning, and high-performance computing are pushing society into the era of data intelligence. The emergence of artificial intelligence and the development of the sharing economy have radically changed the traditional way of life, and the concept of "smart+" has become more and more popular. The improvement of living standards has brought more and more serious dental problems; the pressure on dental hospitals has increased, and the upgrading of dental chairs is imminent. The purpose of this article is to analyze and study the application of AI technology in modern dental equipment, so as to expand its future development prospects. Keywords: AI · Dental equipment · Smart+ · Lifestyle · Sharing economy
1 Introduction Big data, deep learning and high-performance computing are pushing society into the era of data intelligence. In 2016, the artificial intelligence (AI) robot AlphaGo won the battle against the Go champion, becoming a landmark event for artificial intelligence to catch up with human intelligence. With the continuous development of AI technology, the implementation of national strategies such as “smart+” and “new infrastructure” provides new opportunities for the development of data intelligence. Nowadays, medical, transportation, finance, retail and many other industries are vying to use data intelligence to solve problems and seek new development. Although modern dental medical equipment has made significant progress compared with before, it still does not meet the needs of modern society. To this end, this article discusses the application prospects of AI technology in modern dental equipment, combined with the needs of modern society to split and re-plan basic dental care, combined with a warm space environment, allowing people to complete basic dental care under comfortable conditions.
2 Research Background

2.1 The Development History of Dental Chairs

The first adjustable dental chair was invented by Josiah Flagg in 1790: a large wooden Windsor chair modified to fit an adjustable headrest. Forty years later,
James Snell designed and manufactured the first fully reclining dental chair. This was an improvement, but it still sat on four legs. The pump chair combined the adjustable features of the Snell model and provided a foot pump that raised and lowered the patient. In 1867, the British dentist Dr. James Beall Morrison built a chair that could be raised up to 3 feet; it allowed the patient to tilt completely and could also tilt to the left and right [3]. In 1954, a dental study group including Dr. Sanford S. Golden developed a recliner for patients that allowed the doctor to sit down during the operation, leading to the introduction of the Ritter Euphorian chair. The first dental chair to undergo major changes in 50 years, it still did not allow patients to lie down completely without their feet on the ground. John Naughton launched the Den-Tal-Ez (R) chair in 1958. This design ushered in the era of modern sit-down, four-handed dentistry. The chair has an articulated seat and back and is accepted as a standard by professionals. Newer models usually also have integrated electronic devices with memory functions and computer control. More and more chairs also have circuits that allow multiple devices to be integrated with a single foot controller, which means fewer wires and hoses for practitioners and patients [2]. Typical new dental equipment has handpieces, sonic and ultrasonic cleaning equipment, intraoral cameras, curing lights, and other devices integrated with these chairs.

2.2 Research on Data Intelligence and Its Application Trend

Data intelligence has been defined from multiple perspectives. From a technical point of view, the industry represented by Baidu believes that data intelligence is the integration of big data and AI technology: mining and analyzing massive data, discovering the valuable information and knowledge contained in the data, making the data intelligent, and then establishing models that find solutions to existing problems and predict things, events, or phenomena [4]. From a management perspective, scholars believe that data intelligence is the use of predictive analysis techniques (such as big data mining, machine learning, and deep learning) to mine and analyze multi-source heterogeneous big data inside and outside actual application scenarios and to extract valuable and actionable data, information, or knowledge that improves the management and decision-making level of complex practical activities [5, 6]. From the perspective of knowledge discovery, scholars believe that data intelligence directly analyzes knowledge in structured and unstructured data through algorithms to reflect intelligence, and its ultimate goal is to discover new knowledge [1]. The application purpose of data intelligence is to serve decision-making, including improving the level, efficiency, and stability of decisions and replacing repeated decision-making [7]. It can be seen that, whether from the perspective of technology, management, or knowledge discovery, data intelligence is ultimately presented as information and knowledge that assist the subject in making decisions. It is an emerging source of decision-making information generated by AI, and "data + AI algorithm + computing power + scenario" is the core paradigm for generating data intelligence: data is the raw material, algorithms provide computational thinking, computing power provides computational support, and scenarios provide demand traction [6–8].
Of course, the realization of data intelligence is also inseparable from the rules of expert knowledge. These rules are reflected in determining reliable information sources, selecting
appropriate AI algorithms, deploying appropriate computing capabilities, and finding appropriate application scenarios. With the normalization of large-scale data and the continuous, rapid improvement of computing power, applied research on data intelligence is becoming more and more diversified. Research is conducted not only in key areas, such as infectious disease monitoring and early warning [9], disease diagnosis [10], and stock market analysis and forecasting [11], but also in general areas. Once such applied research is implemented, the service targets include not only professional decision makers but also the general public. It is foreseeable that future data intelligence applications will expand to more fields and wider groups of people, becoming a big boost to future life. This article mainly discusses applied research in the field of dentistry.

2.3 The Trend of Combining AI Technology with Dental Chairs

With the rapid development of AI technology and the continuous improvement of people's living standards, life is becoming more and more intelligent. Because of this, the future market will place higher and higher requirements on dental chairs. It is foreseeable that dental chairs will inevitably transition from the low-end market to the high-end market, and the prospects are broad. The State Council issued the "New Generation of Artificial Intelligence Development Plan" to promote artificial intelligence (AI) as a national strategy, aiming to reach the world's leading level by 2030. At present, AI has been tried in all aspects of diagnosis and treatment and has achieved excellent results. This article mainly uses data intelligence methods to automatically analyze and process medical images, scan and detect tooth components, automatically control the robotic arm, and integrate a variety of AI algorithms. Among them, medical image semantic segmentation technology can divide the oral cavity according to the shape, color, texture, and other characteristics of the image, so that the condition of the diseased area is automatically extracted and processed (whether the teeth are neat, whether teeth are missing, etc.), while target detection can automatically detect diseased areas (bad teeth, calculus, etc.) to achieve tooth detection. Then the different components of the teeth are scanned, and calculus and the like are removed through automatic control, achieving automatic tooth care.
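The pipeline sketched above (segment the intraoral image, flag suspicious regions, and hand their coordinates to the control unit) can be illustrated with a deliberately simplified stand-in. The snippet below substitutes a plain colour threshold for the semantic-segmentation and target-detection models the paper refers to, so it is only a schematic of the data flow under our own assumptions, not the actual system.

```python
import cv2

def find_suspect_regions(image_path: str, min_area: int = 50):
    """Rough stand-in for segmentation + detection: threshold by colour, then
    return bounding boxes of connected regions as candidate calculus/lesion areas."""
    img = cv2.imread(image_path)                       # intraoral image (BGR)
    hsv = cv2.cvtColor(img, cv2.COLOR_BGR2HSV)
    # Hypothetical HSV range standing in for "calculus-like" colour/texture.
    mask = cv2.inRange(hsv, (10, 40, 40), (35, 255, 255))
    contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
    return [cv2.boundingRect(c) for c in contours if cv2.contourArea(c) >= min_area]

# boxes = find_suspect_regions("intraoral.jpg")
# Each (x, y, w, h) box would then be mapped into the oral coordinate system for the robotic arm.
```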
3 Investigation and Analysis

3.1 In-depth Interviews with Dental Chair Users

In order to comprehensively analyze the problems in the use of the dental chair and the user experience, 10 users from different groups of people were selected for in-depth user interviews. The in-depth interviews revealed problems that had not been discovered before and brought the design closer to the users' thinking. Overall survey summary: ➀ It must be hygienic and safe, and it must save time. ➁ Simple, easy to operate, warm, and professional. ➂ Provide basic dental care and popularize dental knowledge.
Key points that can be designed: combine the concepts of "smart+" and the sharing economy; refer to unmanned hospitals and unmanned clinics; use a semi-enclosed, simple, and warm space, a simple dental chair, and professional dental medical supplies.

3.2 Innovative Design Points of Products

Starting from the in-depth interview research, the design innovates on the following three points. Dental chair shape design: following the style of dental treatment. Space design: the warm atmosphere is distilled from comparison with high-end dental clinics; the semi-enclosed space uses light yellow and light orange as the theme, giving a relaxing atmosphere so that the whole person relaxes on entering. Functional design: combined with AI technology, the main functions are testing, appointment, and basic care.
4 Product Design

4.1 Set the Target Group

After in-depth investigation, questionnaire research, and market research to fully understand the user group of dental chairs and the relevant market background, the product is designed for adults aged 18 to 40. This is the age group with a high incidence of oral diseases; at this stage, oral health can be ensured through regular inspections and care, supporting good social, physical, and oral health. Adults at this stage are the backbone of society, with many social activities and important social value, and oral problems should not affect their happiness in life or normal social interaction. Therefore, the target group is set as adults between 18 and 40 years old.

4.2 The Added Value of Dental Chairs

Emotional aspect. ➀ Increase human-computer interaction: increase the patient's right to choose. ➁ Personalized customization: services can be self-selected and nursing items freely matched. ➂ Cultivate user habits: increase user awareness of the severity of oral problems through product sharing.

Product man-machine. ➀ Human-machine size design: the appearance size fits most human-machine sizes and is comfortable and relaxing. ➁ Safe and reliable materials: regular high-temperature disinfection, environmentally safe materials, recyclable disposable products.

Aesthetics. ➀ Vision: reconstruct the shape of the popular comprehensive dental treatment chair, retain its basic nursing functions, adopt spatial reorganization, warm and simple without losing the sense of technology. ➁ Hearing: none. ➂ Touch: easy to use and comfortable; the user chooses a customized care package. ➃ Smell: automatic ventilation without peculiar smell. ➄ Taste: none.

Product characteristics. ➀ Timing: the current market lacks similar products, and the potential market demand is large. ➁ Market: at present, there are basically no smart
devices that share basic dental care in the domestic market. However, with the rapid development of AI technology, the concept of "smart+" has become more and more mature; coupled with the support of national policies, the market has huge potential.

4.3 Clearly Define the Product

➀ Product design purpose: develop a good habit of regular check-ups and care to reduce the incidence of oral problems. ➁ User group: adults between 18 and 40 years old (more receptive to new things). ➂ Basic product size: the basic dental care room is shaped like a rectangle, and the overall space is about 3900 mm × 2400 mm × 2500 mm in length, width, and height. ➃ Outstanding features of the product: shared, warm, smart, hygienic, personalized design, AI + medical. ➄ Product semantic symbols: interaction, warmth, sharing, individuality, simplicity. ➅ Product use environment: the product is set in public places such as shopping malls.
5 Product Design

Product details display (Fig. 1).
Fig. 1. Product detail presentation
Product size (the ratio of the model to the actual product is 1:10, unit: cm) (Fig. 2).

Description of working principle. The robotic arm contains four degrees of freedom for rotation and folding plus one telescopic degree of freedom. First, according to the spatial coordinate data given by the measuring unit, the trajectory of the robotic arm's movement is planned. During the movement, the sensors installed at each joint feed the amount of movement back to the control unit in real time to compensate the movement trajectory and complete precise positioning. After the required services (cleaning, inspection) are selected, the system first scans the position of the dental calculus to form an oral coordinate system (based on the composition of calculus and enamel), and forms a large spatial coordinate system from the seat and the headrest position. The mechanical arm is positioned to the small oral coordinates through the large spatial coordinates, and then the bur performs calculus-removal care within the small oral coordinates.
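The positioning logic described here, mapping from the large spatial (seat/headrest) coordinate system into the small oral coordinate system and correcting the trajectory with joint-sensor feedback, can be sketched as a homogeneous transform plus a simple proportional correction. This is an illustration under our own assumptions (including the example offsets), not the controller used in the product.

```python
import numpy as np

def make_transform(rotation: np.ndarray, translation: np.ndarray) -> np.ndarray:
    """4x4 homogeneous transform from oral coordinates to chair (large) coordinates."""
    T = np.eye(4)
    T[:3, :3] = rotation
    T[:3, 3] = translation
    return T

def oral_to_chair(point_oral: np.ndarray, T_chair_from_oral: np.ndarray) -> np.ndarray:
    """Map a calculus position given in oral coordinates into chair coordinates."""
    return (T_chair_from_oral @ np.append(point_oral, 1.0))[:3]

def feedback_step(target: np.ndarray, measured: np.ndarray, gain: float = 0.5) -> np.ndarray:
    """One proportional correction step toward the target, based on the position
    fed back by the joint sensors."""
    return measured + gain * (target - measured)

# Example: identity rotation and an assumed headrest offset of (0.2, 0.0, 1.1) m.
T = make_transform(np.eye(3), np.array([0.2, 0.0, 1.1]))
target = oral_to_chair(np.array([0.01, -0.02, 0.03]), T)
pose = np.zeros(3)
for _ in range(5):          # iterate until the bur converges on the target
    pose = feedback_step(target, pose)
```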
Fig. 2. Product size
After each use of the treatment area, the internal disinfection process is automatically carried out. Use a disposable inspection kit for the inspection area (Fig. 3).
Fig. 3. Design notes
Design description. Starting from the pain points that dental care is difficult to access and expensive, we chose to adopt the concept of "smart + medical" to explore the design of shared dental care rooms. The shared basic dental care room is designed to improve people's happiness in life. Through sharing and the addition of the AI + medical concept, it can be open 24 h a day, solving the problem that people do not have time to see a dentist. In this era when
time is becoming more and more precious, customers can save the time spent queuing. Regular dental examination and care can also greatly reduce the pain caused by dental diseases. A warm color atmosphere gives people a feeling of relaxation and reduces psychological resistance to treatment.
6 Research Limitations and Prospects

This study also has limitations: ➀ users' trust in data-intelligent automatic dental care; ➁ insufficient modeling design; ➂ insufficiently defined research objectives, among other issues. In future research, the research object can be expanded by adding experiments, making prototypes, and conducting tests, striving to obtain more accurate and universal research conclusions.
References 1. Zhao, X., Qiao, L., Ye, Y.: Cross-border expansion of library and information science for data intelligence and knowledge discovery—data-academic-creation integration theory. China Libr. Sci. J. 46(6), 16–25 (2020) 2. Wei, M.: Research on the application of semantic design method in industrial design. Mod. Ind. Econ. Inf. 7(16), 31–33 (2017) 3. Liu, L., Zhang, W., He, M., Han, Y.: Study on improvement of dental chair comfort based on ergonomics. Ind. Eng. 21(03), 100–108 (2018) 4. Clem, P.G., Rodriguez, M., Voigt, J.A., Ashley, C.S.: U.S. Patent 6,231,666 (2001) 5. Chen, Y., Chen, J.: Perceptual research in the design of dental comprehensive treatment machine. Ind. Des. (01), 68–69 (2018) 6. Wu, J., Liu, G., Wang, J., et al.: Data intelligence: trends and challenges. Syst. Eng. Theory Pract. 40(8), 2116–2149 (2020) 7. Hu, Z., Sun, Z., Ma, Y.: Study on human body comfort of dental chair with different pitch angles. Packaging Eng. 38(16), 108–112 (2017) 8. Tang, X., Du, Z., Zhai, X.: Research on competitive intelligence system model based on big data intelligence. Inf. Theory Pract. 41(11), 133–137, 160 (2018) 9. Serban, O., Thapen, N., Maginnis, B., et al.: Real-time processing of social media with SENTINEL: a syndromic surveillance system incorporating deep learning for health classification. Inf. Process. Manag. 56(3), 1166–1184 (2019) 10. Li, S.: Deep adversarial model for musculoskeletal quality evaluation. Inf. Process Manag. 57(1), 102146 (2020) 11. Maqsood, H., Mehmood, I., Maqsood, M., et al.: A local and global event sentiment based efficient stock exchange forecasting using deep learning. Int. J. Inf. Manag. 50, 432–451 (2020)
The Application of Artificial Intelligence Technology in the Field of Artistic Creation

Sisi Feng(B)

School of Arts, Shandong Management University, Jinan, Shandong, China
Abstract. The rapid development of science and technology has penetrated all areas of life. In recent years, artificial intelligence technology has shone in various artistic fields such as painting, literature, music, lighting, and architecture, greatly expanding the space for artistic creation, and this has also triggered people's thinking about artificial intelligence in the field of artistic creation. Adopting qualitative research methods and a comprehensive analysis of artificial intelligence technology in the field of artistic creation, this article argues that the existence of artificial intelligence in artistic creation has two sides, and that human-computer collaboration in artistic creation is one of the future development directions of artistic creation and the main trend of the times. Keywords: Artificial intelligence · Big data · Artistic creation
1 Introduction In recent years, artificial intelligence technology has been applied to various art fields such as music, painting, literature, architecture, lighting, and game art. At the same time, the application of artificial intelligence in the field of art has brought commercial value. People hold various competitions for artificial intelligence art creation. Paintings created by artificial intelligence are also auctioned at high prices. In addition, poems created by artificial intelligence are officially published. However, whether the work completed by artificial intelligence is artistic and original, and how humans should get along with artificial intelligence in the field of art have caused heated discussions.
2 The Rationality of the Development of Artificial Intelligence in the Field of Artistic Creation

2.1 Looking at the Relationship Between Art and Artificial Intelligence Technology from a Historical Perspective

First of all, art and technology are inherently inseparable. First, we should not put art and technology on opposite sides and innately resist the advancement of technology
into the field of art. In fact, they are always complementary to each other. As Russell said: "The work of artists and scientific explorers are not opposites. They both explore the truth in their own different ways." Second, from a historical perspective, the two concepts of technology and art have gone through a process of combination, division, and recombination. In the early days, technology and art were the same concept; in modern times, art became more inclined to reflect creativity and ideas, while technology gradually moved toward the interpretation of the natural world, so technology and art were gradually distinguished; in the 18th century, artists began to think about "industrial aesthetics" and "technical aesthetics", and technology and art began to meet again [1]. Therefore, a work of art has a material level that represents technology and a spiritual level that represents art. We cannot favor one over the other, separate the technology out, and appreciate only the spirit. Secondly, the innovation of art is inseparable from the promotion of technology. First, technology has contributed to the formation of new art genres. In the 19th century, Impressionism emerged under the rule of classicist painting. Pissarro, Monet, Renoir, and others overthrew traditional painting concepts and went outdoors to capture light and shadow. However, the formation of their painting concepts, styles, and techniques was inseparable from the travelling easel and from inventions such as the hog-bristle brush, the metal brush ferrule, and the train; these technological advances helped the Impressionist revolution and the continued development of the art field. Second, technology has opened up new areas of art. The invention of photography in the 19th century not only did not replace the traditional field of painting, but opened up the new field of photographic art [2]. At the same time, it "promoted a new journey of painting towards abstractionism" and promoted the change of traditional painting theories and concepts; the subsequent film art, television art, network art, and so on did the same [3]. Therefore, the importance of technology to art is evident. In summary, as one of today's advanced technologies, artificial intelligence has the legitimacy to enter the field of artistic creation. Not only that, artificial intelligence is not an invasion of the field of artistic creation; it is likely to bring new vitality to artistic creation.

2.2 Artificial Intelligence Technology in the Field of Artistic Creation Has Shone

Artificial intelligence has penetrated our lives in all aspects, but when artificial intelligence is applied to life, people tend not to notice it as artificial intelligence. After the emergence of deep learning at the beginning of the 21st century, many people believed that only deep learning could be called artificial intelligence, but in fact the broad definition of artificial intelligence is that it can do things that humans can do. For example, Meitu Xiuxiu's AI beautification and AI old-photo restoration functions are artificial intelligence. In addition, in the field of artistic creation, artificial intelligence has many advantages that humans cannot match [4]. First of all, the biggest advantage of artificial intelligence is efficiency. First, this is reflected in its learning efficiency: artificial intelligence can use big data to learn hundreds of years of human artistic achievements in a short period of time, find their patterns and create from them, and it will not forget them.
For example, after learning thousands of modern poems from the past century, Microsoft Xiaoice acquired the ability to create
modern poems. Second, it is reflected in its creative efficiency [5]. Artificial intelligence can find what it considers the best solution in a short time without needing rest, and improving efficiency means saving time and labor costs. Second, the inspiration of artificial intelligence will not dry up. Many human artists encounter creative bottleneck periods, but artificial intelligence does not: it can constantly learn new data, form new creative styles, and create non-repetitive works. For example, Microsoft Xiaoice can continuously create patterns that are used in the production of scarves and clothing. Finally, artificial intelligence may produce more surprising innovations. Although current artificial intelligence has no subjective consciousness, this also means that it is not bound by human thinking. For example, Microsoft Xiaoice can draw scenes that do not exist in the world, or can draw pictures based on a paragraph of text. This exclusive customization brings users a sense of mystery and expectation. In addition, its collection of poems writes of "the shadow of her tree flying into the sky"; "the sky" and "the shadow of the tree" are inadvertent collisions that may bring unexpected effects and bring new forms and appearances to art [6] (Fig. 1).
Fig. 1. The historical evolution of the visual performance of animated films (stages: visual performance of two-dimensional animation technology; visual performance combining digital technology and animation; fusion of virtual reality (VR) and animated film)
3 Analysis of the Limitations of Artificial Intelligence in the Field of Artistic Creation

3.1 Artificial Intelligence Art Creation from the Perspective of Art Ontology

The emergence of artificial intelligence has impacted people's perception of "artistic creation with human beings as the main body", and its works lack ideology and emotion. Art must not only have material existence; more importantly, it must manifest human consciousness such as thought and spirit, and this spirit cannot be separated from the artist's own characteristics and life experience [7]. However, at this stage, artificial intelligence has no emotions or thoughts. From the perspective of the process of artistic creation, although artificial intelligence completes artistic communication, there is no process of artistic experience or even artistic conception. At the same time, this means that the works created by artificial intelligence lack artistic commonality with human beings. "A history of art is a history of human spiritual culture, and it is also a history of image cognition. It carries the rich thoughts and experiences of mankind in the past." Artificial intelligence reflects a world rationally derived from big data, rather than experiencing the times with human warmth. This also leaves the works created by artificial intelligence unable to empathize and resonate with human beings. Taking a step back, even if a work created by artificial intelligence gives the viewer some insight, this is the result of the viewer's unilateral interpretation. If artistic creation and artistic aesthetics are regarded as a process of encoding and decoding, then for the time being artificial intelligence art creation has no process of encoding thoughts and perceptions. It is difficult for viewers to interpret the work from the perspectives of the author's intention and the background of the times [8], or to find a point of resonance between themselves and the work. This raises questions about the value of artificial intelligence works. Although artificial intelligence paintings have been auctioned at high prices, "judging the value of an artwork should be determined by the combination of its artistic value, cultural value, historical value, and even the spiritual personality of the artist, rather than unilaterally given the answer by market value".

3.2 Artificial Intelligence Art Creation from the Perspective of Algorithmic Logic

Creativity is an indispensable factor in artworks. However, at this stage, artificial intelligence completes tasks based on big data and algorithms, and this fact determines that creativity is a shortcoming of artificial intelligence. In "Ci Hai", creation refers to "creating something unprecedented". At present, however, artificial intelligence can only complete tasks for which there is a large amount of material and regular patterns to follow. The application of artificial intelligence in the field of artistic creation still has this shortcoming [9]. After all, it is also based on big data and algorithms, but it applies the GAN system, "by modifying the network's goal to make it as far as possible from the established style, while keeping as much as possible within the scope of the artwork." Therefore, artificial intelligence can create works that have never appeared in human history but are similar to human works of art. However, the artificial intelligence art
creation under this algorithmic logic is more inclined to the combination and imitation of styles and does not have originality in the typical sense. Boden divides creativity into three types: combinational, exploratory, and transformational. Creativity in the strict sense is transformational, that is, creating works that have never been seen before [10]. However, artificial intelligence art creation today is combinational and exploratory creation: although it can generate works that do not yet exist in the world, its works are just combinations of the works of predecessors, or imitations of the styles of predecessors. In addition, artificial intelligence art creation is often limited to a certain style in a certain field. For example, an impressionist artificial intelligence will not create classicist-style paintings, and an artificial intelligence that can write modern poetry will not write ancient poems. If the various styles in a field are forcibly merged, what is produced is only a physical mixture, not a chemical reaction (Fig. 2).
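The quoted objective, pushing the output away from established styles while keeping it within the scope of artwork, matches the style-ambiguity idea used in creative adversarial networks. The fragment below is a rough PyTorch sketch of such a generator loss, written from that description rather than from any specific system named in the paper.

```python
import torch
import torch.nn.functional as F

def generator_loss(d_art_logit: torch.Tensor, d_style_logits: torch.Tensor) -> torch.Tensor:
    """d_art_logit: discriminator's 'is this art?' score for generated images.
    d_style_logits: discriminator's per-style classification logits over K known styles."""
    # Term 1: stay "within the scope of the artwork" - be classified as art.
    art_loss = F.binary_cross_entropy_with_logits(d_art_logit, torch.ones_like(d_art_logit))
    # Term 2: be "as far as possible from the established style" - make the style
    # classifier maximally uncertain, i.e. close to the uniform distribution.
    k = d_style_logits.shape[1]
    uniform = torch.full_like(d_style_logits, 1.0 / k)
    ambiguity_loss = F.kl_div(F.log_softmax(d_style_logits, dim=1), uniform,
                              reduction="batchmean")
    return art_loss + ambiguity_loss
```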
Fig. 2. The effect of AI technology on artistic creation (bar chart comparing the three types of creativity across different categories)
4 Prospects for the Future Trend of Artificial Intelligence in the Field of Art Creation

4.1 Human-Machine Collaboration in the Field of Art Creation is the Future Development Trend

Human-computer collaboration is the future development trend in the field of artistic creation. As the previous two chapters show, although the development of artificial intelligence in the field of art is reasonable, it also has various limitations. Therefore, artificial intelligence cannot completely replace humans in artistic creation, but it will definitely
open up a new field of art. Just as manual customization has not been replaced by factory assembly lines, painting has not been replaced by cameras, and live performances have not been replaced by television; instead, these technologies have promoted the development of art fields such as photographic art and television art. The ultimate goal of artificial intelligence is not to replace humans, and a field of artificial intelligence art will also emerge in the future. In the artistic creation of human-machine collaboration, artificial intelligence can use big data to collect the hot spots of the times and synthesize unprecedented scenes to inspire creators. The works it creates can also serve as references: non-repetitive patterns for clothing design, non-repetitive melodies for song writing, and so on.

4.2 Artificial Intelligence Will Exist as a Creative Tool in the Field of Art

The better development direction for artificial intelligence may be to exist as a creative tool. First, there are certain problems with works independently created by artificial intelligence. For example, although Microsoft Xiaoice can paint based on a paragraph of text, the quality of the works is uneven, and it is difficult for people to feel artistic commonality with them; their originality also remains controversial. As far as current art research is concerned, human consciousness is still where the soul of art lies. Second, even if super-artificial intelligence is developed in the future, how to make its artwork social is still a big problem. After all, artistic creation originated from the shortness of human life and the desire to leave a mark, and the life of artificial intelligence is eternal; it will never be able to appreciate this emotion. Third, from the current point of view, the commercial value of artificial intelligence art creation is far greater than its artistic value. Artificial intelligence exists as a creative tool to maximize benefits. Although artificial intelligence is controversial in the field of art, it is readily accepted in the commercial field. After all, in the commercial field there is no need to dwell on the encoding process; it is enough to ensure that the decoding process proceeds smoothly. Artificial intelligence can create without interruption, and can even create exclusive works for multiple users at the same time, which can fully meet people's demand for mass products with modest requirements.
5 Conclusion With the rapid development of artificial intelligence technology, artists need to rethink and clarify the core of art; while handing over part of the art space to the public, they must find their own core values and the true meaning of art. Before artificial intelligence entered the field of art, no one questioned that the subject of artistic creation was human beings, but now we may have to rethink the connotation of artistic creation: where is the boundary between artificial intelligence artistic creation and human artistic creation, and what is the meaning of human artistic creation in the future? By clarifying these issues, humans can coexist more harmoniously with artificial intelligence in the future. In short, understanding the core value of human creation, improving the aesthetic level of the public, and actively exploring how to appreciate artificial intelligence artworks will help address issues such as the lowering of the threshold of art.
References 1. Yang, Z., Xiang, D., Cheng, Y.: VR panorama mosaic algorithm based on particle swarm optimization and mutual information. IEEE Access PP(99), 1 (2020) 2. Lee, S.-H., Lee, S.-J.: Development of remote automatic panorama VR imaging rig systems using smartphones. Clust. Comput. 21(1), 1–11 (2017) 3. Lanier, J., Euchner, J.: What has gone wrong with the internet, and how we can fix it: an interview with Jaron Lanier. Res. Technol. Manag. 62(3), 13–20 (2019) 4. Theocharis, S., Giaginis, C., Alexandrou, P., et al.: Evaluation of cannabinoid CB1 and CB2 receptors expression in mobile tongue squamous cell carcinoma: associations with clinicopathological parameters and patients’ survival. Tumour Biol. 37(3), 3647–3656 (2016) 5. Oyewunmi, O.A., Taleb, A.I., Haslam, A.J., et al.: On the use of SAFT-VR Mie for assessing large-glide fluorocarbon working-fluid mixtures in organic Rankine cycles. Appl. Energy 163(Feb), 263–282 (2016) 6. Fuji, K.: Esoteric symbolism in animated film storytelling. Chin. Semiotic Stud. 14(3), 347– 370 (2018) 7. Eppler, E.: The reincarnation of animated film. Slovenske Divadlo 65(4), 331–344 (2018) 8. Hsu, F.C., Hsiang, T.W.: Factors affecting color discrepancies of animated film characters. J. Interdisc. Math. 21(2), 279–286 (2018) 9. Pollard, L.: ‘When Marnie was there’ an animated film, Studio Ghibli. Director: Hiromasa Yonebayashi, film written by Keiko Niwa, Hiromasa Yonebayashi Masashi Ando, 2014, released in UK 2016. Infant Obs. 19(3), 255–258 (2016) 10. Shibo, S.: Combination and research of two-dimensional animated short films and H5 viral marketing. IPPTA: Q. J. Indian Pulp Paper Tech. Assoc. 30(6), 166–173 (2018)
Application of BIM+VR+UAV Multi-associated Bridge Smart Operation and Maintenance Yu Peng1,2 , Yangjun Xiao1(B) , Zheng Li1 , Tao Hu1 , and Juan Wen1 1 Chongqing Chengtou Road and Bridge Administration Co., Ltd., Chongqing 400060, China 2 School of Civil Engineering, Chongqing University, Chongqing 400045, China
Abstract. In recent years, China has vigorously promoted bridge smart technology, but the technical integration of urban bridges is not yet ideal. As research on bridge smart operation and maintenance technology deepens, more and more bridge designers combine advanced technology with bridge engineering projects. Therefore, using BIM, VR and UAV technologies to implement multi-associated bridge intelligent operation and maintenance has become an important topic for academic and industry research. The purpose of this article is to study the application of BIM+VR+UAV multi-associated bridge intelligent operation and maintenance. The research is carried out from the perspective of multi-associated bridge operation and maintenance engineering. First, through literature research and systematic study of BIM, VR and UAV software, and using the complementary advantages these smart technologies exhibit once they are reasonably associated, a comparative analysis of the technical theory of BIM, VR and UAV is conducted, and a method for using the three technologies simultaneously in multiple associations is proposed. The research explores the characteristics, advantages and application value of BIM, VR and UAV technology, as well as the content and methods of cost management theory and schedule management theory, providing a theoretical basis for the research content. On the basis of summarizing the theories and application practices of BIM, VR and UAV technology at home and abroad, the research integrates the existing BIM model software system to combine existing bridge construction theory with BIM, VR and UAV technology, and proposes a BIM-based engineering virtual construction technology framework and application process. The experimental results show that the crack widths detected manually are 1.0 mm in area A, 0.9 mm in area B, 1.3 mm in area C, 1.5 mm in area D and 1.6 mm in area E; the largest difference from the actual crack width is 0.24 mm and the smallest is 0.04 mm, while the largest difference between the UAV measurement and the actual crack value is only 0.01 mm, indicating that the drone is feasible and more accurate in identifying the width of bridge cracks. Keywords: Multiple associations · Bridge engineering · Smart operation and maintenance · BIM technology · Virtual technology
1 Introduction In recent years, with the advancement of science and technology and the improvement of the level of informatization, manufacturing industries such as aviation, shipbuilding and automobiles have greatly improved production efficiency with the help of advanced production processes and information technology, leading industrial upgrading and raising the level of the industry [1, 2]. With China's rapid economic development, the improvement of the level of social industrialization, and the substantial increase in transportation volume, China is turning from a country with many bridges into a bridge power and is comprehensively promoting the accelerated development of smart bridges [3, 4]. At the same time, however, there are problems in the operation and maintenance of bridges: various emergencies and uncertain factors, such as earthquakes, floods and other natural disasters, have exacerbated the difficulty of bridge maintenance and management [5, 6]. It is therefore urgent to improve the management level of bridge operation and maintenance. How to use the concept of informatization and combine advanced BIM, VR and UAV technology to establish multi-associated bridge intelligent operation and maintenance is the current main research direction [7, 8]. Many scholars have conducted in-depth discussions on the application of multi-associated bridge smart operation and maintenance and have achieved good results. For example, Ding et al. imported the real-time data transmitted by temperature sensors into the BIM model and presented it visually to BIM users; the information can also be saved through IFC as an intermediate file and used by other tools [9, 10]. Kossakowski, based on an analysis of the IFC standard extension method, carried out research on BIM-based extension and visualization of bridge inspection defect and structural health monitoring information, and proposed the "surface point method" and "three-dimensional defect model parameterization" modeling methods to describe concrete cracks [11, 12]. Based on the characteristics of BIM, VR and UAV related software and actual engineering construction requirements, this research first builds a system framework for the application of BIM virtual construction technology, combined with a theoretical exploration of the BIM+VR+UAV multi-associated bridge intelligent operation and maintenance application. The research deeply analyzes the technical characteristics of BIM, VR and UAV, proposes the relevance between the three technologies, and explores how to use the BIM platform to manage the operation and maintenance of bridge projects. At the same time, it analyzes the technical characteristics of the new technology after multiple associations, establishes the key technical methods of the new technology, and constructs the technical process to realize multiple associations. Finally, by analyzing the content of traditional bridge operation and maintenance work, this research identifies the specific application points of the multi-associated technologies in bridge operation and maintenance. The multi-association technology and application points are verified through field tests, and the test results are compared with the traditional operation and maintenance effect and economy.
2 Application of BIM+VR+UAV Multi-associated Bridge Smart Operation and Maintenance 2.1 Traditional Operation and Maintenance Mode of Bridge Engineering The basic process of bridge engineering operation and maintenance management under the traditional mode is: the design institute completes the bridge structure design and delivers the construction blueprint of the construction unit. The construction unit carries out a series of constructions according to the construction blueprint, and records the progress, cost and construction plan of the bridge project. After the completion and acceptance of the bridge project, it will be delivered to the operation management unit for use and management. Since most bridge projects are managed by three parties in the design, construction, and operation and maintenance, and in the intermediate delivery process, the information and materials of the bridge will inevitably cause a series of losses and damages. Since there is no unified storage platform for data storage in the bridge operation and maintenance management based on the traditional model, these data are likely to be permanently lost, making the strengthening and expansion of the bridge in the future unfounded. 2.2 Advantages of BIM+VR+UAV Multi-related Bridge Smart Operation and Maintenance Application (1) Real-time monitoring and early warning Intelligent bridge operation and maintenance management based on the multiple associations of BIM+VR+UAV can monitor the stress and strain of the bridge structure in real time. If the monitored stress-strain value exceeds the limit warning value, the system will automatically issue an early warning report; manual inspections and maintenance are more scientific and technological. Equipped with wireless mobile devices for bridge inspection and maintenance personnel to record and upload bridge diseases through bridge maintenance applications. (2) Integrated information database The traditional way of calculating engineering quantities often wastes a lot of time, and BIM technology can effectively solve this problem. In the process of modeling, the relevant designers have already connected the established components with their corresponding attributes. After the model is established, the three-dimensional model engineering quantity is automatically generated according to the calculation rules designed in advance in the software. Traditional calculation efficiency is greatly improved. And this kind of computer-generated data has more optimistic results in accuracy than manual calculations, avoiding errors caused by human errors, and is more objective and accurate. The collaborative platform BIM+VR+UAV multi-relevant bridge project can solve the stability and global sharing issues between distributed data and engine data with different structures, and support engine management information, manage and share unique data sources. Each dynamic project participant in the project life cycle uses a single data source to ensure the accuracy and stability of the data. The problem of “information separation” between systems that originally appeared on the basis of information exchange has been resolved.
(3) Reduce operation and maintenance costs Use the detailed information of the project life cycle and different levels of information, processes and resources to build a complete data model project. The construction project model can be shared by everyone involved in the project. Different professional teams achieve sustainable development together by reducing assets and reducing costs. In order to promote comprehensive building life cycle management, evaluate the engineering performance, quality and safety of each step of the project area, so as to plan and adjust the cost, and further analyze, predict and control the total cost of the project. (4) Accurate plan to speed up settlement In traditional cost management, refined management is always difficult to achieve. The main reason is the problem of resource planning. In the traditional management and control mode, most engineering data sources are based on experience, and there is no way to obtain them quickly and accurately. The emergence of BIM has just solved this kind of problem. The acquisition of information resources has become more efficient and rapid with the help of BIM technology. This high-quality information source also provides more accurate guarantees for the formulation of resource plans and quantifies the supply of resources, thereby wasting resources in the construction process with complex variables. The phenomenon of gradual reduction, realizing quota requisition, provides a powerful control guarantee for resource consumption, not only that, but also reduces the pressure on logistics and inventory reserves. (5) Speed up payment of progress payment and settlement of completion In a sense, it is to reduce time costs. The pre-simulation of BIM technology has a certain control effect on the appearance of engineering changes in the construction process to a certain extent. The building information model also contains the most comprehensive information data, avoiding disputes and uncertainties caused by unclear information data. 2.3 BIM Modeling and Stress Monitoring Information Visualization Display Algorithm Based on the Aft Algorithm of Pre-arranged Points The accuracy of finite element calculation and analysis has a great relationship with the shape of triangular meshes. Therefore, in order to ensure that the divided triangular elements can meet the requirements of relatively ideal calculation accuracy, the rule of arrangement should be set in advance, and then according to the rule. Equation (1) is the formula for evaluating the quality of triangular elements in the AFT algorithm: λ=
S_ABC / (AB² + BC² + AC²)   (1)
Among them, λ is the state coefficient for evaluating the quality of the triangle, S_ABC is the area of the triangle, and AB, BC and AC respectively represent the lengths of the three sides of the triangle. In the AFT algorithm, an equilateral triangle has the best quality.
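To make Eq. (1) concrete, the following minimal sketch computes the quality coefficient λ of a triangular element from its vertex coordinates; the sample triangles and the helper name are illustrative and not part of the AFT implementation described above.

```python
import math

def triangle_quality(p1, p2, p3):
    """Quality coefficient of a triangular element as in Eq. (1):
    lambda = area / (AB^2 + BC^2 + AC^2)."""
    (x1, y1), (x2, y2), (x3, y3) = p1, p2, p3
    # Squared side lengths AB^2, BC^2, AC^2
    ab2 = (x2 - x1) ** 2 + (y2 - y1) ** 2
    bc2 = (x3 - x2) ** 2 + (y3 - y2) ** 2
    ac2 = (x3 - x1) ** 2 + (y3 - y1) ** 2
    # Triangle area via the shoelace formula
    area = abs((x2 - x1) * (y3 - y1) - (x3 - x1) * (y2 - y1)) / 2.0
    return area / (ab2 + bc2 + ac2)

# An equilateral triangle yields the largest value of this metric,
# consistent with the statement above.
equilateral = [(0.0, 0.0), (1.0, 0.0), (0.5, math.sqrt(3) / 2)]
skinny = [(0.0, 0.0), (1.0, 0.0), (0.5, 0.05)]
print(triangle_quality(*equilateral))  # ~0.144 (the maximum of this metric)
print(triangle_quality(*skinny))       # much smaller for a degenerate element
```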
In the AFT algorithm, the points can be arranged in advance according to the following formula to ensure that the generated triangular elements have better quality:
X_ij = X_min + l/2 + j × l   (i = 2k + 1, j = 0, 1, 2, ..., N)
X_ij = X_min + j × l         (i = 2k,     j = 0, 1, 2, ..., M)   (2)
Among them, X_min represents the coordinate extreme value of the node in the target area, M and N respectively represent the number of columns and rows of the node array, and l represents the side length of the triangular unit.
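The staggered arrangement of Eq. (2) can be sketched as follows; the vertical spacing of l·√3/2 between rows is an added assumption (it yields near-equilateral elements) and is not stated in the formula above.

```python
import math

def prearranged_points(x_min, y_min, n_rows, n_cols, l):
    """Staggered grid of points following Eq. (2): odd-numbered rows (i = 2k+1)
    are offset by l/2 so the triangles built on the grid are close to equilateral."""
    row_height = l * math.sqrt(3) / 2  # assumed vertical spacing for equilateral elements
    points = []
    for i in range(n_rows):
        offset = l / 2 if i % 2 == 1 else 0.0  # rows with i = 2k+1 are shifted by l/2
        for j in range(n_cols):
            points.append((x_min + offset + j * l, y_min + i * row_height))
    return points

grid = prearranged_points(0.0, 0.0, n_rows=4, n_cols=5, l=1.0)
print(len(grid), grid[:3])
```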
3 Application of BIM+VR+UAV Multi-Related Bridge Smart Operation and Maintenance 3.1 Research Methods (1) Research methods on key technologies of multi-linked BIM+VR+UAV By consulting related literature, and analyzing the equipment functions of BIM, VR, and UAV, the method of “BIM+VR” and “VR+UAV” one by one correlation analysis is adopted to construct the correlation point of the three technologies. (2) Research methods for bridge operation and maintenance application points of multirelevant technologies Adopt the methods of research interviews and data review to understand the scope of implementation in specific bridge operation and maintenance, and sort out the points of integration between the operation and maintenance content and the application of multiple related technologies. (3) Research methods for the feasibility of the project Using the method of experimental verification, test the operation effect of the new BIM+VR+UAV multi-association technology in the bridge operation and maintenance, analyze the operation effect, and provide specific measures for optimization.
4 Data Analysis of Research on BIM+VR+UAV Multi-relevant Bridge Smart Operation and Maintenance Application 4.1 Data Analysis of Bridge Crack Width Identification in Bridge Intelligent Operation and Maintenance of BIM+VR+UAV Multi-association In order to verify the feasibility of this research in the identification and location of bridge cracks, and to analyze its accuracy, a research on the identification of cracks in the airborne imaging of UAVs was carried out. First, the traditional manual inspection of the crack widths in the A, B, C and D areas of the bridge is carried out to obtain a report, and then the UAV is used to observe and locate the cracks. The experimental data is shown in Table 1. It can be seen from Fig. 1 that the crack width in area A detected by manual crack width detection is 1.0 mm, the crack width in area B is 0.9 mm, the crack width in area
Table 1. Comparison between UAV imaging and manual detection of bridge crack width (unit: mm)

Area      Crack width   Manual inspection   Drone detection
Area A    1.24          1.00                1.25
Area B    0.95          0.9                 0.95
Area C    1.24          1.3                 1.25
Area D    1.46          1.5                 1.46
Area E    1.66          1.6                 1.66
Fig. 1. Comparison between UAV imaging and manual detection of bridge crack width (unit: mm)
C is 1.3 mm, the crack width in area D is 1.5 mm, and the crack width in area E is 1.6 mm. The largest difference between manual detection and the actual crack width is 0.24 mm and the smallest is 0.04 mm, while the largest difference between the UAV measurement and the actual crack value is only 0.01 mm, indicating that the drone is feasible and more accurate in identifying the width of bridge cracks. 4.2 BIM+VR+UAV Multi-associated Bridge Intelligent Operation and Maintenance of Bridge Health Monitoring Data Analysis BIM+VR+UAV multi-associated bridge intelligent operation and maintenance can realize the function of self-diagnosis of the detected data in bridge health monitoring.
In order to verify its feasibility, take stress data monitoring as an example. The detected stress data are shown in Table 2. As shown, within the range of the warning value, this technology can realize the self-diagnosis of detection data and the function of real-time warning.

Table 2. Stress test data

Area   Detection parameters   Parameter value   Warning value   Whether to warn
A      Inclination X          10                11.3            No
A      Inclination Y          −6                −3.7            No
B      Inclination X          11                11.3            No
B      Inclination Y          −6                −3.7            No
C      Inclination X          10                11.3            No
C      Inclination Y          −7                −3.7            No
Fig. 2. Stress test data
As shown in Fig. 2, if the detection value is within the warning range, there will be no warning. If the pre-warning value is exceeded, an alarm will be issued. It can be seen that the technology studied in this paper avoids invalid interference information due to the complexity and randomness of the data, so as to ensure the accuracy of the bridge health detection results. At the same time, after the obtained detection data is stored in the background database, the early warning value is set according to industry standards.
When problems such as bridge component damage or bridge performance degradation occur, the system will send an alarm message to the computer to realize the real-time alarm function.
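A minimal sketch of the self-diagnosis rule described above is given below; the record layout and the use of a warning band are assumptions for illustration, and the actual thresholds are set from industry standards as stated.

```python
def check_readings(readings):
    """Minimal self-diagnosis rule: raise an alarm when a monitored value
    falls outside its pre-set warning band [lower, upper]."""
    alarms = []
    for area, parameter, value, lower, upper in readings:
        if not (lower <= value <= upper):
            alarms.append((area, parameter, value))
    return alarms

# Hypothetical inclination readings (degrees) with assumed warning bands;
# in the real system the thresholds come from the background database.
readings = [
    ("A", "Inclination X", 10.0, -11.3, 11.3),
    ("B", "Inclination X", 12.1, -11.3, 11.3),  # outside the band -> alarm
    ("C", "Inclination Y", -3.7, -6.0, 6.0),
]
print(check_readings(readings))  # [('B', 'Inclination X', 12.1)]
```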
5 Conclusions This research provides a multi-relevant intelligent bridge operation and maintenance application in the field of bridge operation and maintenance. This multi-relevant intelligent bridge operation and maintenance application is based on BIM, VR and UAV technology and proposes a treatment plan for bridge operation and maintenance. It is used for rapid bridge modeling, construction management, durability monitoring, cost analysis and forecasting, and bridge inspections, and thus has multiple functions. Through the combined use of BIM, VR and UAV technologies, the efficiency of bridge operation and maintenance is greatly improved. At the same time, areas that cannot be monitored by monitoring equipment and inspectors can also be monitored by drones.
References 1. Zhang, X., Zhang, M., Hu, Q.: Research on the application of artificial intelligence in operation and maintenance for power equipment. In: IOP Conference Series Earth and Environmental Science, vol. 617, p. 012001 (2020) 2. Peng, Y., Liu, X., Li, M., et al.: Sensing network security prevention measures of BIM smart operation and maintenance system. Comput. Commun. 161 (2020) 3. Matarazzo, T.J., et al.: Crowdsensing framework for monitoring bridge vibrations using moving smartphones. Proc. IEEE 106(4), 577–593 (2018) 4. Wit, G., Adamczak, A., Krampikowska, A.: Bridge management system within the strategic roads as an element of smart city. In: IOP Conference Series: Earth and Environmental Science, vol. 214, no. 1, p. 012054 (8pp) (2019) 5. Muhammad, A.N., Aseere, A.M., Chiroma, H., Shah, H., Gital, A.Y., Hashem, I.A.T.: Deep learning application in smart cities: recent development, taxonomy, challenges and research prospects. Neural Comput. Appl. 33(7), 2973–3009 (2020) 6. Elhattab, A., Uddin, N., Obrien, E.J.: Extraction of bridge fundamental frequencies utilizing a smartphone MEMS accelerometer. Sensors 19(14), 3143 (2019) 7. Zhu, L., Zhou, Q., et al.: Identification and application of six-component aerodynamic admittance functions of a closed-box bridge deck. J. Wind Eng. Ind. Aerodyn. J. Int. Assoc. Wind Eng. 172, 268–279 (2018) 8. Zhao, P., He, X.: Research on dynamic data monitoring of marine bridge steel structure building information based on BIM model. Arab. J. Geosci. 14(4), 1–9 (2021). https://doi. org/10.1007/s12517-021-06601-w 9. Ding, K., Shi, H., Hui, J., et al.: Smart steel bridge construction enabled by BIM and Internet of Things in industry 4.0: a framework 1–5 (2018) 10. Integrating BIM and IoT for smart bridge management. In: IOP Conference Series: Earth and Environmental Science, vol. 371, no. 2, p. 022034 (10pp) (2019) 11. Kossakowski, P.G.: Recent advances in bridge engineering – application of steel sheet piles as durable structural elements in integral bridges. In: IOP Conference Series: Materials Science and Engineering, vol. 507, no. 1, p. 012003 (5pp) (2019) 12. Liu, Y., Xin, H., Liu, Y.: Experimental and analytical study on tensile performance of perfobond connector in bridge engineering application. Structures 29(5), 714–729 (2021)
Intelligent Dispatching Logistics Warehouse System Method Based on RFID Radio Frequency Data Processing Technology Zhe Song(B) Shandong Management University, Jinan 250357, Shandong, China
Abstract. With the rapid development of the Internet of Things, the application of RFID radio frequency data processing technology has become more extensive, and its advantages have become more prominent. With the increase of warehousing management business, the requirements for the timeliness and accuracy of enterprise warehousing management are increasing, and the existing warehousing management model can no longer meet the requirements of logistics management. The purpose of this article is to study an intelligent dispatching logistics warehouse system method based on RFID radio frequency data processing technology. This article analyzes the business process of the intelligent dispatch logistics warehouse system, studies and sorts out the business process of the RFID warehousing logistics system in the Internet of Things environment, reengineers the warehousing logistics management business based on Internet of Things RFID technology, and adds function points such as container Internet of Things gateway matching and real-time management of cargo locations. It studies and improves the service form of RFID technology in the warehousing logistics management business, and optimizes and designs core modules such as intelligent cargo location management and supplier management. Using RFID technology, a stable and reliable warehousing logistics management system suitable for Internet of Things-related management is realized, which helps warehousing and logistics enterprises move onto the Internet and enhance their competitiveness. The experimental results show that, in terms of performance, the export data operation has the longest response time, 8.02 s, while the response time of other operations is less than 5 s. It can be seen that it is still necessary to improve the number of concurrent users the system supports, the operation response time, and the memory and CPU usage of the application server and database server. Keywords: RFID technology · Logistics management · Intelligent logistics · Warehouse system
1 Introduction Traditional warehousing and logistics companies mainly use barcode recognition technology. Due to technical limitations, the collection of cargo information still requires
manual contact, the degree of automation is low, and the efficiency of cargo information management is very low. This model can no longer meet the needs of the existing high-speed logistics system [1, 2]. The implementation of the system enables enterprises to use information tools for warehouse management, to ensure the accuracy of data, and to reduce the rate of operational errors [3, 4], in order to adapt to the ever-increasing warehousing business needs, adapt the new system to the new model, and improve the timeliness and accuracy of logistics warehousing [5, 6]. It is necessary to solve the problems existing in the process of warehouse management, which can reduce the cost of enterprise warehousing management, improve customer satisfaction with logistics services, and improve the efficiency of warehouse management [7, 8]. With the help of RFID technology, enterprises can realize whole-process and comprehensive management of goods from the beginning of storage through scientific coding and modern RFID electronic tags [9, 10]. In the research on intelligent dispatching logistics warehouse system methods based on RFID radio frequency data processing technology, many domestic and foreign scholars have carried out studies and achieved good results. Xiang proposed a variety of scheduling optimization methods to improve production quality and reduce production cost and inventory, which have been applied in the actual production process and achieved good results [11]. Costa designed and implemented the IDEF0 functional model of the flexible manufacturing execution system according to the production situation of the flexible manufacturing enterprise's workshop, proposed a two-layer framework architecture to meet the functional requirements, established a multi-agent model, and realized dynamic scheduling [12]. This article analyzes the business process of the intelligent dispatch logistics warehouse system, studies and sorts out the business process of the RFID warehousing logistics system in the Internet of Things environment, reengineers the warehousing logistics management business based on Internet of Things RFID technology, and adds function points such as container Internet of Things gateway matching and real-time management of cargo locations. It studies and improves the service form of RFID technology in the warehousing logistics management business, and optimizes and designs core modules such as intelligent cargo location management and supplier management. Using RFID technology, a stable and reliable warehousing logistics management system suitable for Internet of Things-related management is realized, which helps warehousing and logistics enterprises move onto the Internet and enhance their competitiveness.
2 Research on the Method of Intelligent Dispatching Logistics Warehouse System Based on RFID Radio Frequency Data Processing Technology 2.1 Business Process of Intelligent Scheduling Logistics Warehouse System (1) Analysis of overall business process Electronic tags are the core part of RFID technology. The electronic tags must be made and placed before materials are put into the warehouse. The materials are scanned by handheld terminals when they are put into the warehouse. The materials
enter the warehouse and change the inventory status in the system in real time. The inventory uses wireless reading. The writer can automatically obtain the specific location of the material. When the material needs to be collected, the demand department needs to initiate and review the application. After the review is passed, the outbound inspection and outbound operations are completed. The forklift can automatically pick up the goods and complete the outbound. During operation, when leaving the warehouse, the outgoing information is automatically read through the reader, and the inventory information is changed in real time. The whole process, through the mobile reading and writing equipment and wireless terminals inside the warehouse, can understand the incoming and outgoing conditions of materials in real time, you can get the inventory of materials in real time, and realize the automation of the whole process of warehouse management. (2) Analysis of business process of warehousing management When the warehouse is insufficient for a certain material inventory, the business requirements department will initiate material procurement requirements, formulate an inbound demand form (plan), review by relevant departments, and if the demand is reasonable, the review is passed, and the purchasing personnel will purchase materials according to the demand form. If the warehousing demand is unreasonable, it will be rejected and the plan will be re-made. Finally, determine the quantity of the goods. If the actual purchased goods are inconsistent with the purchase demand order or the quantity is incorrect, you can refuse to enter the warehouse and the process ends. When the quantity is consistent and the information such as the model specification is consistent, enter the purchase material invoice information, and generate the material entry. The warehouse task is to formulate electronic labels and print them, paste them on the materials, generate materials inventory and put the goods into the inventory in a specific location. After entering the warehouse, obtain the material information through the wireless reading and writing equipment in the warehouse or the handheld reading and writing equipment. (3) Analysis of the outbound management business process When materials are out of the warehouse, the business demand department must first initiate the material requisition or outbound demand, write out the outbound demand form or demand plan, and submit it to the relevant leaders of the department for review. After the review is passed, it will be carried out according to the outbound strategy formulated by the warehouse manager. Choose, determine whether to use the first-in first-out or last-in-last-out exit strategy. After determining, perform the exit operation, first determine the location of the material for picking, find the material through the system, and use the material electronic tag reader installed in the warehouse Determine the location of the materials to facilitate material picking. The picker scans the outbound materials through the PDA handheld terminal to further check and confirm the accuracy of the outbound materials. If the materials are wrong, return the materials and pick them again. If the materials are confirmed to be correct, the goods Outbound, the system records the outbound records, and the inventory information is updated in real time. After the outbound, the goods need to be loaded intact and shipped out. 
Once the goods are damaged, the damaged materials need to be processed.
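The outbound flow just described (request, review, RFID-based location, PDA scan check, real-time inventory update) can be sketched as follows; all class, field and tag names are illustrative assumptions rather than the system's actual data model.

```python
from dataclasses import dataclass

@dataclass
class Item:
    tag_id: str       # RFID electronic tag, one per goods item
    location: str     # current cargo location in the warehouse

class Warehouse:
    def __init__(self, items):
        self.inventory = {i.tag_id: i for i in items}

    def locate(self, tag_id):
        # In the real system the location comes from fixed readers in the warehouse
        return self.inventory[tag_id].location

    def outbound(self, tag_id, scanned_tag_id):
        """Confirm the picked item against the PDA scan, then update inventory."""
        if tag_id not in self.inventory:
            raise KeyError("item not in stock")
        if scanned_tag_id != tag_id:
            return "wrong item - return and pick again"
        del self.inventory[tag_id]          # inventory updated in real time
        return f"outbound recorded for {tag_id}"

wh = Warehouse([Item("TAG-001", "A-01-03"), Item("TAG-002", "B-02-07")])
print(wh.locate("TAG-001"))                 # A-01-03
print(wh.outbound("TAG-001", "TAG-001"))    # outbound recorded for TAG-001
```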
(4) Analysis of business process of moving library management Warehouse transfer management is the process of transferring materials from the original location to another new location. First, the warehouse manager initiates a warehouse transfer application, formulates and fills in a warehouse transfer demand form, and the warehouse management leader reviews the demand form. After the review is passed, The picker is allowed to move the warehouse, otherwise the request for moving the warehouse is rejected. The moving process must confirm that there are goods in the required position of the moving warehouse, and at the same time, confirm that the moving destination has a location to place the materials to be moved. After confirming the infinite card reader in the warehouse automatically recognizes the location change of materials, and records the movement of the warehouse on the PC side, and also records the movement of the warehouse on the PDA side. 2.2 System Function Requirements (1) Analysis of basic information management needs Through business research and analysis of enterprise warehousing management, the normal operation of warehousing management requires the support of basic information, which mainly includes basic information such as customers, goods, locations, and employees for maintenance. Customer information management can realize customer information Add, maintain, and activate/deactivate the status of customer contracts. After the cargo information is maintained in advance, the basic information can be called directly during the warehousing or outgoing operation, without having to add it repeatedly. At the same time, warehouse location maintenance is carried out in the system, which is convenient for calling in, out of warehouse and moving warehouse. It mainly realizes the functions of creating, modifying, deleting, querying, exporting and importing warehouse location information. Employee information management is mainly to maintain the internal employee information of the enterprise. When the system is initialized, basic information such as employee names and permissions are imported in batches. After that, new employees can perform operations such as adding, modifying, deleting, and querying. (2) Demand analysis of inventory management Inventory management is to provide companies with a platform to understand the current storage situation of goods in the warehouse in real time, the situation of each goods storage location, and the real-time understanding of the situation of outbound, inbound and relocated. Inventory query provides fuzzy query functions based on cargo number, cargo location, customer name, etc., and can export the query results, and streamline query realizes the receipt, delivery and transfer status of the goods according to the goods. (3) Analysis of system requirements The system setting function is mainly to carry out basic setting functions for the system, including bar code setting, authority setting, interface setting and backup setting. Barcode setting is to set and print the electronic labels required by the system. The electronic labels correspond to the goods numbers one by one to ensure that each goods has a unique identification. Permission setting is mainly to assign
permissions to employees’ positions, each position is defined as a permission group, so that only the permissions need to be assigned to the position, and the employees under this position have the permission to perform corresponding operations, add, modify and delete permission groups. The interface setting is to complete the setting and maintenance of the background picture. The backup setting implements data backup and restoration, etc. 2.3 System Design (1) Overall system architecture design The development platform of the intelligent warehouse management system adopts the J2EE platform, the development language is JAVA, the SpringMVC architecture is used, and the database is a set of intelligent warehouse management system based on the B/S model of the three-tier architecture of the Oracle database. First of all, the display layer is mainly operated by business departments, warehouse administrators, pickers and other roles. They apply for various businesses through the warehouse management system and PDA terminals. At the same time, the results of each request are displayed through the display layer, in accordance with the established mode of the system Show it to the operating user. The application layer is mainly the application management part. It receives the system function operation request initiated by the display layer. After the application layer is processed, it requests the data layer to process data, and returns to the application layer and finally displays it to the operating user at the display layer to complete the request, processing and feedback. The data layer is mainly used to store the data required by the system, including employee data information, goods, demand orders, storage locations, customer information, barcode information, and inventory information. (2) System technical architecture design The entire technical architecture of the RFID-based intelligent warehouse management system is divided into three parts, which are the application software layer of the intelligent warehouse system, the RFID middleware layer and the RFID hardware layer. The specific work flow is that in the warehouse, there are RFID unlimited readers in different locations, which can read the electronic tags to obtain the basic information of the goods and the contents of the cargo location information, and feed back to the RFID middleware through the network, which is carried out by the middleware. Cargo information is processed comprehensively, and the acquired information is finally sent to the intelligent warehouse management system. After the application system obtains the information, it will update the data of the inbound, outbound, and relocated links in real time according to the nature of the information. (3) System network deployment structure design The RFID-based intelligent warehouse management system is mainly B/S architecture software. The system application server and database server are deployed in the internal workstations of the enterprise. When accessing the server, it needs to be accessed through the firewall. The system is in the warehouse in, out of the warehouse and the inventory part. All are equipped with operating computers, and
the computers in each link are dedicated. When entering the warehouse, the operation is carried out through the warehousing computer. It is equipped with an RFID printer, RFID handheld terminal and a labeling machine to complete the warehousing operation and when leaving the warehouse, at the outbound location, there are outbound computers, wireless networks, RFID forklifts and RFID handheld terminals. After the outbound is completed, the inventory is deployed with RFID shelves, inventory computers, wireless networks and intelligent navigation. All links must have an early warning computer for real-time early warning. 2.4 Energy Transmission from Electronic Tag to Reader The energy returned through the electronic tag is proportional to its radar cross section A. The energy returned by the tag is: PBack = Sσ =
P_Tx·G_Tx·σ / (4πR²) = EIRP·σ / (4πR²)   (1)
The power density of the electronic tag returning to the reader is:
S_Back = P_Back / (4πR²) = P_Tx·G_Tx·σ / (4πR²)² = EIRP·σ / (4πR²)²   (2)
Effective area of the receiving antenna:
A_W = λ²·G_Rx / (4π)   (3)
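A numeric illustration of Eqs. (1)–(3) is given below; the EIRP, frequency, range, radar cross section and receive gain values are assumed figures for demonstration only.

```python
import math

def backscatter_link(eirp_w, r_m, sigma_m2, g_rx, freq_hz):
    """Evaluate Eqs. (1)-(3): power reflected by the tag, power density back at
    the reader, effective antenna area, and the resulting received power."""
    wavelength = 3e8 / freq_hz
    p_back = eirp_w * sigma_m2 / (4 * math.pi * r_m ** 2)    # Eq. (1)
    s_back = p_back / (4 * math.pi * r_m ** 2)               # Eq. (2)
    a_w = wavelength ** 2 * g_rx / (4 * math.pi)             # Eq. (3)
    p_rx = s_back * a_w                                      # power captured by the reader antenna
    return p_back, s_back, a_w, p_rx

# Assumed UHF RFID figures: 4 W EIRP, 915 MHz, 3 m range, sigma = 0.005 m^2, G_Rx = 4
print(backscatter_link(eirp_w=4.0, r_m=3.0, sigma_m2=0.005, g_rx=4.0, freq_hz=915e6))
```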
3 Experimental Research on Intelligent Dispatching Logistics Warehouse System Method Based on RFID Radio Frequency Data Processing Technology 3.1 Experimental Subjects and Methods This experiment takes the intelligent dispatch logistics warehouse as the experimental object, and combines the RFID radio frequency data processing technology to conduct experimental research on the operation of the system functions, and analyzes the system function problems, repairs, and response time of operations. Through the actual test method, the experimental results are obtained. 3.2 Data Collection Through group surveys, tasks are assigned, various functions and response times are actually tested according to the task arrangement, and the test data are collected and sorted to obtain the final experimental data.
4 Experimental Research and Analysis of Intelligent Dispatching Logistics Warehouse System Method Based on RFID Radio Frequency Data Processing Technology 4.1 Functional Experimental Research of Intelligent Dispatching Logistics Warehouse System In this experiment, the intelligent dispatch logistics warehouse system was used as the experimental object to test the various functional problems of the system in a month. The experimental research results are shown in Table 1:

Table 1. Summary of test questions

System function                 Total number of issues found   Issues fixed
Basic information management    9                              9
Warehouse management            27                             23
Outbound management             18                             17
Move library management         19                             18
Inventory management            11                             9
System settings                 9                              6
Fig. 1. Summary of test questions
As shown in Fig. 1, over the past month the most problems occurred in warehouse management, with 27 found and 23 repaired; followed by outbound management and move library management, each with about 18; the fewest problems were in basic
information management and system settings, with 9 problems each; the basic information management problems have all been repaired, while the system settings still have 3 problems that have not been repaired. Therefore, it is necessary to increase efforts to apply RFID technology to make the system functions more stable. 4.2 Analysis of Intelligent Dispatching Logistics Warehouse System Configuration In this experiment, the configuration of the system is studied, and the response time of various operations of the system is analyzed. The experimental results are shown in Table 2:

Table 2. Operating system response time analysis

Operation          Response time (s)
Input data         1.63
Change the data    2.91
Delete data        3.48
Export data        8.02
As shown in Fig. 2, the operation response time of exporting data is the longest, 8.02 s, and the response time of other operations is less than 5 s. It can be seen from this that in terms of performance, it is still necessary to strengthen the number of concurrent
Fig. 2. Operating system response time analysis
users of the system, operation response time, and the memory usage and CPU usage of the application server and database server.
5 Conclusion In the research of the intelligent dispatching logistics warehouse system, RFID technology has great advantages for the identification of cargo information. In the requirements analysis stage, an in-depth analysis and understanding of the enterprise's warehouse management business is carried out and the basic functions of the system are clarified, and finally an intelligent warehouse management system that meets user needs and has stable performance is realized. The system realizes the informatization of the intelligent warehouse management business and the organic combination of software and hardware equipment, and ensures that the data of each link of the goods, from storage, inventory and transfer to outbound, can be recorded and updated in real time to guarantee data integrity and accuracy. Managers can understand the inventory situation in time, grasp the inventory of each kind of goods in real time, and adjust the enterprise's purchase, sales and inventory strategy promptly according to the real-time inventory data. With the help of RFID technology, enterprises can realize whole-process and comprehensive management of goods from the beginning of storage through scientific coding and modern RFID electronic tags. The system can record the shelf life, basic information, cargo location, quantity, specification parameters and so on of the goods. In this way, the system gives early warnings according to the set time nodes, and the staff can quickly locate the cargo position, so the time staff spend on finding goods is greatly reduced and operational errors almost never occur again, which greatly improves the efficiency of the company's overall warehouse management.
References 1. Liu, M., Ma, J., Lin, L., Ge, M., Wang, Q., Liu, C.: Intelligent assembly system for mechanical products and key technology based on internet of things. J. Intell. Manuf. 28(2), 271–299 (2014) 2. Vaga, M., Galajdová, A., Duan, I., et al.: Wireless data acquisition from automated workplaces based on RFID technology. IFAC-PapersOnLine 52(27), 299–304 (2019) 3. Li, W., Xu, J., Niu, W.: A railway warehouse information acquisition system based on passive RFID tag. Int. J. Simul. Syst. 17(28), 25.1–25.5 (2016) 4. Kalidhasan, M., Chinna, P.R., Srinivasan, K.: Impact of radio frequency identification (RFID) technology on logistics and supply chain efficiency. Restaur. Bus. 118(9), 435–444 (2019) 5. Geng, J., He, Z.: Innovation and development strategy of logistics service based on Internet of Things and RFID automatic technology. Int. J. Future Gener. Commun. Netw. 9(12), 251–262 (2016) 6. Oner, M., Budak, A., Ustundag, A.: RFID-based warehouse management system in wool yarn industry. Int. J. RF Technol. Res. Appl. 8(4), 165–189 (2018) 7. Liu, Q.: Automated logistics management and distribution based on RFID positioning technology. Telecommun. Radio Eng. 79(1), 17–27 (2020)
8. Ukasik, Z., Ulatowski, B., Ukasik, U.: The usage of RFID technology using big data in logistics processess. AUTOBUSY Technika Eksploatacja Systemy Transportowe 19(12), 780–782 (2018) 9. Brandao, F.B., Joao, C., Schwanke, D., et al.: RFID technology as a life cycle management tool in the liquefied petroleum gas industry. IEEE Lat. Am. Trans. 16(2), 391–397 (2018) 10. Tam, K.W., Hou, F., Dai, N., et al.: Guest editorial special issue on IEEE RFID-TA 2018 conference. IEEE J. Radio Freq. Identif. 3(3), 119–120 (2019) 11. Xiang, X.: Research of a pharmaceutical enterprise warehouse management system based on RFID technology. Sci. Res. 4(2), 43–47 (2016) 12. Costa, C.D., Campos, M.M.D.: Implementation of a RFID technology-based automatic traceability system for industry 4.0. Eur. J. Eng. Res. Sci. 4(8), 15 (2019)
Smart Travel Route Planning Considering Variable Neighborhood Ant Colony Algorithm Gang Zhao(B) School of Tourism Management and Services, Chongqing University of Education, Chongqing 400065, China
Abstract. Given the lack of autonomy in current travel route planning and the difficulty of meeting the individual demands of tourists, a smart travel route planning algorithm based on the variable neighborhood ant colony algorithm is proposed. Firstly, models for the set of attractions of interest and for characteristic inflection points are established. The variable neighborhood ant colony is then determined and combined to design a smart travel route planning algorithm based on shortest path planning, so that tourists can obtain smart planning of the optimal travel route after selecting scenic spots autonomously or letting the intelligent machine select them. The numerical example demonstrates that the proposed algorithm can output the route with the shortest distance while complying with the general rules of the travel ferry process, meeting the individual demands of tourists and allowing them to obtain optimal motive interest satisfaction. Keywords: Variable neighborhood ant colony · Smart planning · Travel route · Motive interest
1 Introduction Before tourists arrive at an unfamiliar tourist city, they need to master the most representative or most interesting scenic spots in the tourist city, especially tourist geographic information and surrounding service information, and plan their travel routes [1, 2] so as to spend the least travel cost in the shortest time and acquire the optimal travel motive interest satisfaction [3, 4]. Hence, smart travel path planning is a hot topic in the field of smart travel study. Tourists have different levels of knowledge of scenic spots. In particular, tourists who do not know the tourist city usually use an intelligent recommendation system to select a travel route, or take the recommended routes from travel agencies [5, 6], travel websites and travel books as references [7, 8], passively accepting the planned route; the degree of personalization is low. This can easily lead to situations where the recommended attractions and routes fail to meet the individual demands of tourists and reduce the satisfaction tourists obtain from their motive interests. Moreover, when the intelligent recommendation system plans the travel route, the preference of tourists for attractions of interest is not considered; usually only the shortest distance is taken as the objective, while other travel geographic information is seldom considered, so
the recommended optimal route may not be the one the tourist considers optimal. Given the current situation of travel route planning demand, two problems need to be solved. Firstly, the recommended scenic spots should not only be representative but should also meet the interests of tourists to the greatest extent. Secondly, when the optimal travel route is planned based on the shortest distance, the city service system directly related to tourists' travel must be fully integrated to maximize tourist satisfaction. In this paper, the variable neighborhood ant colony algorithm is used to establish a smart travel route planning algorithm based on models of attractions of interest and characteristic inflection points. Urban travel geographic information services are combined for the smart planning of the optimal travel route, realizing the combination of tourists' individual demands, the shortest route and convenient geographic information services, and maximizing the motive interests of tourists.
2 Variable Neighborhood Ant Colony Algorithm One of the core objectives of smart travel is to develop individual travel routes for tourists so as to maximize their motive interests, maximize tourists' satisfaction with the developed travel routes and plans, and enhance the image of travel cities. The prerequisite of planning individual travel routes is to determine the attractions of interest that suit the tourists' own psychological demands and travel plans, so the classification and selection method of the attractions is the key. Providing users with a convenient and visualized selection platform for scenic spots, taking the scenic spots in the urban area as its objects, is an essential prerequisite for planning a smart travel plan. Since the study objects and the classification criteria of scenic spots differ, the classification criteria and related definitions of scenic spots are given here based on the specific demands of tourists and the attributes of the scenic spots. Definition 1: The set of urban characteristic scenic spots Φ. The set of all scenic spots within the urban space of a tourist city, from which the intelligent machine selects randomly or tourists extract subjectively, is defined as the set of urban characteristic scenic spots, denoted by Φ. Definition 2: A subset of urban characteristic attractions Φi. A subset of the urban characteristic attractions set divided by a certain criterion R within the urban space of a tourist city is defined as a subset of characteristic urban attractions, and each subset represents a category of attractions, represented by Φi, where i ∈ (0, p) ∈ Z+, and p is the number of types of characteristic urban attractions defined by the criterion R, that is, the number of urban characteristic attraction subsets, where p ∈ (0, maxp) ∈ Z+. Definition 3: City characteristic scenic spot element φj. Any single scenic spot ∀φj included in any urban characteristic scenic spot subset ∀Φi within the urban area of a tourist city is defined as a city characteristic scenic spot element, denoted by φj. To distinguish between different city characteristic scenic spot subsets Φi, the scenic spot element is expressed as
φj = Φi[φj]   (1)
Definition 4: Scenic spot classification criteria. The specific principle for dividing scenic spots, defined based on the age structure of tourists, psychological demands, route arrangements, scenic spot characteristics and attributes, and other factors, is the scenic spot classification criterion. The larger the value of p for the urban characteristic scenic spot set Φ, the more finely the scenic spots are classified. For a city characteristic scenic spot subset Φi, the number of characteristic scenic spot elements φj is determined by the number of scenic spots the city accommodates, that is, for each subset
Φi = {φj | φj ∈ Φi, 0 < j ≤ maxj, j ∈ Z+}   (2)
A base matrix is established for the extraction of scenic spots with the set of city characteristic scenic spots and its subsets as the data source. The base matrix for scenic spot extraction is a data matrix from which the intelligent machine selects scenic spots randomly. When tourists have no knowledge of the city's scenic spots, they provide the intelligent machine with the type and number of attractions of interest they hope to visit; the intelligent machine randomly selects the characteristic attractions based on the age, interests, route, etc. of the tourists, and then calls the algorithm to plan the route. The scenic spot extraction base vector Ai is established from the subset of characteristic attractions as
Ai = (Φi[φ1], Φi[φ2], ..., Φi[φmaxj])   (3)
A maxp × maxj dimensional matrix is established with p and the scenic spot extraction base vectors; the i-th row of the matrix consists of the elements Φi[φj] of the subset Φi, with the remaining maxj − j positions set to 0. Hence, the base matrix A for extracting scenic spots is
A = [ Φ1[φ1]      Φ1[φ2]      ...  0
      ...         ...         ...  ...
      Φi[φ1]      Φi[φ2]      ...  Φi[φmaxj]
      ...         ...         ...  ...
      Φmaxi[φ1]   Φmaxi[φ2]   ...  0 ]   (4)
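The construction of the base matrix A in Eq. (4), together with a random selection of m attractions by the intelligent machine, can be sketched as follows; the category names, the padding with 0 and the uniform random draw are illustrative assumptions.

```python
import random

def build_base_matrix(subsets, max_j):
    """Base matrix A (Eq. (4)): row i holds the elements of subset Phi_i,
    padded with 0 up to max_j columns."""
    return [row + [0] * (max_j - len(row)) for row in subsets]

def random_selection(base_matrix, wanted_rows, m):
    """Randomly pick m attractions from the requested attraction categories."""
    pool = [spot for i in wanted_rows for spot in base_matrix[i] if spot != 0]
    return random.sample(pool, min(m, len(pool)))

# Hypothetical characteristic-attraction subsets (one row per category)
subsets = [
    ["museum_1", "museum_2"],            # Phi_1
    ["park_1", "park_2", "park_3"],      # Phi_2
    ["temple_1"],                        # Phi_3
]
A = build_base_matrix(subsets, max_j=3)
print(A)
print(random_selection(A, wanted_rows=[0, 1], m=3))
```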
Tourists have to pass through blocks and intersections in the process of ferrying from one attraction to the next. The intelligent machine selects the optimal blocks and intersections, which provides tourists with the optimal transportation and ferry experience and motive interest satisfaction. The characteristic index and the characteristic inflection point set are defined here.
Fig. 1. Scenic spots and their characteristic inflection points
Figure 1 shows that tourists need to pass through several characteristic inflection points on the way from scenic spot φj to scenic spot φj+1, and each characteristic inflection point has a characteristic index factor. The iterative value for the optimal travel route is obtained from the spatial distance dis(x) between the two attractions iterated with the characteristic index factor, which is the basis of the algorithm model. Let dis_{w,v,k} be the shortest path length for ferrying from point w to point v using the nodes in the set K = {k | 0 < k ≤ maxk, k ∈ Z+} as intermediate points. Based on the smart planning, the following can be obtained: 1) the shortest path passes through a point k in the set K: dis_{w,v,k} = dis_{w,k,k−1} + dis_{k,v,k−1}; 2) the shortest path does not pass through a point k in the set K: dis_{w,v,k} = dis_{w,v,k−1}. Thus, dis_{w,v,k} = min(dis_{w,v,k−1}, dis_{w,k,k−1} + dis_{k,v,k−1}). If only the length of the path is considered, the motivational interests of tourists may not be met; the actual geographic information service demands of the ferry process should also be considered. The motivation iteration index is therefore defined.
Definition 5: Motivational iteration index for the sub-interval of scenic spots. The iterative output of the characteristic inflection point ferry distance dis_{w,v} (km) and the characteristic index factor γ between any scenic spot ∀φj and the next scenic spot φj+1 is the index value that affects the motive interests of tourists.
Definition 6: Motivational iteration index for scenic spots. The motivational iteration indicator over multiple scenic spots between the starting and ending scenic spots. Based on the node ferry distance dis_{w,v} and the characteristic index factor γ, the scenic spot sub-interval motivational iteration index W_{φj,φj+1} and the scenic spot interval motivational iteration index W are established, as shown in Eq. (5), where W_{φj} is the initial value of the motivation iteration at attraction φj:

W_{φj·φj+1} = Σ_{v=2}^{maxv} Σ_{u=1}^{maxu} W_{φj} · dis_{v−1,v} · γ_u    (5)
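The recursion above is the classic all-pairs shortest-path update. A minimal sketch, assuming a simple distance matrix with ∞ for inflection points that are not directly connected, is:

```python
# Sketch of the dis_{w,v,k} recursion: allowing points 1..k as intermediates,
# the shortest ferry length is the better of "not using k" and "going through
# k" (a Floyd-Warshall style update). INF marks unconnected pairs.
INF = float("inf")

def shortest_ferry(dis):
    """dis[w][v]: direct ferry distance between inflection points w and v."""
    n = len(dis)
    d = [row[:] for row in dis]
    for k in range(n):                      # intermediate point k
        for w in range(n):
            for v in range(n):
                d[w][v] = min(d[w][v], d[w][k] + d[k][v])
    return d

demo = [[0, 2, INF], [2, 0, 3], [INF, 3, 0]]
print(shortest_ferry(demo))
```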
Let there be k characteristic inflection points between scenic spot φj and scenic spot φj+1, and establish an Open table and a Closed table. The Open table stores the unexpanded inflection points, and the Closed table stores the expanded nodes. Tourists extract, from the rows of the base matrix A corresponding to their own interests, the m characteristic attractions to be visited. When tourists are not familiar with the city’s attractions, the number
of tourist attractions to visit can also be provided together with the type of characteristic attractions, and m characteristic attractions are then randomly selected by the intelligent machine. Starting from the first attraction, the search algorithm for the optimal route between adjacent scenic spots is established as follows: 1) Put the t characteristic inflection points R1, R2, …, Rt into the Open table; 2) Select the characteristic inflection point R1 closest to scenic spot φj, move it into the Closed table, and delete it from the Open table. Search for the distance dis_{1,a1} between R1 and its subsequent neighboring node R_{a1}, and use R1, R_{a1} and the characteristic index factor γ of the path between them to iteratively calculate W_{1,a1}; 3) Return to step 2), search another adjacent node R_{a2} of R1, and iteratively calculate W_{1,a2}. If W_{1,a2} < W_{1,a1}, save R_{a2} in the Closed table and delete it from the Open table; if W_{1,a2} > W_{1,a1}, save R_{a1} in the Closed table and delete it from the Open table. Go to step 4); 4) Search R_{a1}'s successor neighboring node R_{a3} and R_{a2}'s successor neighboring node R_{a4}. The successor node is determined by the urban road distribution and the buffer radius r(φj, Rt). Iteratively calculate W_{a1,a3} and W_{a2,a4}; 5) From step 4), iteratively calculate W_{1,a3} and W_{1,a4}. If W_{1,a3} < W_{1,a4}, store R_{a1}'s successor neighboring node R_{a3} in the Closed table and delete it from the Open table; if W_{1,a3} > W_{1,a4}, store R_{a2}'s successor neighboring node R_{a4} in the Closed table and delete it from the Open table; 6) Return to step 2) and continue searching until the characteristic inflection point Rt adjacent to the target scenic spot φj+1 is found, and output the minimum motivation iteration value W_{1,t} among the candidate routes. Each step of the algorithm must satisfy a_o ∈ (0, maxt) ∈ Z+, where a_o is the sequence subscript of R.
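A hedged sketch of steps 1)-6) is given below as a best-first expansion with Open and Closed tables, where each edge cost is the ferry distance weighted by the characteristic index factor γ; the node names, distances and factors are invented for illustration.

```python
import heapq

# Best-first expansion over characteristic inflection points: the Open table
# holds unexpanded points, the Closed table the expanded ones, and the
# accumulated cost plays the role of the motivation iteration value W.
def search(start, goal, neighbours, dis, gamma):
    open_heap = [(0.0, start)]          # Open table: (W so far, point)
    closed = {}                         # Closed table: point -> best W
    while open_heap:
        w, node = heapq.heappop(open_heap)
        if node in closed:
            continue
        closed[node] = w
        if node == goal:
            return w, closed
        for nxt in neighbours[node]:
            if nxt not in closed:
                heapq.heappush(open_heap, (w + dis[(node, nxt)] * gamma[nxt], nxt))
    return None, closed

neighbours = {"S": ["R1", "R2"], "R1": ["T"], "R2": ["T"], "T": []}
dis = {("S", "R1"): 1.0, ("S", "R2"): 1.5, ("R1", "T"): 0.5, ("R2", "T"): 0.3}
gamma = {"R1": 0.56, "R2": 0.70, "T": 1.0}
print(search("S", "T", neighbours, dis, gamma))
```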
3 Case Analysis

Given the age composition of tourists and the characteristics of scenic spots, take maxp = 3, so maxi = 3, and the set of urban characteristic scenic spots satisfies Φ = {Φi | 0 < i ≤ 3, i ∈ Z+}. The urban scenic spots are divided into a leisure scenery set, a venues and playground set, and a consumer shopping set. A city is taken as an example to establish the sets of attractions of interest according to these functional attributes, as shown in Table 1 below. Based on Table 1, the three subsets contain 7, 6 and 7 elements respectively, so maxj = 7. The base vectors A1 = [Φ1[φj1]], A2 = [Φ2[φj2]], and A3 = [Φ3[φj3]] are established, where j1 ∈ (0, 7) ∈ Z+, j2 ∈ (0, 6) ∈ Z+, j3 ∈ (0, 7) ∈ Z+. After zero padding, the scenic spot extraction base matrix A_{3×7} is obtained. Tourists select the attractions to be visited from the database corresponding to the base matrix according to their interests. When tourists are not familiar with the attractions, they provide the intelligent machine with the type and quantity of attractions to be visited, and the intelligent machine randomly selects characteristic attractions based on the tourists' demands and the route.
Table 1. Scenic spot codes

Characteristic attractions elements | Subset Φ1 | Subset Φ2 | Subset Φ3
φ1 | People's Park | Henan Museum | Wangfujing Department Store
φ2 | Bishagang Park | Century Amusement Park | Zhengzhou Department Store
φ3 | Bauhinia Mountain Park | Erqi Memorial Hall | Erqi Wanda
φ4 | Forest Park | Zhengzhou Museum | Central Plains Wanda
φ5 | Greentown Plaza | Zhengzhou Science and Technology Museum | Dehua Pedestrian Street
φ6 | Zhengzhou Botanical Garden | Zhengzhou Aquarium | Xiyuan Square
φ7 | Zhengzhou Zoo | - | Dashang Department Store
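The random extraction described above can be illustrated with the following sketch, which samples the requested number of attractions per type from the Table 1 subsets; the request format is an assumption.

```python
import random

# When a tourist only states the type and number of attractions to visit,
# the "intelligent machine" samples that many elements from the corresponding
# rows of the base matrix (names taken from Table 1).
base = {
    "leisure": ["People's Park", "Bishagang Park", "Bauhinia Mountain Park",
                "Forest Park", "Greentown Plaza", "Zhengzhou Botanical Garden",
                "Zhengzhou Zoo"],
    "venues": ["Henan Museum", "Century Amusement Park", "Erqi Memorial Hall",
               "Zhengzhou Museum", "Zhengzhou Science and Technology Museum",
               "Zhengzhou Aquarium"],
    "shopping": ["Wangfujing Department Store", "Zhengzhou Department Store",
                 "Erqi Wanda", "Central Plains Wanda", "Dehua Pedestrian Street",
                 "Xiyuan Square", "Dashang Department Store"],
}

def pick(requests):
    """requests: mapping of attraction type -> how many spots to visit."""
    return {t: random.sample(base[t], k) for t, k in requests.items()}

print(pick({"leisure": 1, "venues": 1, "shopping": 1}))
```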
Take the selection of scenic spots Φ1[φ2], Φ2[φ3] and Φ3[φ3] as an example: the tourists select Bishagang Park, Erqi Memorial Hall and Erqi Wanda as the scenic spots to be visited. When the tourists specify a fixed order of visit, only one motivation iteration value is output. When the tourists do not fix the tour order, the intelligent machine needs to calculate the motivation iteration value of every tour order and output the optimal one. If the tourists select the order Φ2[φ3] → Φ1[φ2] → Φ3[φ3], the paths between the scenic spots are shown in Fig. 2 and Fig. 3. The order of inflection points is arranged based on the buffer radius r(φj, Rt). The characteristic inflection point set is constructed according to the tour order, and the ferry distances dis_{w,v} and characteristic index factors of the inflection point set are shown in Table 2. The ferry distance between inflection points that are not directly connected is ∞, and the ferry distance from an inflection point to itself is 0. Applying algorithm steps 1)-6), the iterative calculation of the Φ2[φ3] to Φ1[φ2] sub-interval motivation iteration index shows that the minimum output value is W_{1,8}, that is, the route passes through the inflection points R1, R3, R4 and R8. Similarly, between Φ1[φ2] and Φ3[φ3] the minimum motivation iteration value W_{3,9} is obtained via the inflection points R3, R4, R8 and R9. From scenic spot Φ2[φ3] to Φ1[φ2] and finally to Φ3[φ3], via R1, R3, R4, R8 in interval 1 and R3, R4, R8, R9 in interval 2, the minimum motivation iteration value is output, and the route composed of these scenic spots and interval inflection points is the optimal route.
Table 2. Characteristic inflection point characteristic index factors

Interval Φ2[φ3] → Φ1[φ2]
Point | γ1 | γ2 | γ3 | γ4
R1 | 0.40 | 0.25 | 1.00 | 0.83
R2 | 0.20 | 1.00 | 1.00 | 0.67
R3 | 0.40 | 1.00 | 1.50 | 0.56
R4 | 0.20 | 0.25 | 1.50 | 0.52
R5 | 0.40 | 0.50 | 1.00 | 0.64
R6 | 0.60 | 0.50 | 1.50 | 0.71
R7 | 0.40 | 0.25 | 1.00 | 0.77
R8 | 0.40 | 0.25 | 1.00 | 0.72

Interval Φ1[φ2] → Φ3[φ3]
Point | γ1 | γ2 | γ3 | γ4
R1 | 0.40 | 0.25 | 1.00 | 0.70
R2 | 0.60 | 0.50 | 1.50 | 0.70
R3 | 0.40 | 0.25 | 1.00 | 0.78
R4 | 0.40 | 0.50 | 1.50 | 0.63
R5 | 0.20 | 0.25 | 1.50 | 0.54
R6 | 0.40 | 0.50 | 1.00 | 0.62
R7 | 0.40 | 0.50 | 1.50 | 0.89
R8 | 0.40 | 0.50 | 1.00 | 0.80
R9 | 0.40 | 0.50 | 1.00 | 0.79
Fig. 2. Path and characteristic inflection point set between scenic spots Φ2[φ3] and Φ1[φ2]
Fig. 3. Path and characteristic inflection point set between scenic spots Φ1[φ2] and Φ3[φ3]
4 Conclusions

Based on optimal route planning, a smart travel route planning algorithm based on the variable neighborhood ant colony algorithm is proposed in this paper. The algorithm comprehensively considers indexes and constraints such as the shortest interval distance, road junction index, bus station transfer coefficient, subway station transfer coefficient, and road congestion, and takes the path with the minimum planning motivation iteration value as the optimal travel route. The numerical example demonstrates that the proposed algorithm outputs the route with the shortest distance while complying with the general rules of the travel ferry process, meeting the individual demands of tourists and maximizing their motive interest.
Intelligent Translation Strategy Based on Human Machine Coupling

Xiaohua Guo(B)

Ganzhou Teachers College, Ganzhou 341000, Jiangxi, China
Abstract. Whatever approach machine translation is based on, translation quality has always been a concern, and the computer faces difficulties in natural language processing during translation. To address these problems, this paper proposes an intelligent translation model based on human-computer coupling (HMCIT). The core strategy of HMCIT is to use natural language understanding to analyze the grammar and semantics of the text, and then to use human-computer coupling intelligent translation technology to generate a text summary after information fusion. Firstly, a Markov random field is used to map the input to triples. Then, the semantic distance between the image and each sentence is calculated by similarity. Finally, the sentence with the closest semantics is selected from the sentence pool to generate the image description. In this paper, HMCIT is compared with the classical alignment model in terms of perplexity. The results show that the perplexity of the translation results of this model is 43% lower than that of the alignment-based model, which indicates that HMCIT performs better. HMCIT extracts one or several sentences from a document or document set to form a summary. Its advantages are that it is simple and practical and does not easily deviate from the main idea of the article. However, it may have disadvantages such as an incoherent summary, poor word control, and unclear target sentences, and the quality of the summary depends on the original text.
Keywords: Human machine coupling · Intelligent translation · Language conversion · Grammar rules
1 Introduction

Different from the traditional concept of coupling, the modern understanding of the translation coupling criterion means that the translation results should be coupled with the grammatical rules, linguistic meaning and pragmatic functions of the original text as well as those of the target text. That is, the linguistic elements of the original, such as words and phrases, should have their corresponding content transformed in the translation, for example in the choice of vocabulary and the arrangement of sentences, in accordance with the grammatical rules of the target language; the various relational meanings embodied in
the constituent elements of the original should be embodied in the translation without reservation. In the field of intelligent translation, Arno published a review paper in 2014 in which associative lexical cohesion is applied to analyze the complexity of text [1]. In the study of sentence simplification proposed by Baker M.A., English Wikipedia and Simple English Wikipedia are used to generate a parallel simplified corpus, and Moses is used to provide preliminary intelligent translation results; a 0.005 BLEU (Bilingual Evaluation Understudy) improvement over the non-simplified baseline is found [2]. In 2017, Gou improved the automatic generation system based on template technology and proposed an automatic template generation method based on knowledge rules, which dynamically selects templates from the template set so as to generate large volumes of sports news quickly and effectively; the text generated by the template-set-based system is more flexible and richer in content [3]. In 2018, in research on the TEG (topic-to-essay generation) task by Ito T. and others, a knowledge graph was embedded as external knowledge to assist human-computer coupling intelligent translation. Previously, TEG only performed text generation based on a given topic, ignoring the background knowledge provided by common sense, which can effectively improve the novelty and diversity of generated articles [4]. Compared with the best baseline BLEU score, Kadmiri's experimental results achieved a relative improvement of 11.85%; therefore, when human-computer coupling intelligent translation is assisted by knowledge graph embedding, the generated articles are novel and diverse and their themes are consistent [5]. In the early stage, a pipeline mode was used to realize intelligent translation. In Wang T.'s research, sentence templates were used to generate image descriptions, and the template took the form of a four-tuple [6]. However, due to the limitations of the corpus and the model, users cannot directly control content generation, and it is difficult to ensure that the output text is consistent with the information in the input data. Based on the understanding, sorting and analysis of language problems in the process of machine translation, this paper attempts to propose an intelligent translation strategy based on human-computer coupling. With the help of statistical methods, cosine similarity and text sentiment analysis, this paper discusses the coupling degree and accuracy of machine translation at the macro level of vocabulary, sentence, semantics and pragmatics, and examines the actual translation at the micro level of word frequency statistics, word collocation, average sentence length, syntactic analysis, text similarity, subject words, text sentiment state and sentiment tendency. It is hoped that this method can provide a reference standard for people who need machine translation to evaluate its quality and avoid choosing inappropriate machine translation methods.
2 Human Machine Coupling and Intelligent Translation

2.1 Intelligent Translation

Translation is a kind of language conversion. At present, this kind of behavior faces many problems, such as lexical ambiguity, syntactic structure, semantic understanding, and information extraction; addressing these problems has improved the quality of machine-translated text to
a certain extent, and the method of evaluating the quality of machine translation also shows a trend from manual evaluation to automatic evaluation [7, 8]. Different from previous methods of machine translation quality evaluation, the most important feature of neural network machine translation in the 21st century, characterized by deep learning models, is that large-scale real text and language rules can be used for machine learning in a formal way. Through training on massive data, the machine can automatically acquire useful features and knowledge, and the output of machine translation changes as the formulation changes [9, 10]. It can be said that, to a certain extent, the way machine translation is performed directly or indirectly affects the translation effect of the machine translation system. Since translation quality is affected by the way of translation, the questions arise: which aspects of translation quality should be evaluated for machine translation systems based on different approaches, what the standards for evaluating machine translation quality are, and how to effectively evaluate its advantages and disadvantages [11, 12]. The premise of answering these questions is to clearly realize that the basic problem of machine translation is language, and the transformation of machine translation modes all revolves around language, so the first task is to deeply understand and master the language problems encountered in the process of machine translation [13]. Each machine translation approach has its own characteristics. Rule-based machine translation establishes a knowledge system in accordance with the target language grammar according to preset language rules and non-language rules. Corpus-based machine translation combines statistical methods and, taking examples as reference, selects the target-language knowledge structure with the greatest similarity to the standard translation [14]. Neural-network-based machine translation uses an encoder and a decoder to extract the context information of the source language and complete the transformation into the target language. Machine translation has thus moved from rule-based and corpus-based approaches to neural-network-based ones; this transformation is intended to further solve the natural language processing problems in the translation process and improve the efficiency of machine translation. Natural language processing for machine translation is a very complex process that involves many problems such as character recognition and word sense disambiguation. The natural language processing of machine translation discussed in this paper focuses only on written text, and discusses the specific difficulties and causes of polysemy, syntactic structure, logical semantics and context application in the conversion process. Vocabulary is the basic unit of natural language processing. Compared with human beings, the computer does not have the cognitive ability of judgment and reasoning; in the case of polysemy, it is unable to determine the meaning of a word in phrases and sentences from its surface form. The singular and plural forms of English words, concurrent parts of speech and tense changes make it very difficult for the computer to deal with them, and among polysemous words, those with concurrent parts of speech account for the largest proportion.
Noun, verb, adjective and adverb are the most common part-of-speech types involved in English polysemy; a word with concurrent parts of speech is one that belongs to two or more parts of speech. Reinforcement learning obtains rewards through the interaction between the agent and the environment, so as to guide the agent's next behavior. The architecture of text generation based on reinforcement learning is composed of a "strategy network"
and “value network”. In each time step, the two networks work together to calculate the next best generated word. This method does not directly estimate the reward, but uses the output of the test phase to normalize the reward instead of evaluating a baseline normalized reward. The image and text generation method based on reinforcement learning can optimize the exposure deviation problem in sequence learning, but it may also have the problem of high variance. The advantage of template based image description method is that it can effectively ensure the correctness of the generated text syntax and the relevance of the content. Due to the small number of visual models, the novelty and complexity of the generated sentences are not high. It is worth noting that rouge is based on word correspondence rather than semantic correspondence, but this problem can be alleviated by increasing the number of reference abstracts. The process of upgrading machine translation system is actually the process of improving the quality of machine translation. In recent years, neural network machine translation, which has been widely popular and developed, has significantly improved the quality of translation compared with rule-based and corpus based machine translation. However, this is not to say that neural network machine translation can translate independently from auxiliary machines such as rules, dictionaries and case bases on the contrary, it absorbs the experience and methods of the first two machine translation methods. 2.2 HMCIT Core Algorithm HMCIT model is based on RNN (recurrent neural network), which can capture the sequence features of input data. But RNN has two disadvantages. First, RNN short-term memory cannot produce coherent long sentences. Second, because RNN can not be parallel computing, it can not adapt to the mainstream trend. Therefore, in the field of HMCIT, coupling interaction is introduced x
σi = (H (fi ) · ufi ) | σi ∈ G, i = 1, 2, · · · , n N=
k 1 + ea−rt
(1) (2)
First, learn a language model P according to the reference sentence; then calculate the score of the candidate sentence according to the language model p; finally, standardize the score according to the sentence length. The calculation formula is as follows: P = ln(
k −n ) = a − rt N
(3)
The model is trained on Chinese Wikipedia corpus and general data. At the same time, in the process of pre training, the whole word masking technique is used sl e(σ, g)? = e( H (νi)vi · uμ , v) (4) i=s1
First, some variables and conditions are described. N is the lifetime utility of subject K at time t. The budget constraint of entity K is the same as Eq. (4) N =k+
N0 − k 1 + ( Tx )p
(5)
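As an illustration of the length-normalized scoring idea around Eq. (3), the sketch below scores a candidate sentence with a toy unigram log-probability table and reports the corresponding perplexity; the probabilities are made up and do not come from the trained model.

```python
import math

# Score a candidate sentence with a language model and normalise by length;
# perplexity is the exponential of the average negative log-probability.
def sentence_score(tokens, logprob):
    total = sum(logprob(t) for t in tokens)
    avg = total / max(len(tokens), 1)          # length-normalised score
    perplexity = math.exp(-avg)
    return avg, perplexity

unigram = {"the": -1.2, "machine": -3.0, "translates": -4.1, "text": -3.5}
lp = lambda t: unigram.get(t, -8.0)            # unseen words get a floor value
print(sentence_score(["the", "machine", "translates", "text"], lp))
```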
After coding the source language with BERT, there are different language information representations at the low, middle and high layers. This paper explores the influence of using different levels of BERT features on the translation effect of the model. Because the neural machine translation model uses the encoder to encode the semantic features of the source language and then sends them to the decoder for decoding, it is speculated that introducing the high-level semantic features of BERT into the model should achieve better results.
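A minimal sketch of extracting low-, middle- and high-level BERT representations, assuming the HuggingFace transformers library and the bert-base-chinese checkpoint, is shown below; the layer indices chosen are illustrative.

```python
import torch
from transformers import AutoTokenizer, AutoModel

# With output_hidden_states=True the model returns every layer, so low-,
# middle- and high-level features can be compared as encoder inputs.
tokenizer = AutoTokenizer.from_pretrained("bert-base-chinese")
model = AutoModel.from_pretrained("bert-base-chinese", output_hidden_states=True)

inputs = tokenizer("机器翻译质量评估", return_tensors="pt")
with torch.no_grad():
    outputs = model(**inputs)

hidden = outputs.hidden_states          # tuple: embeddings + one tensor per layer
low, mid, high = hidden[1], hidden[6], hidden[-1]
print(len(hidden), low.shape, high.shape)
```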
3 Experimental Design

3.1 Model

This paper proposes an intelligent translation model based on human-computer coupling (HMCIT). The core strategy of HMCIT is to use natural language understanding to analyze the grammar and semantics of the text first, and then to use human-computer coupling intelligent translation technology to generate a text abstract after information fusion. Finally, the perplexity is compared with that of the classical alignment model.

3.2 Model Content

Firstly, a Markov random field is used to map the input to triples. Then, the semantic distance between the image and each sentence is calculated by similarity. Finally, the sentence with the closest semantics is selected from the sentence pool to generate the image description. The retrieval-based image-to-text generation method can make the generated text grammatically correct and fluent. However, because existing sentences in the sentence pool are used for image description, the generated results lack novelty, and there are limitations in describing complex scenes or images containing novel things. Then there is intelligent translation. Intelligent translation refers to the generation of natural language text describing images according to the input image information, which is often used to generate headlines for news pictures, to read pictures and tell stories in children's education, to produce medical image reports, and so on. This technology can provide convenience for people who lack relevant knowledge or have dyslexia. According to the length and detail of the generated text, intelligent translation can be divided into automatic generation of image titles and automatic generation of image descriptions.
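The retrieval step can be sketched as follows with a simple bag-of-words cosine similarity; the sentence pool and the textual stand-in for the mapped triple are invented for the example, and a real system would use learned embeddings.

```python
import math
from collections import Counter

# Represent the triple extracted from the image and every sentence in the
# pool as bag-of-words vectors and return the closest pool sentence.
def embed(text):
    return Counter(text.lower().split())

def cosine(a, b):
    num = sum(a[w] * b[w] for w in set(a) & set(b))
    den = math.sqrt(sum(v * v for v in a.values())) * math.sqrt(sum(v * v for v in b.values()))
    return num / den if den else 0.0

pool = ["a dog plays in the park", "a crowded shopping street at night", "a museum hall with visitors"]
query = embed("dog park grass")                 # stand-in for the mapped triple
best = max(pool, key=lambda s: cosine(query, embed(s)))
print(best)
```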
4 Intelligent Translation Strategy Based on Human-Computer Coupling

As shown in Fig. 1, the rule-based and template-based method is interpretable and controllable, which makes it easier to control and ensure the correctness of the output text. However, its disadvantages are obvious: it is difficult to achieve end-to-end optimization, its performance ceiling is not high because of information loss, and it relies on human intervention to extract high-quality templates. The variety, fluency and coherence of the generated content are often unsatisfactory. Its advantage, however, is that it breaks through the fixed-size input problem of the traditional model and can grasp the
key points from the middle of the sequence without losing important information, so as to solve the problem that long-distance information will be weakened. HMCIT extracts one or several sentences from a document or document set to form a summary. Its advantages are that it is simple and practical and does not easily deviate from the main idea of the article. However, it may have disadvantages such as an incoherent summary, poor word control, and unclear target sentences, and the quality of the summary depends on the original text.
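A minimal sketch of the extractive idea described above, scoring sentences by word frequency and keeping the top ones, is shown below; it illustrates the general extract-then-rank pattern rather than the HMCIT model itself.

```python
import re
from collections import Counter

# Score each sentence by the frequency of its words in the whole document
# and keep the n highest-scoring sentences as the summary.
def extract_summary(document, n_sentences=1):
    sentences = [s.strip() for s in document.split(".") if s.strip()]
    freq = Counter(re.findall(r"[a-z]+", document.lower()))
    def score(sent):
        words = re.findall(r"[a-z]+", sent.lower())
        return sum(freq[w] for w in words) / max(len(words), 1)
    return sorted(sentences, key=score, reverse=True)[:n_sentences]

doc = ("Machine translation quality depends on the language model. "
       "The language model is trained on a large corpus. "
       "Tourists enjoy the weather.")
print(extract_summary(doc, 1))
```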
Fig. 1. The rule-based and template-based method
The results of the syntactic man-machine coupling intelligent translation analysis are shown in Table 1. Syntactic analysis is an important prerequisite for semantic analysis. Syntactic structure is taken as the evaluation parameter for how well the computer program understands and processes the grammatical rules of the target language; its significance is that the syntactic analysis results of the translated text allow the relationships between the components of the target language sentences to be clearly recognized and understood, so that a judgment can be made about those relationships. As a traditional natural language processing task, the core problem of text summarization is how to determine the key information. Researchers have found that external knowledge, keyword information and other signals can better assist summary generation, while helping to avoid problems such as duplication and poor readability.
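As an illustration of the syntactic analysis step, the sketch below parses a sentence with spaCy and prints each token's part of speech, dependency relation and head, assuming the en_core_web_sm model is installed; this is not the paper's own parser.

```python
import spacy

# Dependency parse of a (machine-)translated sentence: the relations between
# tokens and their heads expose the syntactic structure discussed above.
nlp = spacy.load("en_core_web_sm")
doc = nlp("The intelligent system generates a fluent summary of the article.")
for token in doc:
    print(f"{token.text:12} {token.pos_:6} {token.dep_:10} head={token.head.text}")
```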
Table 1. Syntactic man-machine coupling intelligent translation analysis results

Item | Summary | Word count | Sentence subject | HMCIT | Template | Documentation
Deviation | 1.93 | 0.11 | 1.26 | 1.62 | 1.82 | 1
Intervention | 3.76 | 1.38 | 1.62 | 2.02 | 1.72 | 2.5
Unclear | 4.58 | 2.54 | 2.95 | 4.14 | 5.8 | 5.48
Incoherent | 1.36 | 5.78 | 5.09 | 2.05 | 1.58 | 4.39
Not easy to control | 3.6 | 1.97 | 2.89 | 2.09 | 2.33 | 1.26
Table 2. The main performance of assessment in the sense of language

Item | Rule | Documentation | Documentation set | Summary | Word count | Sentence subject
Deviation | 1.51 | 0.04 | 0.85 | 1.14 | 0.06 | 1.01
Intervention | 2.12 | 3.64 | 2.58 | 2.05 | 1.58 | 2.03
Unclear | 2.54 | 2.5 | 4.75 | 2.26 | 5.88 | 3.66
Incoherent | 1.23 | 5.8 | 3.99 | 5.47 | 4.22 | 2.15
Not easy to control | 4.41 | 4.44 | 2.41 | 2.45 | 4.99 | 3.26
Template | 2.75 | 3.94 | 4.11 | 4.65 | 5.55 | 3.78
As shown in Table 2, the evaluation of language meaning is mainly manifested in the calculation of text similarity and subject words. In the evaluation of semantics, by calculating the similarity of language meaning and the matching of language content between the human translation text and the machine translation text, the degree to which the machine translation system couples with the meaning of the language is assessed on the basis of the relevance, logic and consistency of the language content. The model divides the task into two stages. First, content selection and planning operate on the input records of the database, and a content plan is generated to specify which records will be described in the document and in what order. Then, text generation produces the output text, taking the content plan as input. At the same time, a copy mechanism is added to improve the effect of the decoder. The experimental results show that the number of relevant facts in the output text and the ordering of these facts are improved, and the generation quality is improved. As shown in Fig. 2, the perplexity of the translation results of this model is 43% lower than that of the alignment-based model. Nowadays, the main problems in the application of machine translation are the scarcity of language data resources and the lack of parallel data; the core work in the future is to build a high-quality parallel database to make the translation results more flexible. The error is back-propagated to each layer
of the translation model through the recurrent neural network, and each layer adjusts the representation according to the error until the model achieves the best expected effect.
Fig. 2. Perplexity of translation results and alignment-based models
5 Conclusions

HMCIT mainly analyzes the use and conversion of language by comparing the emotional tendency and degree of language fragments in the machine translation and the standard translation, and calculates the emotional polarity of the translated text at the word, sentence and text levels. The emotional nature of a text can be negative or positive, and further judgments and calculations are made according to the type and proportion of polarity. According to the analysis results of this paper, the emotional polarity of the machine translation and the standard translation is the same, and both are positive text types. In terms of emotional tendency, also known as communication intention, the algorithm mainly calculates the emotional value of a sentence from the relevance of its words, and then judges the accuracy of the text according to the probability of
the expressed emotion. Generally speaking, verbs with a commendatory nature tend to collocate with nouns of a positive nature, while verbs with a derogatory nature tend to collocate with nouns of a negative nature. The results show that the emotional polarity of the HMCIT translation text is consistent with that of the human translation text, and the difference between their emotional values is smaller, which indicates that the language use environment of the HMCIT translation is closer to the pragmatic effect of the original text.
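The word-level polarity aggregation described above can be sketched with a toy sentiment lexicon as follows; the lexicon and sentences are illustrative only, and a real evaluation would use a trained sentiment model.

```python
# Average word polarities to get a sentence-level emotional value, then
# compare the sign (polarity class) of machine and human translations.
lexicon = {"good": 1.0, "excellent": 1.0, "convenient": 0.5,
           "bad": -1.0, "poor": -0.5, "wrong": -0.5}

def polarity(text):
    words = text.lower().split()
    scores = [lexicon.get(w, 0.0) for w in words]
    return sum(scores) / max(len(words), 1)

machine = "the translation is good and convenient"
human = "the translation is excellent"
print(polarity(machine), polarity(human))   # same sign -> same polarity class
```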
References

1. Rook, A.M., et al.: Effects of human-machine interface design for intelligent speed adaptation on driving behavior and acceptance. Transp. Res. Rec. 1937(1), 79–86 (2018)
2. Gou, X., Zhang, W., Zhang, F., et al.: Research and analysis of step intelligent model predictive control of generator excitation system based on the field of building installation. IOP Conf. Ser.: Earth Environ. Sci. 632(4), 45–49 (2021)
3. Baker, M.A., Chia-Hua, T., Noonan, K.J.T.: Frontispiece: diversifying cross-coupling strategies, catalysts and monomers for the controlled synthesis of conjugated polymers. Chemistry 24(50), 992–996 (2018)
4. Ito, T., Takei, H., Kamata, M.: Reaction tendencies of elderly drivers to various target paths of proactive steering intervention system in human-machine shared framework. Int. J. Autom. Eng. 10(1), 6–13 (2018)
5. Kadmiri, R.E., Belaasilia, Y., Timesli, A., et al.: A coupled Meshless-FEM method based on strong form of Radial Point Interpolation Method (RPIM). J. Phys.: Conf. Ser. 1743(1), 9–10 (2021)
6. Wang, T., Liang, Y., Yang, Y., et al.: An intelligent edge-computing-based method to counter coupling problems in cyber-physical systems. IEEE Netw. 34(3), 16–22 (2020)
7. Shu, J.M., Fu, C., Liu, J., et al.: An adaptive intelligent collaborative optimization method based on inconsistent information. J. Phys.: Conf. Ser. 1684(1), 12–16 (2020)
8. Aboudi, J.: The behavior of cracked multiferroic composites: fully coupled thermo-electro-magneto-elastic analysis. J. Intell. Mater. Syst. Struct. 29(15), 3037–3054 (2018)
9. Cao, X.C., Chen, B.Q., Yao, B., et al.: Combining translation-invariant wavelet frames and convolutional neural network for intelligent tool wear state identification. Comput. Ind. 10(6), 71–84 (2019)
10. Mirzaei, H., Eshghi, H., Seyedi, S.M.: Finely dispersed palladium on silk-fibroin as an efficient and ligand-free catalyst for Heck cross-coupling reaction. Appl. Organomet. Chem. 33(8), 965–967 (2019)
11. Chen, L., Cheung, W.K., Jiming, L., et al.: Automatic extraction of behavioral patterns for elderly mobility and daily routine analysis. ACM Trans. Intell. Syst. Technol. 9(5), 1–26 (2018)
12. Wang, P., Cai, H.G., Wang, L.K.: Design of intelligent English translation algorithms based on a fuzzy semantic network. Intell. Autom. Soft Comput. 26(3), 519–529 (2020)
13. Aouiti, N., Jemni, M.: Translation system from Arabic text to Arabic sign language. J. Appl. Intell. Syst. 3(2), 57–70 (2018)
14. Bagherzadeh, M., Hosseini, H., Salami, R.: Polyoxometalate-supported Pd nanoparticles as efficient catalysts for the Mizoroki-Heck cross-coupling reactions in PEG medium. Appl. Organomet. Chem. 34(1), 846–850 (2019)
Innovation of Accounting Industry Based on Artificial Intelligence

Jinwei Zhang(B)

Department of Economics and Management, Lanzhou University of Technology, Lanzhou, Gansu, China
Abstract. In recent years, with the rapid development of artificial intelligence, various industries have gradually introduced artificial intelligence to replace part of the manpower. Since 2016, artificial intelligence technology has been introduced into the financial field, and the Big Four have successively launched financial robots. More and more basic accounting positions have been replaced by robots, the unemployment rate of uncertified accountants has risen sharply, and the accounting industry is facing new challenges. Traditional financial work can be done by artificial intelligence; compared with manpower, artificial intelligence costs less and is more efficient. More and more accountants need to realize the transition from financial accounting to management accounting based on their own advantages. With the rapid development of science and technology, the future has already arrived, and accountants need to improve their comprehensive capabilities to adapt to the new environment of the artificial intelligence era.
Keywords: Artificial intelligence · Accounting · Transformation
1 Introduction

The abbreviation for artificial intelligence is AI. Its research goal is to allow computers, by imitating and learning, to perform tasks that only humans could do in the past. The concept of artificial intelligence was proposed in 1956 [1]. More than 60 years later, the development of artificial intelligence has made great progress, and various industries have gradually introduced artificial intelligence technology. However, people had not paid enough attention to AI. In 2016, the artificial intelligence program AlphaGo defeated Lee Sedol, and a year later AlphaGo won the match against Ke Jie. Since then, people have truly felt the rapid development and swift momentum of artificial intelligence. Accounting has been advancing continuously with the development of society. Although the amount of data is increasing rapidly, the data processing method is becoming more and more convenient. From the initial abacus calculation to later computerization, the work still needed to be operated by humans; in essence, financial personnel were still constrained by these basic and complicated tasks. In March 2016, Deloitte announced that it would try to introduce artificial intelligence into the financial field. Two months
Fig. 1. The development trajectory of AI and accounting
later, the first financial robot was born. This move by the Big Four was undoubtedly a signal to the industry that the era of introducing artificial intelligence into the financial field had arrived. These financial robots can replace accountants in basic daily work, from bookkeeping to tabulation, and both their efficiency and accuracy have far surpassed manual labor [2]. As shown in Fig. 1, the rapid development of AI technology has stimulated and led the innovation of the accounting industry. Unlike humans, financial robots do not need to rest; they can work 24 hours a day without stopping, and their operating costs are extremely low. The status of financial personnel has been threatened. People panicked for a while, yet the future has come, and the current problems in China's accounting industry need to be resolved urgently.
2 The Status Quo of the Chinese Accounting Industry

2.1 Lack of High-End Talent
Fig. 2. Chinese accounting personnel qualification level distribution
As shown in Fig. 2, according to statistics, there were more than 24 million accountants in China at the end of 2020. Most of them have only basic accounting skills and a relatively low educational background. There are more than 5.8 million junior accountants,
while the total number of intermediate and advanced accountants is less than 2.6 million, and there are only about 0.24 million certified practising accountants. Therefore, the basic accounting workforce is saturated, while high-end accounting personnel are still in short supply. Although the accounting profession has become more and more popular in recent years and many colleges and universities have successively opened accounting courses, course design is unreasonable and the teaching method is biased towards the classroom. Theory is over-emphasized in class, and professors often ignore that accounting is a practical subject and do not teach students practical knowledge. This leads to graduates lacking practical experience. As a result, the Chinese accounting industry lacks elites [3].

2.2 Work Focus and Traditional Ideas Are Outdated

Traditional accounting work focuses on the control of asset scale. In the era of artificial intelligence, accounting should pay more attention to the control of asset quality [4]. At present, many companies do not have good operational capabilities but blindly expand the scale of assets, which often stores up trouble for the future development of the company. If accounting work is just making financial statements, then these tasks can be done by AI instead of people in the future. But in fact, accounting and tabulation are only after-the-fact records, and this kind of thinking makes it difficult for traditional accounting to create value for enterprises [5]. In the era of artificial intelligence, the traditional accounting concept of "valuing the past and despising the future" is outdated.

2.3 The Transition from Financial Accounting to Management Accounting Is Slow

Nowadays, the economic situation is becoming more and more complicated, and companies have higher requirements for accounting. Focusing only on analyzing the past and on external reports can no longer meet the needs of companies. Accounting work should combine the past, present and future, so as to summarize the past, control the present, and plan for the future. Accountants should provide managers with the information and materials needed for forecasting, decision-making, and assessment. The demand of enterprises for management accounting is increasing [6]. Some developed countries have already constructed and perfected an entire management accounting system, while China is still in its infancy. Until 2018, China still had not promoted a unified certified management accountant certification exam, and the CMA imported from the United States was the main one. Therefore, the supply of financial accounting in China has exceeded demand, while management accounting talent is scarce.
3 Countermeasures and Suggestions

3.1 Construct a System of Knowledge that Combines Computer and Accounting

It is very likely that the future accounting certificate examination will incorporate the application of artificial intelligence into the examination content. The traditional examination will keep pace with the times and change in the artificial intelligence era.
Only in this way can accounting talents who truly adapt to industry changes be selected [7]. The future is coming, and accountants need to take precautions and expand their knowledge reserves in time. The development and application of artificial intelligence is still in its infancy. Accountants need to master many aspects of knowledge, build a knowledge system that combines computer science and accounting, and achieve their own comprehensive development. While using AI, they must continue to improve it so that it fits the financial work of the enterprise, and at the same time improve their own abilities and accomplishments and adhere to "lifelong learning". Making a long-term career plan and setting ambitious goals from the grassroots to the top keeps people progressing. Only with a positive attitude can accountants avoid being eliminated by the times [8].

3.2 Shift the Focus of Work and Update Accounting Concepts

In the era of artificial intelligence, the focus of accounting is to help regulate the scale of corporate assets, optimize the capital structure, and improve the efficiency and quality of investment. What is more, accountants also need to attach importance to incremental value and marginal costs, and correctly identify and grasp the development status of enterprises. Accountants now ought to bring accounting thinking and concepts to the forefront, such as management accounting and budget accounting. Although these can be implemented in ERP systems, there are not many cases where they are actually applied and bring value creation to the enterprise, because strategic management thinking is still stuck in the past and the overall budget still stays at the system and paper level. Only by stepping out of the shackles of traditional concepts, bringing the functions of accounting forward, and planning and analysing in advance can accountants gain a firm foothold in the future.

3.3 Transition from Financial Accounting to Management Accounting

With a large number of jobs being "robbed" by AI, many grassroots accountants are about to face unemployment. In the face of this crisis, accountants need to make correct judgments about the future and realize the transformation from financial accounting to management accounting. Put another way, although it will bring certain unemployment problems, the emergence of AI frees accountants from complicated and repetitive work. Accounting work is not equal to filling in vouchers, keeping accounts, and making statements; these low-tech but tedious tasks should not occupy too much of accountants' working time [9]. Now that AI has solved this problem, accountants can return to real accounting work. Even before this, the finance department, as a supporting department, did not bring value to the company; only through transformation and participation in management and decision-making can the finance department create value for the company. In the future, with the widespread application of AI, the finance department may no longer be a separate department, and the personnel in each department will need to master certain financial knowledge. Finance will deepen from the department to the individual, and by then the overall ability required of individuals will be higher.
3.4 Pay Attention to the Cultivation of Accounting Thinking

At present, the development of artificial intelligence is not perfect, and what is replaced by AI is only basic accounting work. The analysis of data and reports and decision-making still need people to do them. The thinking and judgment of accounting personnel as humans are indispensable; the perceptual knowledge and rational thinking of people are the embodiment of the value of accountants. Professional knowledge and operating skills have never been the core competitiveness of accountants. Judging only from accounting work and computer operations, non-accountants can also be proficient in operating an ERP system to make reports [10]. From the perspective of the future development of the accounting function system, basic knowledge and operational procedures will inevitably be replaced by intelligent systems. But these are not competitions between humans and AI. Human ability lies in thinking: for example, the financial staff's understanding of the business, the analysis of the company's operations, and thinking based on financial compliance, tax law compliance and risk control. Because each company's business is different, there will be differences in the nature of the business actually encountered. How to find the tools and management methods suitable for the enterprise itself and bring value creation to the enterprise is a question that accountants really need to think about in the future. Compared with artificial intelligence, accountants can connect the essence and background of the business, identify the purpose of the company for a specific business, and think about how to manage it [11].
4 Conclusions

4.1 Artificial Intelligence Reduces Accounting Costs and Improves Efficiency and Accuracy

From the abacus to computerization, with the continuous improvement of tools, the work of accountants has become more and more convenient. Now, with the birth of financial robots, machines can replace humans in completing basic accounting work, and accounting costs have been reduced. Unlike humans, robots do not make mistakes, do not need to rest, and can work around the clock. The efficiency and accuracy of accounting have been improved as never before.

4.2 Accounting Work Needs to Be Combined with Corporate Culture

For now, the development of artificial intelligence is not perfect, and AI can only replace humans in completing simple tasks. Artificial intelligence will only act in accordance with established procedures and cannot flexibly adapt to the different systems of various companies. The cold machine has no thoughts, no soul, and no values. As shown in Fig. 3, an enterprise is, after all, composed of people. Only when the values of the employees and the values of the company are aligned can the employees have a strong sense of belonging, and only then can financial personnel match their accounting thinking with the actual situation of the company and make greater contributions to the enterprise.
Fig. 3. The combination of corporate culture and accountants’ own situation
4.3 Artificial Intelligence Is Still in Its Infancy

Accounting has two basic functions, namely accounting and supervision. Although financial robots are already capable of performing most of the accounting work, the supervision work should still be done by professional accounting personnel. In addition, accounting can predict the future, participate in decision-making, and evaluate performance, all of which need to be performed by humans. What artificial intelligence can currently do is still limited to simple and programmatic accounting. The advent of the AI era has brought huge challenges to the accounting industry, and these challenges will promote faster development, accompanied by more opportunities; the development of science and technology always brings growing pains to all walks of life. As the financial field faces innovation, the structure of the Chinese accounting industry will also change. But what AI can accomplish now is only basic work. Human thought cannot be simulated by computers, and real accounting work is led by humans. Intelligentization will enable accounting to play a better role. Accountants should face artificial intelligence with a positive attitude, adapt to the times as they arrive, and make good use of AI.
References 1. Idrisov, F.F.: Approximate algorithms for estimating trends in financial intelligence tasks. Part II. Instants of origination of elements of financial flow are unknown. J. Autom. Inf. Sci. 50(2), 28–39 (2018) 2. Zhang, X.P.S., Kedmey, D.: A budding romance: finance and AI. IEEE Multimed. 25(4), 79–83 (2018) 3. Scott, B., McGoldrick, M.: Financial intelligence and financial investigation: opportunities and challenges. J. Policing Intell. Counter Terror. 13(3), 301–315 (2018) 4. Han, Y.: International standards for financial intelligence units: perspectives on CAML MAC. Int. J. Intell. Inf. Manag. Sci. 7(3) (2018) 5. West, K.: The Financial Intelligence Centre Amendment Act – the Dawn of a New Era, vol. 19, no. 3. JetBlue Publishers (Pty) Ltd. (2019) 6. Wang, X., Zhou, Y.: The application of management accounting in practice in the era of “Big Data, AI, mobile internet, and cloud computing”. Sci. J. Econ. Manag. Res. 1(6) (2019)
7. Yao, L., Teresa, G., Isabel, L., Álvaro, R.: Financial accounting intelligence management of internet of things enterprises based on data mining algorithm. J. Intell. Fuzzy Syst. 37(5), 5915–5923 (2019) 8. Ping, M.: Discussion on the ways to improve the financial intelligence of small and micro enterprises. J. Phys.: Conf. Ser. 1682, 012074 (2020) 9. Simser, J.: Canada’s financial intelligence unit: FINTRAC. J. Money Launder. Control 23(2), 297–307 (2020) 10. Lagerwaard, P.: Flattening the international: producing financial intelligence through a platform. Crit. Stud. Secur. 8(2), 160–174 (2020) 11. Allan, D., Yoram, B., Gillian, H., Eric, H., Kate, L., Thore, G.: Cooperative AI: machines must learn to find common ground. Nature 593(7857), 33–36 (2021)
Risk Analysis of the Application of Artificial Intelligence in Public Management

Min Kuang(B)

Sichuan University, Chengdu, Sichuan, China
Abstract. From the rise of artificial intelligence technology in the 1950s to the present, it has developed tremendously in just over 60 years. The application of artificial intelligence (AAI) technology has widely penetrated into the daily life of human beings. Its development has brought us opportunities and challenges, as well as huge hidden dangers to social public safety. The purpose of this article is to study the risks of artificial intelligence in public management. This article first explains the AAI in public management, and conducts research and analysis from all aspects of life, including transportation, medical care, education, and security. Secondly, it discusses the risks of artificial intelligence in public management, and analyzes ethical risks, public safety risks, employment risks, and public environmental risks on the basis of the previous article. Finally, this paper further analyzes the risks of artificial intelligence in public management, and finally proposes preventive measures to better guide the development direction of artificial intelligence. Experimental research results show that the best practical application effect is in public education management, which is 89.03%, followed by public transportation and public security, with an average of about 83%, and finally in public medical management, the actual application effect is 79.27%. From the overall analysis, there is still a lot of room for artificial intelligence technology to be applied in public management, which needs to be explored. Keywords: Artificial intelligence · Public management · Application · Risk
1 Introduction

From the rise of artificial intelligence technology in the 1950s to the present, it has developed tremendously in just over 60 years [1, 2]. AAI technology has widely penetrated the daily life of human beings. Its development has brought us opportunities and challenges, as well as huge hidden dangers to social public safety [3, 4]. People only care about enjoying the sense of science and technology that it brings to life, and ignore the silent penetration of various problems, which leads to the gradual expansion of some risks [5, 6]. In research on the application risk of artificial intelligence in public management, many scholars at home and abroad have conducted studies and achieved good results [7, 8]. Huang and Mccomas combed the development process of artificial intelligence, proposed the difficulties currently faced by its development, and discussed
the possibility of future development [9, 10]. Wiener and Shepherd mentioned the issue of emotions between humans and machines. They put forward the question, "Emotion is a unique way of thinking for humans; if a machine has emotions, can it replace humans?" This question was demonstrated and debated vigorously, which helps explain how people should recognize artificial intelligence and machines [11, 12]. This article first explains the AAI in public management and conducts research and analysis on all aspects of life, including transportation, medical care, education, and security. Secondly, it discusses the risks of artificial intelligence in public management, and analyzes ethical risks, public safety risks, employment risks, and public environmental risks on the basis of the previous discussion. Finally, this paper further analyzes the risks of artificial intelligence in public management and proposes preventive measures to better guide the development direction of artificial intelligence.
2 Research on the Application Risk of Artificial Intelligence in Public Management

2.1 AAI in Public Management

(1) Public transportation management. In the field of transportation, artificial intelligence technology first made the public think of driverless cars, and then directly question their safety. Therefore, for the AAI technology in the transportation field, the first thing to be solved is that the public must increase their trust in artificial intelligence systems, so product development must improve the safety index of the artificial intelligence driving system and the reliability index of its emergency response. The AAI in traffic management can better prevent traffic accidents, maintain road safety, and implement effective, concurrent management of drivers, vehicles, and roads.
(2) Public medical and health management. As we all know, the medical field is a data-intensive industry. The advent of the big data era has increased the speed of data collection and integration and greatly improved the data accumulation in the medical field. When artificial intelligence technology is combined with big data technology, it brings good news to the medical and health field. When the combination of artificial intelligence technology and big data is trusted by medical staff, it can be used reasonably and effectively; likewise, this must be established under the conditions permitted by policies and regulations. In the fields of surgery, medical image recognition, and patient monitoring, artificial intelligence technology can speed up the work efficiency of medical staff and improve the management system of the entire hospital, and the combination of artificial intelligence technology and big data can infer potential health risks and support timely preventive work.
(3) Public education management. Over a long period in the past, the successful AAI in education has changed the form of education, and the way people obtain education has become more and more convenient. The successful AAI technology in the education field does not mean that the traditional education model can be replaced, but it also broadens
the way of education to a certain extent. As long as there is a network environment, an "anytime, anywhere" education model can be realized. However, in the use of educational robots, the most important thing is data. Artificial intelligence technology can feed the collected educational data into a computer for analysis, formalize the best current teaching experience, and drive personalized educational resources with data. These effects go far beyond traditional education. All in all, the role of artificial intelligence in the education field is mainly to integrate high-quality educational resources, support real-time interaction, and achieve personalized teaching.

(4) Management in the field of public security. Research on and AAI in the field of public safety and protection has made major cities rely on these systems now and for the next ten years, and this dependence is based on the trust of the general public. The AAI in public safety and protection mainly concerns two aspects: people and vehicles. For people, the main applications are face recognition and pedestrian recognition; for vehicles, the main instrument is the camera, although the camera's resolution and angle limit recognition accuracy. In the security field, artificial intelligence mainly relies on deep learning to improve the accuracy of face recognition and thereby greatly improve the safety of the public's lives and property. Security work relies mainly on video recording, so data becomes the core resource. As technology develops, however, video data keeps growing, which creates new difficulties for the security field; positioning systems have emerged in response, and rapid positioning makes security work more intelligent.

2.2 Risks in the AAI in Public Management

(1) Ethical risks. As intelligent robots gradually enter human life, over a long period of time they may come to surpass and even dominate human beings, and cases of human rights violations involving intelligent robots will occur one after another. Therefore, more and more people are drawn into the debate over whether to grant intelligent robots "human rights". This question should not be treated one-sidedly; it must be analyzed and judged in accordance with clear regulations or guidelines. People must therefore restrict the development of artificial intelligence technology from an ethical point of view and guide it in a direction that is more conducive to human interests.

(2) Public safety risks. Recently developed techniques use artificial intelligence to attack text-to-speech conversion systems: with such techniques, any voice can be altered, and any audio can be made to output whatever the attacker wants. Nowadays, smart speakers and voice assistants are everywhere, so such techniques will undoubtedly bring huge hidden dangers to public safety. It can be seen that the current threshold for using artificial intelligence technology to commit crimes is too low; this is an early warning to human society, and it is
necessary to guard against the abuse of artificial intelligence technology leading to threats to social public safety.

(3) Unemployment risk. Ever since the first industrial revolution, mankind's use of technology to free its hands has brought the problem of unemployment. The biggest difference now is that, compared with earlier industrial revolutions, artificial intelligence brings a deeper revolution: it is widely used in daily life and helps humans analyze, judge, and make decisions in many fields, so more kinds of work can be displaced.

(4) Environmental risks. The rapid development of artificial intelligence depends on people's constant demand for its products, and as a technology it still cannot escape the problem of alienation. Blind focus on commercial benefits has pushed aside the environmental problems on which human survival depends. During this period, the exploitation of resources to produce parts and components, and the disposal of obsolete products, have damaged natural resources and the ecological environment, producing a serious imbalance in the relationship between man and nature; environmental ethics is seriously lacking in the development and application of the intelligent age. At present the global population is expanding, resources are scarce, and a large number of weak artificial intelligence products cover the market, while people's desire for fashion and technology grows ever stronger. Precisely because of this growing demand, large amounts of environmental resources are extracted, and scientific and technical personnel pay more attention to technological updates than to environmental care. Environmental pollution is handled with a "pollute first, treat later" attitude, and this blind approach has caused even more serious pollution, greatly increasing the cost and difficulty of governance.

2.3 Recommendations for Risks

(1) Improve public risk awareness. The perceived risk of artificial intelligence technology is limited by the public's cognitive level, which leads to inaccurate or even exaggerated risk assessments. It is therefore necessary to improve the public's risk awareness, which can be done in the following ways. First, strengthening the public's education in scientific and technological knowledge will reduce the fear of risk. Second, guidance can keep the public from fearing unknown risks: scientific literacy affects the public's perception of risk, so improving it helps people analyze risk issues. In addition, risk communication among the public, experts, and the government should be strengthened. First of all, communication between experts and the public can be strengthened, letting experts explain how a specific technology is developed and what risks it will produce. Secondly, modern media is the main way for
the public to understand the sources of new technologies and technological risks. Therefore, an effective management mechanism for the media should be established so that false news cannot mislead the public. Finally, it is necessary to transmit a correct scientific outlook to people through daily science popularization and to provide reliable psychological support when risks are perceived.

(2) The moral responsibility of scientists. First of all, we have to understand a scientist's causal responsibility: when a scientist invents a technology and the technology causes harm, the relationship is causal, because if the scientist had not invented that technology, such consequences would not have followed. Secondly, scientists develop technology voluntarily and freely. In such a situation, therefore, the scientist should bear moral responsibility for his research. It is necessary to consider not only the moral responsibility of the subject but also the object of the behavior, using the term "moral considerations" to classify moral responsibility; the threats to which the object is exposed also fall within the scope scientists should consider. Therefore, when a scientist engages in scientific research, he must bear a certain responsibility for its influence on society. To strengthen the moral education of scientists, ethical norms can be formulated. At present, the industry has issued guidelines on the ethics of artificial intelligence technology, trying to restrain personnel engaged in artificial intelligence through moral education.

(3) Improve relevant laws and regulations. Risk management methods for artificial intelligence technology should be provided, in the hope that artificial intelligence-related industries can be constrained by sound laws and develop in a direction beneficial to mankind. At present, some countries have issued related laws and regulations, and industry standards formulated by some organizations are expected to support legislation. Because of technological monopolies between countries, many algorithms are opaque, which makes the weaker party bear a greater degree of risk. Therefore, there is an urgent need to formulate comprehensive laws and regulations internationally and to take precautions against artificial intelligence risks in advance. Establishing a sound legal system is the most effective way to prevent the risks of artificial intelligence technology.

2.4 Risk Assessment Model

Assume that a threat i occurs. The security risk caused by the resulting security incident can be expressed through the value V of the asset affected by the threat, the probability Pt_i of the threat occurring, and the possibility Pv_i of the vulnerability being exploited. The security risk R_i can be quantified as:

R_i = V × Pt_i × Pv_i    (1)

If the system adopts a security measure m, the measure is applied to the system and becomes part of it, introducing a new threat factor j. The risk R_im can then be quantified as:

R_im = V × Pt_i × Pv_i × S_m + R_j    (2)

where the risk value introduced by the security measure itself is:

R_j = V × Pt_j × Pv_j    (3)
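To make the quantification concrete, the short sketch below implements formulas (1)–(3) directly in Python. The numeric values, and the reading of S_m as a mitigation factor applied by measure m, are illustrative assumptions rather than definitions given in the paper.

```python
def security_risk(V, Pt_i, Pv_i):
    """Formula (1): security risk R_i caused by threat i acting on an asset of value V."""
    return V * Pt_i * Pv_i


def risk_with_measure(V, Pt_i, Pv_i, S_m, Pt_j, Pv_j):
    """Formulas (2) and (3): residual risk R_im after security measure m is applied.

    S_m is read here as the mitigation factor of measure m, and (Pt_j, Pv_j)
    describe the new threat factor j that the measure itself introduces.
    """
    R_j = V * Pt_j * Pv_j                # formula (3): risk introduced by the measure
    return V * Pt_i * Pv_i * S_m + R_j   # formula (2)


# Illustrative values only (not taken from the paper)
print(security_risk(V=100, Pt_i=0.3, Pv_i=0.5))                 # R_i  = 15.0
print(risk_with_measure(V=100, Pt_i=0.3, Pv_i=0.5, S_m=0.4,
                        Pt_j=0.05, Pv_j=0.2))                   # R_im = 7.0
```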
3 Experimental Research on the Application Risk of Artificial Intelligence in Public Management

3.1 Experimental Subjects and Methods

This experiment takes the AAI in public management and its application risk as the research object, using field investigation methods to analyze the actual application effect of artificial intelligence in various areas of public management and to study the degree of risk of its application.

3.2 Data Collection

This experiment collected and sorted the required data through a group investigation organized around the research tasks, producing the final experimental data.
4 Experimental Research and Analysis on the AAI in Public Management Risk

4.1 Application Analysis of Artificial Intelligence in Public Management

This experiment conducts experimental research based on the AAI in public management, and analyzes the actual application effect of artificial intelligence technology in public transportation, public medical treatment, public education, and public security management. The experimental research results are shown in Table 1.

Table 1. The actual application effect of artificial intelligence in public management

Public management category   Application effect
Public transit               83.65%
Public medical               79.27%
Public education             89.03%
Public security              82.49%
As shown in Fig. 1, the actual application effect is the best in public education management, which is 89.03%, followed by public transportation and public security, with an average of about 83%, and finally in public medical management, the actual application effect is 79.27%. From the overall analysis, there is still a lot of room for artificial intelligence technology to be applied in public management, which needs to be explored.
Fig. 1. The actual application effect of artificial intelligence in public management (bar chart; x-axis: public management category, y-axis: percentage)
4.2 Application Risk Analysis of Artificial Intelligence in Public Management

This experiment conducted experimental research on the risks of artificial intelligence technology in public management applications, and analyzed from various angles which parts of its application carry the most risk, so that targeted improvements can be made. The experimental research results are shown in Table 2.

Table 2. Application risk analysis

Risk category        Influence level
Ethical risk         31.12%
Public safety risks  33.67%
Unemployment risk    21.48%
Environmental risk   13.73%
As shown in Fig. 2, public safety risk is the largest at 33.67%, followed by ethical risk at 31.12%, unemployment risk at 21.48%, and environmental risk at 13.73%. These data indicate where the prevention of the various risks should be strengthened in a targeted manner.
Fig. 2. Application risk analysis (horizontal bar chart; x-axis: percentage from 0.00% to 40.00%, y-axis: risk category)
5 Conclusion

As an important technology of human society, artificial intelligence has had a huge impact on our lives, so it is necessary to discuss its risks. Through a discussion of the nature of artificial intelligence technology risk, this article analyzes its internal causes and external effects and proposes risk prevention strategies and plans. First, from the perspectives of technological rationality and social rationality, it puts forward the viewpoint that humanities and technology should be integrated in artificial intelligence. Secondly, it emphasizes the importance of improving the public's perception of risk, raising risk awareness in the understanding of artificial intelligence. Third, it argues that the moral education of scientists can improve accountability for artificial intelligence technology risks. We hope that future study and research can advance this work further and make it more complete.
References

1. Cierco, A.A.: Artificial intelligence in financial markets: cutting edge applications for risk management, portfolio optimization and economics. Comput. Rev. 58(11), 652 (2017)
2. Barzegar, R., Adamowski, J., Moghaddam, A.: Application of wavelet-artificial intelligence hybrid models for water quality prediction: a case study in Aji-Chay River, Iran. Stochast. Environ. Res. Risk Assess. 30(7), 1797–1819 (2016). https://doi.org/10.1007/s00477-016-1213-y
3. Li, H.B.: Modeling method of tax management system based on artificial intelligence. Int. J. Artif. Intell. Tools 29(07n08), 2040023 (2020)
4. Gunasekeran, D.V., Tseng, R., Tham, Y.C., et al.: Applications of digital health for public health responses to COVID-19: a systematic scoping review of artificial intelligence, telehealth and related technologies. Npj Digit. Med. 4(1), 40 (2021)
5. Analysis on the application status of artificial intelligence in COVID-19. Chin. J. Hosp. Adm. 36, E013–E013 (2020)
6. Kobrinskii, B.A., Grigoriev, O.G., Molodchenkov, A.I., Smirnov, I.V., Blagosklonov, N.A.: Artificial intelligence technologies application for personal health management. IFAC-PapersOnLine 52(25), 70–74 (2019)
7. Galanos, V.: Exploring expanding expertise: artificial intelligence as an existential threat and the role of prestigious commentators, 2014–2018. Technol. Anal. Strateg. Manag. 31(4), 421–432 (2019)
8. Asadizadeh, M., Hossaini, M.F.: Predicting rock mass deformation modulus by artificial intelligence approach based on dilatometer tests. Arab. J. Geosci. 9(2), 1–15 (2016). https://doi.org/10.1007/s12517-015-2189-5
9. Yingjun, H.: Cultivation of social risk management system from the perspective of public management: strategy, logic, and analysis framework. Adm. Forum 025(003), 104–111 (2018)
10. Mccomas, K.A., Scherer, C.W.: Reassessing public meetings as participation in risk management decisions. RISK: Health Saf. Environ. 9(4), 6 (2016)
11. Wiener, J.B.: Managing the iatrogenic risks of risk management. RISK: Health Saf. Environ. 9(1), 6 (2016)
12. Shepherd, E., Sexton, A., Duke-Williams, O., et al.: Risk identification and management for the research use of government administrative data. Rec. Manag. J. 30(1), 101–123 (2020)
Equipment Fault Diagnosis Based on Support Vector Machine Under the Background of Artificial Intelligence

Lina Gao1(B) and Lin Zhang2

1 College of Information Science and Technology, Bohai University, Jinzhou, Liaoning, China
2 School of Physical Education, Bohai University, Jinzhou, Liaoning, China
Abstract. Under modern production conditions, equipment is becoming larger in scale, more complex in structure, more complete in function, and more intelligent. Once an equipment system fails, it directly affects economic benefits and sometimes produces a serious social impact. Based on support vector machine technology, this paper studies equipment failure diagnosis. The basic principles of support vector machines are analyzed, the commonly used equipment failure diagnosis techniques are systematically summarized, and an equipment failure diagnosis process based on support vector machines is designed. Support vector machine models under different kernel functions are introduced into equipment failure diagnosis. Their computational complexity depends on the number of support vectors, which avoids the curse of dimensionality and provides excellent generalization capability.

Keywords: Artificial intelligence · Support vector machine · SVM · Equipment failure · Diagnosis process
1 Introduction

Equipment operates under various environmental conditions and is subjected to various stresses and forms of energy, which cause changes in its technical state, that is, performance degradation that leads to failures. For a single type of failure with one main cause, as long as the mechanism of this type of failure is understood, the degree of performance degradation and the time of failure can be predicted quantitatively and preventive measures can be determined. However, failures occur accidentally, the types of failure are complex, and the causes are varied and difficult to check, so failures show obvious randomness and are quite difficult to predict. For ordinary equipment, such occasional failures are easier to find and can be dealt with by post-failure maintenance. For large or complex equipment, however, failure will not only cause production shutdowns and major economic losses but may also cause serious safety accidents and disasters. Therefore, post-failure repair cannot be relied on, and equipment failure diagnosis technology must be used.
Equipment diagnosis is very similar to medical diagnosis. Regular inspection of equipment is equivalent to a health examination of the human body, and abnormalities in the equipment's technical state found during regular inspection are equivalent to symptoms found in a physical examination. Analyzing and judging, from the technical state of the equipment, the degree of deterioration, the failure location, the type of failure, and its cause is equivalent to diagnosing the location, name, and cause of a disease from the symptoms of the human body. Equipment failure diagnosis and human disease diagnosis are thus essentially the same: both use temperature, color, noise, vibration, pressure, odor, deformation, corrosion, leakage, abrasion, and other signs to indicate the characteristics of the state being examined. It can be seen that "diagnosis" refers to the detection and identification of failures when an abnormal phenomenon occurs in the object of diagnosis or an abnormality is found in a preventive inspection. The purpose of equipment diagnosis is to detect deterioration and failure symptoms as early as possible, or while the failure is still minor, and to take targeted prevention or elimination measures to restore and maintain the normal performance of the equipment. This paper studies a failure diagnosis method based on support vector machines, which promotes the standardization of enterprise equipment maintenance management and improves the economic benefits of the enterprise.
2 Equipment Deterioration

Deterioration of equipment means that the equipment has reduced or lost its specified functions, either through wear or fatigue of parts and components or through deformation, corrosion, and aging caused by the environment. Equipment deterioration is a general term for abnormal operation, performance degradation, sudden failure, equipment damage, and reduction of economic value. The process of equipment deterioration is essentially a process of wear: wear increases fit clearances, triggers vibration and impact, and damages low-strength parts. This is manifested not only in increased noise and vibration but also in frequent replacement of spare parts, so economic benefits are reduced, production costs rise, and unnecessary economic losses occur. To avoid this, it is necessary to study equipment deterioration and formulate specific measures to reduce the losses it causes [1]. The gradual process of equipment deterioration is shown in Fig. 1. According to its manifestation, equipment deterioration can be divided into tangible deterioration and intangible deterioration. Tangible deterioration can be measured with meters or instruments and is usually caused by material wear or changes in material properties. Intangible deterioration arises from technological progress: as equipment manufacturing processes improve continuously and social labor productivity increases, the reproduction value of similar equipment falls, so the original equipment depreciates relatively. Both tangible and intangible deterioration cause depreciation of the original value of machinery and equipment. The difference is that equipment with tangible deterioration, especially severe tangible deterioration, often cannot work until repaired, whereas for equipment with intangible deterioration, even severe intangible deterioration, the physical
form of fixed assets may not be deteriorated and can still be used. Whether the continued use is economically cost-effective requires analysis and research.
Fig. 1. Gradual process of equipment deterioration: minor defects (no downtime, function temporarily unaffected) call for health management to prevent deterioration through active and advance maintenance; moderate defects (may cause short downtime or affect equipment functions) call for deterioration management to prevent failure through preventive and condition-based maintenance; serious defects (on the verge of failure, with serious consequences if not repaired) call for failure management to reduce downtime through corrective and breakdown maintenance.
3 Fundamentals of Support Vector Machines

SVM is a binary classification model. Its purpose is to find a hyperplane that divides the samples; the division principle is to maximize the margin, and the problem is finally transformed into a convex quadratic programming problem to solve. The core technique of SVM is constructing the optimal hyperplane, and the basic principle is shown in Fig. 2. The solid and hollow points in Fig. 2 represent the two classes of samples; H is the classification hyperplane, and H1 and H2 pass through the samples of each class that are closest to H and are parallel to H. The distance between H1 and H2 is called the classification interval (margin). The optimal classification hyperplane not only separates the two classes correctly but also maximizes this interval. When H satisfies the optimal hyperplane condition, the training sample points lying on H1 and H2 are called support vectors. SVM models range from simple to complex: when the training samples are linearly separable, the hard margin is maximized to learn linearly separable support vector
machines; when the training samples are approximately linearly separable, the soft margin is maximized to learn linear support vector machines; and when the training samples are linearly inseparable, nonlinear support vector machines are learned through kernel techniques and soft margin maximization. Nonlinear problems cannot be solved effectively by linearly separable support vector machines, but a nonlinear model can classify them well: the training samples can be mapped from the original space to a higher-dimensional space in which they become linearly separable. If the original space has a finite dimension, that is, the number of attributes is finite, then there must exist a high-dimensional feature space in which the samples are separable.
Fig. 2. Basic principle of SVM
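As a small, hedged illustration of the geometry just described, the sketch below fits a near hard-margin linear SVM on a toy two-class dataset with scikit-learn and reads off the support vectors and the classification interval 2/||w||; the data points and the large C value are assumptions made for the example, not values from the paper.

```python
import numpy as np
from sklearn.svm import SVC

# Toy linearly separable two-class data (illustrative values only)
X = np.array([[1.0, 1.0], [1.5, 2.0], [2.0, 1.5],   # class -1
              [4.0, 4.0], [4.5, 5.0], [5.0, 4.5]])  # class +1
y = np.array([-1, -1, -1, 1, 1, 1])

# A very large C makes the soft-margin SVM behave like the hard-margin case above
clf = SVC(kernel="linear", C=1e6).fit(X, y)

w = clf.coef_[0]                    # normal vector of the classification hyperplane H
b = clf.intercept_[0]
margin = 2.0 / np.linalg.norm(w)    # classification interval between H1 and H2

print("support vectors:\n", clf.support_vectors_)
print("hyperplane H: %.3f*x1 + %.3f*x2 + %.3f = 0" % (w[0], w[1], b))
print("classification interval:", margin)
```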
A basic support vector machine can only handle two-class classification problems, whereas many practical applications involve multi-class classification. To apply support vector machines to a multi-class problem, the algorithm itself can be improved by changing the form of its quadratic programming so that it handles multiple classes directly, or the multi-class problem can be transformed into a combination of multiple binary classification problems. Specifically, four methods are commonly used (a small code sketch of the second and third is given after this discussion). The first is the multi-class SVM, in which the objective function is optimized on top of the support vector machine to achieve multi-class classification. Its advantage is that the parameters of all sub-classifiers are considered and solved simultaneously in a single optimization problem; its disadvantage is that the scale of the optimization problem grows with the number of categories and samples, which makes the objective function in practical problems more complicated. The second is the "one-to-many" (one-vs-rest) algorithm: as many binary support vector machine models are constructed as there are class labels, each model treating one class as the positive class and all remaining classes as negative. Its advantage is that the number of models equals the number of categories, but its disadvantage is that every model must be trained on all samples, which leads
to a long training time. The third is the "one-to-one" (one-vs-one) algorithm: every pair of classes forms a binary classification problem, for which a support vector machine model is established with one positive and one negative class, and the final prediction is made by voting over all classifiers. Its advantage is that each model only needs the training data of two classes; its disadvantage is that the number of models grows rapidly (quadratically) as the number of categories increases. The fourth is the structured SVM, mainly the directed acyclic graph method and the hierarchical multi-class method, which has the advantages of fewer sub-classifiers and fast calculation, but classification errors propagate through the structure, which lowers the classification accuracy, and different structures have a large impact on the classification results.
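The sketch referred to above wraps the same SVC base model in scikit-learn's one-vs-rest and one-vs-one meta-classifiers on synthetic data standing in for fault categories; the dataset and parameter values are assumptions made for the example, not settings from the paper.

```python
from sklearn.datasets import make_classification
from sklearn.metrics import accuracy_score
from sklearn.model_selection import train_test_split
from sklearn.multiclass import OneVsOneClassifier, OneVsRestClassifier
from sklearn.svm import SVC

# Synthetic 4-class data standing in for fault categories (illustrative only)
X, y = make_classification(n_samples=400, n_features=10, n_informative=6,
                           n_classes=4, n_clusters_per_class=1, random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)

base = SVC(kernel="rbf", C=1.0, gamma="scale")
ovr = OneVsRestClassifier(base).fit(X_tr, y_tr)  # "one-to-many": one model per class
ovo = OneVsOneClassifier(base).fit(X_tr, y_tr)   # "one-to-one": one model per class pair

print("one-vs-rest models:", len(ovr.estimators_),
      "accuracy:", accuracy_score(y_te, ovr.predict(X_te)))
print("one-vs-one models:", len(ovo.estimators_),
      "accuracy:", accuracy_score(y_te, ovo.predict(X_te)))
```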
4 Equipment Fault Diagnosis Technology

Failure diagnosis technology refers to methods that use various monitoring means to judge whether a system is working normally in its operating state. Based on the information obtained from monitoring, combined with system operation and historical data, possible failures are predicted, analyzed, and judged, so that equipment can be maintained in time and kept running normally. Modern failure diagnosis technology can be roughly divided into diagnosis based on mathematical models and diagnosis based on artificial intelligence. Diagnosis based on mathematical models includes methods based on input–output signal processing, state estimation, and process parameter estimation. Diagnosis based on artificial intelligence is the most promising direction. In addition to failure diagnosis based on support vector machines, the following technologies are commonly used:

(1) Failure diagnosis technology based on case-based reasoning [2]. Case-based reasoning is a relatively mature branch of artificial intelligence that builds on past practical experience. Failure diagnosis based on case-based reasoning has good learning ability, and a great deal of maintenance experience is preserved and reused by on-site maintenance personnel. Similar equipment failures have similar solutions, so by retrieving similar failure cases from the case base, the corresponding failure types and solutions can be obtained. If the retrieved solution cannot solve the new problem, it is revised, and the new problem together with its solution is stored in the case library.

(2) Failure diagnosis technology based on expert systems [3]. An expert system is an intelligent computer program with specialized knowledge and experience. By modeling the problem-solving ability of human experts, it adopts the knowledge representation and reasoning techniques of artificial intelligence to handle complex problems that would otherwise require an expert, reaching the same solution as the expert would. Expert systems are widely used in failure diagnosis, and combining failure detection and diagnosis technology with expert systems helps ensure the safety and reliability of a project. The knowledge base is a collection of expert domain knowledge, and the inference engine applies the rules to the acquired information to perform failure diagnosis and output the results.
(3) Failure diagnosis technology based on neural networks. A neural network is a mathematical or computational model that imitates the structure and function of a biological neural network and performs its computation through a large number of artificial neurons. A neural network failure diagnosis model has three layers: the input layer receives the various failure information from the system; the middle (hidden) layer converts the failure information obtained from the input layer into targeted representations through internal learning and processing; and the output layer, after the weight coefficients have been adjusted, produces the failure handling result corresponding to the input failure pattern. Neural network failure diagnosis uses sample training to make the node connection weights converge and stabilize; the symptom parameters of the sample to be diagnosed are then fed into the network, the actual output values are computed, and the failure category is determined from the pattern of output values. A small sketch of such a three-layer classifier is given after this list.

(4) Failure diagnosis technology based on information fusion [4]. Information fusion is a new type of information processing technology whose core content is coordination, optimization, and comprehensive processing, and which arises mainly from the specific problems of multi-sensor systems. In failure diagnosis based on information fusion, the data from the sensors are first fused; the fused information is then combined with other knowledge and inferences are drawn according to certain rules; finally, the relevant data are stored in a database and, combined with the knowledge base or existing information databases built up in the industry, analyzed at a finer level with data mining technology to generate more valuable failure diagnosis information.
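The sketch referred to in item (3) is given below: a three-layer classifier (input, one hidden layer, output) built with scikit-learn's MLPClassifier. The symptom features and failure labels are randomly generated stand-ins, so the model learns nothing meaningful here; the point is only to show the structure described in the text.

```python
import numpy as np
from sklearn.neural_network import MLPClassifier
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(0)
X = rng.normal(size=(300, 6))        # 300 monitored samples, 6 symptom parameters
y = rng.integers(0, 3, size=300)     # 3 failure categories (random stand-in labels)

X_std = StandardScaler().fit_transform(X)   # the input layer receives scaled symptom data

# One hidden (middle) layer maps symptoms to an internal representation;
# the output layer yields the failure category, mirroring the three-layer model above.
net = MLPClassifier(hidden_layer_sizes=(16,), max_iter=500, random_state=0).fit(X_std, y)
print("predicted failure class of the first sample:", net.predict(X_std[:1])[0])
```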
5 Process on Equipment Fault Diagnosis Based on Support Vector Machine

The support vector machine is a widely used machine learning method, and it is introduced here into equipment failure diagnosis: support vector machine models under different kernel functions are used to diagnose equipment failures. The computational complexity depends on the number of support vectors rather than on the dimensionality of the sample space, which avoids the "curse of dimensionality" in a sense. A small number of support vectors determine the final result, so the method is not sensitive to outliers, can capture key samples while eliminating a large number of redundant ones, and has excellent generalization capability. Equipment failure diagnosis based on support vector machines is a complex process that requires a series of steps, including data acquisition and preprocessing, support vector machine training and optimization, and failure diagnosis and result output. The process is shown in Fig. 3 [5].

(1) Data collection and preprocessing. Operating data of the normal state and of failure states are collected and, after preprocessing, divided into two parts: a training sample base and a test sample base. The training sample base is used to train the support vector machine model, and the test samples are used to test the accuracy of the failure diagnosis model. The collected data may differ in order of magnitude or in dimension, which will adversely affect the training convergence of
the model. If the data are not preprocessed, evaluation indicators of different dimensions cannot be compared comprehensively, and the magnitudes of different dimensions differ so much that the model may fail to converge to the optimal value or converge only slowly. Data preprocessing usually uses one of two methods. The first is min–max normalization, which uses the maximum and minimum values of the collected data to scale each dimension through a linear transformation and maps it onto the interval from 0 to 1, turning dimensional data into dimensionless data without altering its relative distribution. The second is Z-score standardization, which processes the data using its mean and standard deviation; it is used when the maximum and minimum values of a dimension are unknown or when values may fall outside the observed range. With this method the values are rescaled so that each dimension has a mean of 0 and a standard deviation of 1.
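The following minimal sketch implements the two preprocessing methods just described with NumPy; the sample values are invented for illustration and do not come from any real monitoring data.

```python
import numpy as np

def min_max_normalize(x):
    """Linearly map each column onto [0, 1] using its minimum and maximum."""
    x = np.asarray(x, dtype=float)
    return (x - x.min(axis=0)) / (x.max(axis=0) - x.min(axis=0))

def z_score_standardize(x):
    """Rescale each column to zero mean and unit standard deviation."""
    x = np.asarray(x, dtype=float)
    return (x - x.mean(axis=0)) / x.std(axis=0)

# Hypothetical raw monitoring data whose dimensions differ greatly in magnitude
raw = np.array([[0.02, 1500.0, 75.0],
                [0.05, 1480.0, 82.0],
                [0.03, 1523.0, 68.0]])
print(min_max_normalize(raw))
print(z_score_standardize(raw))
```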
Fig. 3. Process on equipment fault diagnosis based on support vector machine
(2) Support vector machine training and optimization. Support vector machine training needs improvement in two respects: the convergence speed of the algorithm and the handling of large-scale sample sets. Many solutions and improved algorithms have been proposed; the more commonly used ones include the following. The first is the decomposition method, whose basic idea is to solve the dual optimization problem through loop iteration: the original problem is decomposed into several easy-to-handle sub-problems, which reduces the scale of the problem solved by the optimization algorithm, and the sub-problems are solved repeatedly according to an iterative strategy until the result converges to the optimal solution of
the original problem. The second is the incremental learning method [6]: each round, only a small batch of training samples that the conventional quadratic programming algorithm can handle is selected; after training, the support vectors are retained, the non-support vectors are discarded, and the retained vectors are mixed with new samples for the next round of training, until the training samples are used up (a small sketch of this loop is given below). The advantage of this method is its simplicity and speed, but its disadvantage is that it relies heavily on the support vectors of the historical training set, which may cause useful information to be lost prematurely and directly affect subsequent learning accuracy. The third is simulated sample training [7]. Manually labeled training samples are limited in number and difficult to obtain, whereas a simulated sample generation algorithm can quickly produce large-scale training samples; this approach has shown good performance in support vector machine text classification and can supplement machine learning when manually labeled samples are insufficient.

(3) Failure diagnosis and result output. The main task is to diagnose and classify the test data, compare the output results with the actual results, calculate the correct rate of each part of the diagnosis, and use it to guide actual equipment maintenance work.
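The sketch referred to above illustrates the incremental learning loop under the stated idea (train on a batch, keep only the support vectors, mix them with the next batch). The use of scikit-learn's SVC and the random batch data are assumptions made for illustration, not the exact algorithm of reference [6].

```python
import numpy as np
from sklearn.svm import SVC

def incremental_svm(batches, kernel="rbf", C=1.0):
    """Train an SVM batch by batch: keep only the support vectors of each round
    and mix them with the next batch, as described for the incremental method."""
    model, kept_X, kept_y = None, None, None
    for X_batch, y_batch in batches:
        if kept_X is None:
            X_train, y_train = X_batch, y_batch
        else:
            X_train = np.vstack([kept_X, X_batch])
            y_train = np.concatenate([kept_y, y_batch])
        model = SVC(kernel=kernel, C=C).fit(X_train, y_train)
        kept_X = X_train[model.support_]   # retain support vectors only
        kept_y = y_train[model.support_]
    return model

# Hypothetical stream of labelled batches (random stand-in data)
rng = np.random.default_rng(0)
batches = [(rng.normal(size=(50, 4)), rng.integers(0, 2, size=50)) for _ in range(4)]
clf = incremental_svm(batches)
print("support vectors kept after the last round:", len(clf.support_))
```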
References

1. Liu, C.G., Liu, J., Li, X.: Discussion on reliability maintenance mode based on equipment deterioration prevention. Biotech World 10(6), 110–111 (2012)
2. Wang, F.Z., Li, Y.Y.: Research on fault diagnosis method of motor based on case-based reasoning. Manuf. Autom. 40(12), 11–14 (2018)
3. Li, H.M., Chen, L.: Fault diagnosis on servo system of shipborne satellite communication station based on expert system. Comput. Technol. Autom. 39(2), 6–11 (2020)
4. Sun, J.W.: Research and implementation on integrated service platform for equipment monitoring maintenance. Master's thesis of Liaoning University of Technology (2019)
5. Yang, Y.D.: Research on fault diagnosis of marine auxiliary boiler based on support vector machine. Master's thesis of Dalian Maritime University (2020)
6. Wang, X.D., Wang, J.Q.: A survey on support vector machines training and testing algorithms. Comput. Eng. Appl. 41(13), 75–78+175 (2004)
7. Zhang, H.S., Gao, H.B.: Research on the text classification of parallel SVM based on the simulated samples. J. Shaoguan Univ. 40(12), 13–17 (2019)
Integration of Artificial Intelligence and Higher Education in the Internet Era

Meijuan Yuan(B)

Changshu Institute of Technology, Changshu, Jiangsu, China
Abstract. With the continuous development of science and technology and the wide application of AI technology, AI systems based on computer platforms are gradually being introduced into the field of education. To strengthen the integration of AI and higher education and make the integration process more rapid and innovative, the application of AI systems is necessary. This paper summarizes the concept of AI, puts forward an AI prediction model, and discusses the application of AI in teaching. At the same time, it gives an overview of higher education and discusses management decision-making in higher education. The results show that the number of published papers is on the rise, with 657 papers published in 2020; this is consistent with the rapid growth of AI technology after 2018 and urges more and more scholars to explore the integration of AI technology and education.

Keywords: Internet · Artificial intelligence · Higher education · Education equality
1 Introduction

It is generally observed that, as social and national needs change, teaching practices change over time. Teacher education is one such field: with the progress and application of science and technology, it has undergone remarkable changes. The latest innovations and technological revolutions have greatly affected the affordability, accessibility, and adaptability of modern ICT and ICT-related tools and gadgets. With the continuous development of science and technology, many experts have studied the integration of AI and higher education. For example, some domestic teams have studied the teaching mode of AI courses, investigated AI-based random search algorithms, introduced multimedia teaching methods for AI courses together with the relevant simulation tools, put forward a series of assignments and projects, and evaluated the influence of the simulation environment on students' interest and self-confidence. In developing their system, they carried out requirements elicitation, modeling, and SE knowledge base construction, used AIML functions to provide appropriate responses, and enabled quick adaptation to new knowledge areas. A two-layer knowledge representation model of "knowledge body and object block" and a two-
layer reasoning principle model based on “frame reasoning and object block reasoning” are designed. This paper makes a qualitative evaluation of the classroom environment of the reform class, and makes a quantitative evaluation of the classroom environment of the reform class by using the University Classroom Environment Inventory (cucei). This paper makes a qualitative comparison of the classroom environment between the reform class and the ordinary class. The general monitoring system based on unsupervised neural network classifies the operation process in various educational application environments. The monitor classifies the sensing data into its own clusters and shows its potential diagnostic ability [1]. Some experts have studied the comprehensive training mode of higher education. The developed algorithm can make use of the principles of AI, linguistics, higher education and industrial engineering to identify and display the special terms of courses hierarchically. Based on the general instructional design theory, vocabulary is included in the syllabus as an auxiliary teaching method to promote barrier free engineering education. Use multiple comparator sets to generate engineering vocabularies for the course. Using the collected data, this paper discusses the effectiveness of this automated program in the context of engineering research methods, and determines how to make this program available to more educators in the field of engineering education. An easy to implement universal latch method is demonstrated, which allows a balance between persistence and flexibility in interrupt events. This assessment based system helps to automatically reassess current focus through existing action selection mechanisms. The mechanism has a flexible latch mechanism, which greatly improves the efficiency of processing multiple competing targets, but the cost is that the extra code (or cognitive) complexity is very small [2]. Some experts have studied the effectiveness of mobile intelligent terminal in higher education. The multidimensional data set model organizes the session data into three dimensions. When the data cube is ready, an efficient data mining algorithm is used for clustering and correlation analysis. The analysis results show that the network cluster can guide the implementation of prefetching system. This paper proposes an integrated web caching and Web Prefetching model, which solves the problems of prefetching attack, replacement strategy and network traffic increase in the integrated framework. The core of the integration solution is the prediction model based on the statistical correlation between web objects. By querying the data cube of web server log, the model can be updated frequently. This paper introduces the unified entropy theory of pattern recognition, reveals the information process of learning and recognition and the decisive role of mutual information. A subspace pattern recognition method based on maximum Mi discrimination is proposed to obtain the best recognition performance, which is of great significance for solving complex pattern recognition problems. Handwritten character recognition experiments show that the method is effective. 
Through the multi-disciplinary integration of the AI platform and the university sports management decision-making system, the multi-disciplinary comprehensive AI management system is constructed by using the computer multimedia technology and neural network technology, so as to improve the university sports management decision-making process. On this basis, the NSCT integrated ant colony optimization method is proposed. This method introduces a standard shrinkage filter in NSCT domain to generate the light source invariance of a given image. Then, in order
to capture important geometric structure and reduce feature dimension, ant colony algorithm is used. This method can detect the edge better and improve the detection quality. Finally, image matching algorithm is used for recognition. The algorithm uses a group of feature points to explore their geometric relationship in graph arrangement. The argument is to use the knowledge needed for learning to build an instructional framework that challenges digital technology and provides a truly enhanced learning experience. A more reasonable background value calculation formula based on Newton interpolation is adopted, and a more ingenious initial condition algorithm is proposed by increasing any constant [3]. Although the research results of the integration of AI and higher education are relatively rich, there are still some deficiencies in the research of the integration of AI and higher education under the background of Internet. In order to study the integration of AI and higher education under the network environment, this paper studies the network, AI and higher education, and obtains the management decision of higher education. The results show that the Internet is conducive to the integration of AI and higher education.
2 Method

2.1 Internet and Education Equality

With the rapid development of the Internet and the Internet of things, students can obtain more learning resources online. Managers only need to publish recorded videos as public resources and deliver them to whoever needs them. This is a great change that the times and the development of science and technology have brought to the form of educational resources, and equality is achieved at the level of the influence on students. Because intelligent education changes both teaching and learning modes, the same teaching resources can be played repeatedly and the playback speed can be adjusted freely, so students can revisit, in a targeted way, the knowledge points they have not fully digested and reach the same level of learning effect as other students. In the era of AI, every student can maximize academic success, enrich the toolbox of learning strategies, and achieve all-round individual development.

2.2 Artificial Intelligence

(1) The concept of AI. The principle of AI is to make the computer exhibit intelligence in its methods. As long as the direction is right, the computer can work effectively; the right direction is the one most likely to achieve the goal, that is, to maximize the expected effect. AI is very good at answering "yes" and "no", and it can comprehensively use abilities of observation, analysis, and communication. As the technology is upgraded, the problem-solving ability of AI keeps improving. However, machines are not human beings and have no imagination. When students use AI to assist learning, critical thinking requires imagination, which is almost impossible for machines, so AI cannot help students there; this is something students must accumulate themselves. Students' attention span and eye movement trajectories can be quantified to the level of visual understanding, but changes in thinking cannot be recorded by instruments; they are a process of subjective construction.
(2) AI prediction model. An AI system can act as a complete experimental teaching environment that generates small hypermedia courses online as needed. A student's query activates part of a semantic network representing the key concepts of the course module domain, and the system uses the active part of the network together with a simple student model to construct a small mapped hypertext that the student can explore. One of the main methods of intelligent prefetching is to rank potential web documents with a prediction model: the model is trained on the log data of previous web servers and proxy servers, and the highest-ranking objects are prefetched. For the method to work well, the prediction model must be updated constantly and must answer different queries effectively.

(3) Application of AI in teaching. AI teaching applications refer to the use of AI systems in the teaching process. They are not limited to computer-aided instruction but can also identify and judge students' emotions and learning effects throughout teaching activities. AI teaching applications can listen, read, and write like a human; through the recognition of students' speech, text, actions, and other modalities, the goal of human-computer interaction in English teaching can truly be achieved.

2.3 Higher Education

(1) The idea of higher education. Higher education is an important concept in the study of higher-education-related issues [4]. In a broad sense, higher education refers to social activities, built on secondary education, that aim at cultivating senior professionals, namely vocational education. From the perspective of institutions, higher education is mainly provided through universities, colleges of arts and technology, vocational and technical colleges, and normal universities [5]. From the perspective of levels, higher education includes vocational education, undergraduate education, and graduate education; the basic admission condition for all three is that students continue to study after completing secondary education, and a degree is awarded only after a diploma or certificate is obtained [6].

(2) Decision-making in higher education management. When higher education adopts a management decision-making system, teaching management becomes more efficient: the system not only improves management efficiency but also promotes the progress of teaching and improves students' physical exercise and learning outcomes [7]. The system can be used for training, can better cultivate the management awareness of teachers, and provides a reference for the rapid development of higher education and for educational management in other disciplines, so that higher education can deliver greater benefits [8]. In a narrow sense, private higher education refers to higher education institutions funded by enterprises, public institutions, non-governmental organizations, or citizens; China's relevant policies have also made provisions on the definition of private higher education [9].
3 Experiment

3.1 Extraction of Experimental Objects

The input-oriented model measures the inefficiency of the evaluated decision-making unit (DMU) from the perspective of inputs, measuring how much each input can be reduced while technical efficiency is achieved without reducing output; the output-oriented model measures the inefficiency of the evaluated DMU from the perspective of outputs, focusing on the extent to which each output can be increased without increasing input; and the non-oriented model is a comprehensive two-way calculation covering both inputs and outputs. This paper mainly investigates college students' views on the application of AI in higher education.

3.2 Experimental Analysis

First, the method of literature survey is used to analyze and sort the existing literature and to clarify the development and application of the mobile Internet in today's era. In analyzing the literature on the mobile Internet and higher education, we identified the characteristics of Chinese college students' mobile Internet behavior, collected reports from authoritative departments and professional literature from the industry, synthesized all the results, and identified the existing problems [10]. First of all, the development of intelligence is inseparable from basic investment in the Internet, equipment, capital, and talent, which is the basic guarantee of manufacturing intelligence; secondly, the transformation and industrialization of technological achievements are the key measure of whether a technology plays a practical role [11]. Intelligence depends on the data processing provided by hardware and software, as well as on the research and development of new technologies and patents; finally, because investment in intelligence requires a great deal of capital and equipment, the market return and market efficiency of manufacturing enterprises need to be considered [12].
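The input-oriented model described above corresponds, in the data envelopment analysis (DEA) literature, to the standard input-oriented CCR envelopment program; the sketch below solves that standard formulation with SciPy purely as an illustration. The CCR form, the toy inputs and outputs, and the solver choice are assumptions for the example rather than details taken from this paper.

```python
import numpy as np
from scipy.optimize import linprog

def ccr_input_oriented(X, Y, o):
    """Input-oriented CCR efficiency of DMU `o` (envelopment form).

    X: (n_dmu, n_inputs) input matrix, Y: (n_dmu, n_outputs) output matrix.
    Solves  min theta  s.t.  sum_j lam_j*X[j] <= theta*X[o],
                             sum_j lam_j*Y[j] >= Y[o],  lam >= 0.
    """
    n, m = X.shape
    s = Y.shape[1]
    c = np.zeros(n + 1)
    c[0] = 1.0                                     # minimise theta
    A_in = np.hstack([-X[o].reshape(m, 1), X.T])   # inputs:  sum lam*x - theta*x_o <= 0
    A_out = np.hstack([np.zeros((s, 1)), -Y.T])    # outputs: -sum lam*y <= -y_o
    res = linprog(c, A_ub=np.vstack([A_in, A_out]),
                  b_ub=np.concatenate([np.zeros(m), -Y[o]]),
                  bounds=[(0, None)] * (n + 1), method="highs")
    return res.x[0]                                # theta = 1 means technically efficient

# Hypothetical inputs (funding, staff) and outputs (graduates, papers) of 4 DMUs
X = np.array([[20.0, 300], [30.0, 200], [40.0, 100], [20.0, 200]])
Y = np.array([[1000.0, 20], [1000.0, 30], [1000.0, 40], [800.0, 25]])
for o in range(len(X)):
    print("DMU", o, "efficiency:", round(ccr_input_oriented(X, Y, o), 3))
```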
4 Discussion

4.1 Analysis of Foreign Related Research Status

From the perspective of literature analysis, 1439 papers on "AI" and "education" (covering 2018–2020) are indexed in the Web of Science database. Foreign exploration and application started early, and the research field is also very wide. The top five research directions are computer science, educational research, engineering, mathematics, and automatic control systems. The relevant Web of Science data are shown in Table 1.

Table 1. Number of foreign papers published on AI from 2018 to 2020

Particular year   Number of articles
2018              348
2019              434
2020              657

It can be seen from the above that 348 papers with the keywords AI and education were published in 2018, 434 in 2019, and 657 in 2020. The results are shown in Fig. 1.
Fig. 1. Number of foreign papers published on AI from 2018 to 2020 (bar chart; x-axis: particular year, y-axis: number of articles)
It can be seen from the above that the number of published papers is on the rise; in 2020, 657 papers were published with AI and education as keywords. This trend is consistent with the rapid growth of AI technology after 2018, and it encourages more and more scholars to explore the integration of artificial intelligence technology and education.

4.2 The Internet Has Innovated the Teaching Methods of Higher Education

The rapid development of mobile Internet technology has had a broad and far-reaching impact on higher education. The traditional way of higher education has limitations, and the intervention of artificial intelligence has enriched and expanded the teaching methods of higher education to an unprecedented degree. Higher education under artificial intelligence is not separated from traditional educational activities, but is combined with them more closely in teaching and environment. The views of college students on the application of artificial intelligence in higher education are shown in Table 2.

Table 2. The proportion of college students' views on the combination of artificial intelligence and higher education (percentage)

Type                    Percentage
Abundant resources      27%
Efficient learning      21%
Easy to learn           15%
Fresh and interesting   24%
Unacceptable            13%

It can be seen from Table 2 that 27% of college students regard the application of artificial intelligence in higher education as providing abundant resources, 21% regard it as making learning efficient, 15% regard it as making learning convenient, 24% find it fresh and interesting, and 13% find it unacceptable. The results are shown in Fig. 2.
Fig. 2. The proportion of college students’ views on the combination of artificial intelligence and higher education (percentage)
To sum up, the mobile Internet provides social media for college students, so that they can communicate anytime and anywhere in the mobile context and can freely obtain learning resources. Most students think that artificial intelligence is helpful to their study.
5 Conclusion

With the change of science and technology, revolutionary changes have taken place in the ways and methods of education. The wide application of network technology and the Internet in the field of education, as well as the opportunity to combine face-to-face teaching with the network teaching environment, have led to a paradigm shift in how teaching is delivered to learners. An important implication of this change is that we need to rededicate ourselves to creating an ideal learning environment for students and to adopt new teaching methods and technologies when appropriate. This paper reviews the integration of artificial intelligence and higher education. The results show that most students think that artificial intelligence is helpful to their learning.
References

1. Crompton, H., Song, D.: The potential of artificial intelligence in higher education. Rev. Virtual Univ. Católica Norte (62), 1–4 (2021)
2. Dennis, M.J.: Artificial intelligence and higher education. Enroll. Manage. Rep. 22(8), 1–3 (2018)
3. Woods, R., Doherty, O., Stephens, S.: Technology driven change in the retail sector: the implications for higher education. Ind. High. Educ. 36(2), 1–15 (2021)
4. Black, J., Fullerton, C.: Digital deceit: fake news, artificial intelligence, and censorship in educational research. Open J. Soc. Sci. 8(7), 71–88 (2020)
5. Murugesan, L.J.: Investigation on enabling intelligence through deep learning and computer vision-based internet of things (IoT) systems in a classroom environment. Biosci. Biotechnol. Res. Commun. 13(6), 80–91 (2020)
6. Kadhim, M.K., Hassan, A.K.: Towards intelligent E-learning systems: a hybrid model for predicating the learning continuity in Iraqi higher education. Webology 17(2), 172–188 (2020)
7. Ulloa-Cazarez, R.L., Aoun, J.E.: Robot-proof: higher education at the age of artificial intelligence. Genet. Program. Evol. Mach. 21(1), 265–267 (2020)
8. Cox, A.M.: Exploring the impact of artificial intelligence and robots on higher education through literature-based design fictions. Int. J. Educ. Technol. High. Educ. 18(1), 1–19 (2021). https://doi.org/10.1186/s41239-020-00237-8
9. Pence, H.E.: Artificial intelligence in higher education: new wine in old wineskins? J. Educ. Technol. Syst. 48(1), 5–13 (2019)
10. Tushar, M.R., Ladda, R.M., et al.: Artificial intelligence, its impact on higher education. SSRN Electron. J. 6(4), 513–517 (2019)
11. Limani, Y., Hajrizi, E., Stapleton, L., et al.: Digital transformation readiness in higher education institutions (HEI): the case of Kosovo. IFAC-PapersOnLine 52(25), 52–57 (2019)
12. Zhan, S.: The reconstruction strategy of "Internet+" from the perspective of education. (6), 60–62 (2017)
Diagnostic Study on Intelligent Learning in Network Teaching Based on Big Data Background Xiaoguang Chen(B) and Fengxia Zhang Applied Technology College of Dalian Ocean University, Dalian, Liaoning, China
Abstract. Under the background of big data, intelligent diagnosis is an important part of artificial intelligence. The core task of intelligent learning diagnosis is to estimate learners' knowledge states accurately and efficiently, so that the network teaching system can identify learners' learning obstacles and provide further learning content accordingly. This paper gives a diagnostic model and the related design and implementation method for intelligent network teaching under the background of big data, which can further improve the intelligence of the network teaching platform. Introducing big data, cloud computing, machine learning algorithms and intelligent evaluation into intelligent learning diagnosis will not only broaden the research field of intelligent learning diagnosis but also provide a new method for improving network learning performance. Keywords: Big data · Network teaching platform · Intelligent learning diagnosis · Knowledge space theory
1 Introduction

Under the background of big data and the rapid growth of mobile device users, web-based network education has become a new form of education that is not restricted by time or space. Learners can learn through the network at any time and in any place, realizing genuinely open, lifelong education. As a form of education different from traditional education, it has received wide attention and study from experts and scholars at home and abroad in recent years. Abroad, well-known online teaching platforms include Academic Earth, Coursera, Creative Live, Knewton and so on [1]. In China, many colleges and universities have opened network teaching platforms and carried out course teaching reform research in the network teaching environment; for example, Peking University, Tsinghua University, Beijing Normal University and Central China Normal University have all opened such platforms. In network teaching, learners mainly explore and discover new knowledge, and compete and cooperate with each other through the network; both teachers and students interact and communicate through
the network. Traditional face-to-face communication between teachers and students no longer dominates; instead, the learning environment is supported by network technology. When learners encounter learning difficulties, artificial intelligence, big data analysis and big data visualization can help provide an intelligent diagnosis, and research on providing better-optimized learning diagnosis services for learners is also gradually being carried out. Intelligent learning diagnosis is based on the structural analysis of subject knowledge, and its diagnostic conclusions are used to update the learner model. Therefore, intelligent learning diagnosis has become an important research topic.
2 Summary of Big Data

Big data is the synthesis of data and big data technology, characterized by a large amount of data, various types of data, fast processing speed and low value density. Big data technology mainly includes data collection, data storage and management, data processing and analysis, data security and privacy protection, and other aspects. Applications in many fields need new technologies and tools to store, manage and analyze data. Meanwhile, new tools, processes and methods support a new technical architecture. The new architecture of big data technology mainly includes the base layer, management layer, analysis layer and application layer. The base layer mainly refers to virtualization and networking, that is, a distributed, horizontally extensible architecture [2]; the management layer is mainly about parallel processing of structured and unstructured data and linear scalability; the analysis layer is mainly about self-service, iterative, flexible, real-time collaboration; and the application layer is mainly about real-time decision making, built-in prediction ability, data driving, data monetization, etc. Big data technology can be divided into two aspects: overall technology and key technology. The overall technology mainly includes data acquisition and storage, data architecture, data processing and analysis, data mining and result presentation, etc. Key technologies include data acquisition and processing technology, storage and management technology, data analysis and mining technology, data presentation and application technology, etc. Spark, HPCC, Storm, Apache Drill and other tools are generally adopted for big data analysis. Big data analysis generally includes predictive analysis, visualization analysis, mining algorithms, semantic engines, data quality and data management, etc. The main technologies of big data analysis generally include deep learning, knowledge computing and visualization. The goal of big data processing is to mine knowledge from massive heterogeneous data, covering data source collection, data storage and management, data analysis, and data display and acquisition [3]. Under the background of big data, Massive Open Online Courses (MOOC) are gradually rising at home and abroad. However, there is little research on intelligent learning diagnosis in online teaching at home and abroad, so it is necessary to study intelligent learning diagnosis in network teaching under the background of big data.
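As one illustration of the kind of processing described above, the following is a minimal PySpark sketch that aggregates learner behavior logs into per-learner features. The file path, column names and thresholds are hypothetical stand-ins for whatever learning-process data a network teaching platform actually records, not an implementation of the authors' system.

```python
from pyspark.sql import SparkSession
import pyspark.sql.functions as F

# Start a Spark session for the sketch.
spark = SparkSession.builder.appName("learning-log-aggregation").getOrCreate()

# Hypothetical learner event log with columns: learner_id, knowledge_point,
# correct (0/1), duration_sec, timestamp.
events = spark.read.json("hdfs:///edu/logs/learning_events.json")

# Aggregate per learner and knowledge point: attempt count, accuracy, time spent.
features = (events
            .groupBy("learner_id", "knowledge_point")
            .agg(F.count("*").alias("attempts"),
                 F.avg("correct").alias("accuracy"),
                 F.sum("duration_sec").alias("total_time_sec")))

# Learners whose accuracy on a knowledge point stays low after repeated attempts
# are flagged as candidates for learning diagnosis.
candidates = features.filter((F.col("attempts") >= 5) & (F.col("accuracy") < 0.5))
candidates.show(truncate=False)
```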
3 Intelligent Learning Diagnosis

Learning diagnosis is called cognitive diagnosis abroad. It relies on the learner's response pattern to test questions and converts a single test score into the probability of mastering the knowledge and skills involved in the test [4]. Learning diagnosis is a figurative term applied in the field of education: in the process of learning, learners encounter problems such as cognitive conflicts and knowledge defects, which often appear as obstacles during problem solving. Learning diagnosis can correct and make up for defects and errors in study [5]; through diagnosis, teachers can understand the characteristics, strengths, deviations and defects of the teaching object, and both teachers and students can clearly see the direction of their efforts [6]. The author believes that learning diagnosis is the use of scientific and effective methods to find out learners' learning obstacles and to implement diagnosis accurately, so as to achieve the teaching objectives. Learning diagnosis generally includes learning ability diagnosis, learning attitude diagnosis, learning method diagnosis and academic achievement diagnosis. Intelligent learning diagnosis refers to the use of scientific and effective methods, with the help of artificial intelligence, big data technology and knowledge space theory, to find out learners' learning barriers and implement diagnosis accurately, so as to achieve the teaching objectives.
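The idea of converting a response pattern into a mastery probability can be illustrated with a deliberately simplified Bayesian update over a single knowledge point. The prior, slip and guess values below are hypothetical placeholders, not parameters reported in this paper, and the sketch is only one of many possible cognitive-diagnosis formulations.

```python
def update_mastery(prior, responses, slip=0.1, guess=0.2):
    """Posterior probability that a learner masters one knowledge point,
    given a sequence of 1/0 (correct/incorrect) item responses."""
    p = prior
    for correct in responses:
        if correct:
            # P(correct | mastered) = 1 - slip, P(correct | not mastered) = guess
            num = p * (1 - slip)
            den = p * (1 - slip) + (1 - p) * guess
        else:
            num = p * slip
            den = p * slip + (1 - p) * (1 - guess)
        p = num / den
    return p

# Example: two learners answer five items on the same knowledge point.
print(update_mastery(prior=0.5, responses=[1, 0, 1, 1, 1]))  # high posterior
print(update_mastery(prior=0.5, responses=[0, 0, 1, 0, 0]))  # low posterior
```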
4 Intelligent Learning Diagnosis Model in Network Teaching Under the Background of Big Data

Under the background of big data, an intelligent learning diagnosis model for network teaching can be built. By tracking and collecting learners' learning process data, it analyzes the learning problems of learner groups or individuals and their mastery of knowledge points in the network environment, supplemented by case analysis and effect detection [7]. The intelligent learning diagnosis model designed according to the learning diagnosis strategy is shown in Fig. 1. This diagnostic model mainly includes educational big data resources and a Hadoop data processing platform, an adaptive publishing engine, a test bank and a target knowledge structure map, a diagnostic agent, a discipline knowledge bank and a learning archive, a test agent and a student model library, and a cloud service layer with its client, etc.
5 Design and Implementation of the Intelligent Learning Diagnosis System Under the Background of Big Data

5.1 Intelligent Learning Diagnosis System Architecture

The architecture of the intelligent learning diagnosis system is divided into four layers: the data support layer, public object layer, public service layer and application layer.
5.1.1 Data Support Layer
This layer mainly includes the multi-library coordinator, the database management system (DBMS), and the underlying supporting network environment, which form the basic environment in which the whole system runs.

5.1.2 Public Object Layer
This is the most important layer. The public object layer contains service objects that perform the common functions required by most applications in the service domain; there can be multiple served objects, such as A, B, C, and so on.

5.1.3 Public Service Layer
This layer sits on top of the public object layer. It provides basic functions for specific business-domain applications and is represented in the form of business processes; a public service process defines the common behavior or functionality of the business domain.

5.1.4 Application Layer
This layer is the highest level of the whole system, mainly including the interface display object, online help object and intelligent query diagnosis object. The architecture is a typical hierarchical architecture that allows designers to decompose and abstract a complex problem at different levels [8]. The different levels of the system reflect different levels of abstraction (note: ADO/ODBC: ActiveX Data Objects / Open Database Connectivity); the overall structure is shown in Fig. 1.
Fig. 1. Intelligent learning diagnosis system
5.2 Intelligent Learning Diagnostic Agent Structure

The structure of the intelligent learning diagnosis agent mainly includes three parts: the knowledge learning machine, the fault diagnosis machine and the diagnostic knowledge base, as shown in Fig. 2. The knowledge learning machine is the main part of the intelligent learning diagnosis agent; it realizes the interaction between the fault diagnosis agent and the learning environment. The learning process consists of two parts, early learning and later learning [9]. Learning in the preliminary establishment stage of the diagnostic knowledge base can be classified as early learning; learning triggered when the fault diagnosis machine cannot produce a diagnosis, or when the diagnosis results differ greatly from the actual results, can be classified as later learning. In later learning, the knowledge perception function can quickly acquire new information, and the new knowledge is saved in the diagnostic knowledge base after acquisition. The fault diagnosis machine embodies the interaction between the agent and the diagnosis environment. The diagnosis information from the environment is captured by the perceptron. After receiving the status information of the diagnosed object, the perceptron quickly transmits the information to the fault diagnosis engine, which carries out diagnostic reasoning and decision making based on the diagnosis knowledge in the knowledge base. Finally, the diagnostic results are applied to the diagnosis environment by the output machine, and a communication mechanism transmits the results to the learning machine.
Fig. 2. Intelligent learning diagnostic agent structure
The diagnostic knowledge base is an indispensable part of the intelligent diagnostic agent. It is the knowledge storage center of the agent: the richness of the knowledge base represents the agent's diagnostic intelligence, and it supports a variety of forms of knowledge representation and storage.
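To make the division of labor among perceptron, diagnosis engine, output machine and learning machine easier to follow, the following is a minimal sketch of such an agent. The rule format, symptom labels and class names are illustrative assumptions, not the structure actually implemented by the authors.

```python
from dataclasses import dataclass, field
from typing import Optional

@dataclass
class DiagnosticAgent:
    # Diagnostic knowledge base: maps an observed symptom to a diagnosed obstacle.
    knowledge_base: dict = field(default_factory=lambda: {
        "low_accuracy": "prerequisite knowledge point not mastered",
        "long_idle_time": "possible loss of attention or motivation",
    })

    def perceive(self, raw_state: dict) -> str:
        """Perceptron: turn the raw learner state into a symptom label."""
        if raw_state.get("accuracy", 1.0) < 0.5:
            return "low_accuracy"
        if raw_state.get("idle_seconds", 0) > 300:
            return "long_idle_time"
        return "unknown_symptom"

    def diagnose(self, symptom: str) -> Optional[str]:
        """Fault diagnosis engine: reason over the diagnostic knowledge base."""
        return self.knowledge_base.get(symptom)

    def learn(self, symptom: str, confirmed_obstacle: str) -> None:
        """Knowledge learning machine: store newly confirmed knowledge (later learning)."""
        self.knowledge_base[symptom] = confirmed_obstacle

    def run(self, raw_state: dict) -> str:
        symptom = self.perceive(raw_state)
        result = self.diagnose(symptom)
        if result is None:
            # Output machine reports the gap; the learning machine is expected to
            # add a rule once the real obstacle is confirmed (later learning).
            result = f"no rule for '{symptom}': forwarded to learning machine"
        return result

agent = DiagnosticAgent()
print(agent.run({"accuracy": 0.3}))        # matched by the knowledge base
print(agent.run({"idle_seconds": 20}))     # unknown symptom, triggers later learning
agent.learn("unknown_symptom", "needs a placement test")  # knowledge base grows
```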
5.3 Design of the Intelligent Learning Diagnosis System in Network Teaching

To handle the diagnosis of a complex process system, an intelligent diagnosis strategy is adopted: the complex system is divided into multi-level subsystems according to the principle of system classification, and the most suitable diagnosis methods are then applied at each level to determine the root cause of the learning obstacle step by step. In this way, the complex diagnosis problem is reduced to several simpler intelligent diagnosis problems [10]. In this system, learners are at the center, because intelligent diagnosis serves learners, and learners can interact with each other through the interface. The prototype diagram of the whole system is shown in Fig. 3.
Fig. 3. The prototype diagram of intelligent learning diagnosis system
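The hierarchical strategy described in Sect. 5.3 can be sketched as a recursive search over a tree of subsystems, each with its own level-specific check. The subsystem names, thresholds and check functions below are hypothetical and only illustrate the step-by-step narrowing toward a root cause.

```python
def diagnose(node, learner_state):
    """Recursively locate the deepest subsystem whose level-specific check fails.

    node = {"name": str, "check": callable, "children": [node, ...]}
    The check returns True when the subsystem looks healthy for this learner.
    """
    if node["check"](learner_state):
        return None                          # this subsystem is not the problem
    for child in node.get("children", []):
        cause = diagnose(child, learner_state)
        if cause is not None:
            return cause                     # a deeper, more specific cause found
    return node["name"]                      # failure localized at this level

# Hypothetical two-level decomposition of "algebra performance".
system = {
    "name": "algebra unit",
    "check": lambda s: s["algebra_score"] >= 0.6,
    "children": [
        {"name": "linear equations", "check": lambda s: s["linear_eq"] >= 0.6, "children": []},
        {"name": "factoring",        "check": lambda s: s["factoring"] >= 0.6, "children": []},
    ],
}

state = {"algebra_score": 0.4, "linear_eq": 0.7, "factoring": 0.3}
print(diagnose(system, state))  # -> "factoring"
```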
6 Conclusion

At present, most teaching in the network teaching environment pays attention only to testing and evaluation functions, which provide learners with a score level and simple evaluation results rather than diagnostic results about their learning; the feedback given is limited, so it is difficult for learners to improve their learning performance. Under the background of big data, it is necessary to realize the intelligent learning diagnosis function in network teaching by adopting the intelligent learning diagnosis model and the corresponding diagnosis system design built with big data technology, artificial intelligence technology
and cloud computing. The intelligent learning diagnosis system design proposed in this study aims to make the testing process intelligent and to diagnose learning obstacles intelligently. This study can provide some reference value for researchers of intelligent learning and diagnosis in network teaching under the background of big data.
References

1. Zhang, J.: Big Data Daily Record Architecture and Algorithm. Electronic Industrial Press, Beijing (2014). (in Chinese)
2. Dietrich, A.: Knowledge Structures. Springer, Berlin (1995)
3. Long, H., Yang, H.: Research on data analysis and visualization in the background of big data. J. Kaili Univ. 34(3), 98–102 (2016). (in Chinese)
4. Zhang, G., Zheng, W.: Big data streaming computing: key technologies and systems instance. J. Softw. 25(4), 839–862 (2014). (in Chinese)
5. Huang, S., Ge, M.: The application research of Hadoop platform in big data processing. Gener. Comput. (Prof. Edn.) (29) (2013). (in Chinese)
6. Siemens, G.: 1st International Conference on Learning Analytics and Knowledge (2011). https://tekri.Athabaseau.Ca/analytics/about. Accessed 22 Jan 2015
7. Yan, J.H.: Cognitive styles affect choice response time and accuracy. Pers. Individ. Differ. 48(6), 747–751 (2010)
8. Kagan, J., Rosman, B.L., Day, D., et al.: Information processing in the child: significance of analytic and reflective attitudes. Psychol. Monogr.: Gener. Appl. 78(1), 1–37 (1964)
9. Sulisawati, D.N., Lutfiyah, L., Murtinasari, F.: Difference of mistakes reflective-impulsive students in mathematical problem solving. Int. J. Trends Math. Educ. Res. 2(2), 101–105 (2019)
10. Sheppard, L.D., Vernon, P.A.: Intelligence and speed of information-processing: a review of 50 years of research. Pers. Individ. Differ. 44(3), 535–551 (2008)
Design of Online OBE Theoretical Knowledge Sharing Based on the Support of Intelligent System Analysis Method Jinsheng Zhang(B) Intelligent Science and Engineering, Yunnan Technology and Business University, Kunming 650000, Yunnan, China
Abstract. The emergence of every new technology brings a corresponding technological revolution. In the 21st century, the technological revolution led by "big data" has set off a wave of enthusiasm. The application of big data technology promotes technological innovation in every industry, and college education is no exception. In this context, relying on the theoretical framework of the OBE outcome-oriented education model, we should actively explore the new university teaching management model that society needs. With the support of the system analysis method, we reach the following conclusions: the information system construction of our country's current university big data teaching management platforms is relatively backward; only 20% of universities have established scientific research knowledge sharing platforms, and 26% have established scientific research project exchange platforms. Keywords: Big data · College education · OBE theory · Teaching management
1 Introduction

Since the turn of the century, developing higher education has become one of our country's major policies, and the trend of enrollment expansion is obvious; our country is moving step by step toward becoming a major power in higher education, and the proportion of college students among citizens keeps growing. The data generated in various fields are gradually increasing, the types of data are multiplying, and the scale of data that people need to deal with is also increasing; the education industry is no exception. The varied, multi-channel data generated in the education field have become an important basis for the comprehensive evaluation of education quality, and the storage and analysis of big data have become an indispensable means for university teaching management in our country to develop toward informatization. Gray believes that the educational big data research method does not negate other educational scientific research methods but centrally processes the data collected by other methods, drawing on the educational survey method, educational observation method, educational experimental method, and educational ethnography. Qualitative research
methods collect a great deal of information; educational big data research methods are more about converting these data into a form that can be stored and processed, so that a more scientific analysis can be conducted and predictions can be made about the development of the research problem [1]. Chandy believes that colleges promoting the OBE education philosophy shift from "teacher-centered" to "student-centered" thinking, a change that places higher requirements on existing managers. Under the quality education advocated today, the reform of education and teaching in colleges has become an inevitable trend, and the cultivation of comprehensive quality receives more and more attention. The rapid development of Internet technology has also introduced a new perspective to teaching management: under the guidance of network communication theory, rules for student education management and related feasibility studies are gradually improving [2]. Wang pointed out that, as a backbone of our country's social development, the emergence of big data technology is of great significance to the development of education management in our country's universities. However, the educational management of colleges also faces practical obstacles, such as the need to strengthen managers' data processing literacy and to build a data management system, an imperfect basic data platform, an imperfect data exchange mechanism, and an immature data security guarantee during the transition period [3]. Although there are many references on "big data", "OBE theory" and "teaching management", these experts and scholars have not combined the currently popular big data theory well with OBE theory. This article uses the advantages of big data technology, such as multi-source, diversity, massiveness, openness, immediacy, interactivity, and individualization, to build a large-scale data platform for colleges and universities, to innovate college education management methods, to improve the utilization of educational resources, and to promote the development of educational management ideas in colleges and universities.
2 Method

2.1 The Concept of Big Data

The definition of big data can be divided into two types, broad and narrow. The broad definition of big data refers to a more comprehensive concept, which includes not only a large volume of data but also diversity and high velocity [4, 5]. Managing big data in the broad sense requires high-end professional technology to store, process and analyze it, as well as a professional team of talents to analyze it, in order for big data to yield practical value. Big data in the narrow sense refers to data that exceeds what the traditional hardware environment and software tools can handle, yet can still be collected, processed and managed by the user within a specific and acceptable time. To grasp the connotation of big data more deeply and comprehensively, it is necessary to conduct a more systematic analysis from three aspects. The first aspect is big data thinking. The big data environment must differ from the traditional thinking mode of the small data era: it is no longer purely sampling analysis, causal connection, and the pursuit of precision; rather, the traditional thinking mode is changed and innovated on
the basis of the big data environment, making the thinking mode more complex and variable. When analyzing a thing, one no longer extracts a sample of data but analyzes all the data related to the thing; one no longer pursues the precision of the process but recognizes the complexity of the entire process; one no longer blindly pursues causal relationships that may be difficult to find but pays more attention to the universal correlations between things. The second aspect is big data technology. Big data must have efficient query technology, which can quickly find the data that users need from a large amount of data; it must have accurate analysis technology, so that the data users find can be systematically analyzed to form information they can use; and it must have excellent modeling technology, which can improve the utilization rate of data to a certain extent. The third aspect is the use of big data, which requires both individual optimization and overall optimization [6].

2.2 The Nature of University Teaching Management

The fundamental purpose of educational management in colleges is to enable students to discover, create and experience happiness through education and cultivation, nurture and guidance, training and empowerment, and to make students more outstanding. It can be said that excellence and happiness constitute the core values of university education management [7]. Colleges are a special social organization that trains high-quality, well-rounded talents. The key business of colleges has three items: teaching, scientific research and management. The realization of teaching and scientific research goals is inseparable from scientific management, and without scientific management the realization of educational goals is even further out of reach. The ultimate good of educational management in colleges is to improve the state and combination of the various elements of running a school and to jointly serve the cultivation of talents with rich spirit, noble morality, independent thinking and goodwill who are useful to society. College education management is coordination: it uses important methods and means to handle the relationships among the various elements of a college and between internal and external elements, rationally allocates limited resources, makes them more compatible with the environment, and achieves the school's goals. The educational management level of colleges is one of the important indicators for measuring the degree of educational modernization.

2.3 OBE Concept

Curriculum and teaching guided by this educational concept start from demand: they determine what students should learn, anticipate learning outcomes, position teaching goals, restructure curriculum content, and design and implement teaching processes according to the abilities that employers demand of talent [8, 9]. Instructional design centers on the student body, focusing on what students learn and what they have learned rather than on the teacher's teaching, and emphasizes the cultivation of the knowledge, abilities and qualities that students really need after graduation. "Student-centered" and "ability-based" are the guiding principles of instructional design based on the OBE concept [10, 11]. OBE's educational philosophy believes that all students are able to succeed, and that students should be provided with as many opportunities as possible
for their learning. What students learn is far more important than how and when they learn it.

2.4 Principal Component Analysis

Principal component analysis (PCA) is a linear dimensionality reduction method realized through unsupervised learning. During dimensionality reduction, a large number of original high-dimensional, partially correlated data features are remapped and combined into a new, lower-dimensional set of mutually uncorrelated features [6]. In PCA, through experience and practice, the parameter k is generally set so that η is greater than 90%, where η is calculated according to the following formula, so that the low-dimensional space retains most of the main feature information of the original high-dimensional space:

$$\eta = \frac{\sum_{i=1}^{k} \lambda_i}{\sum_{j=1}^{\min(m,n)} \lambda_j} \tag{1}$$

where $\lambda_i$ denotes the $i$-th largest eigenvalue of the data covariance matrix, and $m$ and $n$ are the numbers of samples and original features.
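As a small numerical illustration of formula (1), the following sketch chooses the smallest k whose cumulative eigenvalue ratio reaches 90%; the random data are purely hypothetical.

```python
import numpy as np

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 10))          # hypothetical 200 samples, 10 features
Xc = X - X.mean(axis=0)                 # center the data

# Eigenvalues of the covariance matrix, sorted in descending order.
eigvals = np.linalg.eigvalsh(np.cov(Xc, rowvar=False))[::-1]

ratios = np.cumsum(eigvals) / eigvals.sum()     # eta as a function of k
k = int(np.argmax(ratios >= 0.90)) + 1          # smallest k with eta >= 90%
print(k, ratios[k - 1])
```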
2.5 Linear Discriminant Analysis

Unlike PCA, linear discriminant analysis (LDA) is a supervised dimensionality reduction algorithm. The main idea is to divide the training data into different groups or classes, and then find the projection direction that maximizes the inter-class distance while minimizing the intra-class distance:

$$S_W = \sum_{i=1}^{k} \sum_{x_k \in \text{class}_i} (u_i - x_k)(u_i - x_k)^{T} \tag{2}$$

$$S_B = \sum_{i=1}^{k} n_i (u_i - u)(u_i - u)^{T} \tag{3}$$
where $k$ is the number of classes, $u_i$ is the mean of the samples in the $i$-th class, $n_i$ is the number of samples in the $i$-th class, and $u$ is the overall mean of all samples.
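The two scatter matrices in formulas (2) and (3) can be computed directly, as in the following sketch; the toy labels and features are hypothetical, and the projection direction is obtained from the standard generalized eigenproblem on $S_W^{-1} S_B$.

```python
import numpy as np

rng = np.random.default_rng(1)
# Hypothetical 2-class, 3-feature data.
X = np.vstack([rng.normal(0.0, 1.0, size=(50, 3)),
               rng.normal(2.0, 1.0, size=(50, 3))])
y = np.array([0] * 50 + [1] * 50)

u = X.mean(axis=0)                                   # overall mean
Sw = np.zeros((3, 3))                                # within-class scatter, Eq. (2)
Sb = np.zeros((3, 3))                                # between-class scatter, Eq. (3)
for c in np.unique(y):
    Xc = X[y == c]
    uc = Xc.mean(axis=0)
    diff = Xc - uc
    Sw += diff.T @ diff
    Sb += len(Xc) * np.outer(uc - u, uc - u)

# Projection direction: eigenvector of pinv(Sw) @ Sb with the largest eigenvalue.
eigvals, eigvecs = np.linalg.eig(np.linalg.pinv(Sw) @ Sb)
w = eigvecs[:, np.argmax(eigvals.real)].real
print(w)                                             # 1-D discriminant direction
```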
3 Experiment

3.1 Combination of Conceptual Analysis and Literature Analysis

As a new technology, big data has always attracted attention, and many new phenomena and concepts have arisen around it. It is therefore necessary to focus on conceptual research and to clarify the differences and connections between closely related concepts. On the basis of conceptual analysis, literature research is emphasized, comprehensively drawing on the theoretical knowledge of computer science, pedagogy, management and sociology in order to discuss and study big data across multiple disciplines, fields, and perspectives.
3.2 System Analysis Method

The system analysis method studies the problem to be solved as a system: through a comprehensive analysis of the various components of the system, starting from the whole and focusing on the parts, it finally reveals the various possible solutions to the problem. There are still few studies that examine the emerging "big data research method" in this way. We use systematic analysis to diagnose problems accurately, reveal the causes of problems in depth, and effectively explore "educational big data research methods".
4 Discussion

4.1 The Important Role of Big Data Technology in the Modernization of Education Management

The application of big data educational research methods in student management is based on the analysis of student data and on education administrators' management of that data, as illustrated in Fig. 1.

Fig. 1. Educational big data management process (asking questions → problem analysis → student analysis → data mining → student management)
Regarding how education managers understand students, some teachers always use inherent or biased viewpoints to define them. Every student has his or her own personality; if researchers want to understand students, they must speak with data. Big data research methods can be used to collect students' family backgrounds, parents' educational methods, students' hobbies, and students' global vision, life prospects and values. Most of this information can be obtained from students' social platforms. Of course, big data methods require us to mine data as a whole, and researchers must mine student data from various platforms. Through extensive data analysis, we can understand students comprehensively and even predict how students will respond when facing particular situations or problems. With this information, the management of students becomes much easier.

4.2 The Guiding Value of OBE for the Educational Curriculum System of Colleges

Because the main concern of college education is whether students can master the abilities or qualities of their professional field through learning, colleges attach great importance to the
final results of educational activities. This fits the OBE education philosophy, which focuses on the abilities of the educated and on education based on ability training. Therefore, introducing the OBE education concept into college education has a very positive effect on improving all aspects of college education, as shown in Fig. 2.

Fig. 2. OBE theory display diagram (output-oriented, student-centered, continuous improvement)
4.3 Big Data and Higher Education Management Play a Mutually Restrictive Role

In this highly informatized society, the process of educational informatization around the world is steadily advancing. As the most advanced stage of the education system, higher education is a bridge for information-based teaching. Currently, all universities in China are connected to the Internet (Fig. 3).
Fig. 3. Proportion of various information systems
So far, the development coverage rate of various information systems in various universities has reached 98%, with information management systems accounting for
95%, and information scientific research systems accounting for 86%. These information systems, especially scientific research information systems, produce a great deal of data. Data storage and maintenance involve costs, so universities must discover the value of these data in order to justify data collection and recording, balance the cost of data storage and maintenance, and achieve their own development.

4.4 Problems in the Development of Big Data Education Management in Our Country's Colleges

Judging from the current status of teaching management, most existing business application systems exist independently, it is difficult to realize data sharing and exchange between systems, and massive data cannot be scientifically managed and effectively integrated. The reason is that colleges and universities lack overall planning: when each education management department builds its own information management system, the software systems and data standards used are not unified, forming information islands (Table 1).

Table 1. Data sharing situation of major domestic big data centers (partial)

Serial number   Name of big data park               Data sharing rate
1               Beijing Yizhuang big data base      19%
2               Shanghai big data New Area          23%
3               Hubei Yichang big data Park         19%
4               Chongqing Xiantao data Valley       19%
5               Heilongjiang Harbin data Valley     20%
6               Xinjiang Urumqi big data Park       20%
7               Fengxi new town, Shaanxi Province   21%
8               Hebei Qinghuangdao data Valley      21%
In this regard, the most important issue of educational informatization for scientific research is the exchange of scientific research data. According to the "Research Report on the Development of Higher Education Informatization" issued by the Ministry of Education, the construction of scientific research information systems in our country's universities still lags behind that of developed countries. Only 20% of universities have established scientific research knowledge sharing platforms, and only 26% have established scientific research project exchange platforms; only 42% of colleges have established open services for sharing institutions and equipment. Using these data for analysis and for developing network information management systems is also where the value of the data lies. Although many results have been achieved in the construction of big data resources, there are still shortcomings that we must gradually explore and address.
5 Conclusion

In the process of improving and innovating the evaluation mechanism of teaching management in colleges and universities, relevant teachers should first change traditional teaching management concepts and fully respect the individualized development demands and dominant status of college students. In the big data environment, the teaching management of colleges and universities should be transformed and upgraded to a whole new level, and high-end talents should be cultivated in accordance with social demands; only in this way can the educational advantages be maximized. Secondly, the evaluation mechanism of teaching management should be improved in time to enhance the comprehensive quality of teaching management personnel. Carrying forward people-oriented thinking, the traditional, crude and one-dimensional management evaluation standards should be continuously rationalized and refined so that they can play a normative role in the teaching management of current university student groups. University administrators should focus on basic theory and continue to learn advanced OBE theory on that basis, so as to apply it proficiently in teaching management; only in this way can managers maximize the value of OBE theory in their specific work. In summary, the response to the impact of big data technology on the construction and development of teaching management in our country's higher education institutions is still in a process of continuous practice and exploration. This process requires the joint efforts of educators to target the deficiencies of previous internal teaching management in colleges and universities and, combined with OBE theory, to explore better solutions.
References

1. Gray, J.W.: Bi-polar: college education and loans to small businesses headed by black females. Rev. Black Polit. Econ. 39(3), 361–371 (2018)
2. Rekh, S., Chandy, A.: Implementation of academia 4.0 for engineering college education. Proc. Comput. Sci. 172, 673–678 (2020)
3. Wang, L.: The summary of the customized training model of college education under the background of the pilot transformation and development of undergraduate universities. US-China Educ. Rev. 10(1), 39–43 (2020)
4. Balon, R.: Why shorten undergraduate medical education over college education? Acad. Med. 96(2), 165 (2021)
5. Gu, D.Y.: Practice teaching reformation of "four segments integration" of marketing based on OBE theory. Design Eng. (1), 20–37 (2020)
6. Sun, X.: The construction of English teaching mode in senior high school based on OBE theory. Overseas English 369(05), 232–233 (2018)
7. Zhang, H.: Research on practical English pedagogy in application-oriented universities based on OBE theory. Educ. Teach. Forum (019), 187–189 (2019)
8. Zhang, L.: "6S" management in the application of higher vocational practice teaching management. Electron. Qual. (003), 66–67, 71 (2017)
9. Gao, W.: A study of the design of optimizing teaching management system in higher vocational colleges based on TRIZ theory. J. Heihe Univ. 010(002), 12–13 (2019)
10. He, H.: Research on college students' participation in teaching management mechanism from the perspective of "student-centered": take Huainan Normal University as an example. 334(04), 178–179+202 (2018)
11. Liang, Z.: Research on cultivating excellent teachers of normal English major under the OBE theory: taking Huaiyin Normal University as an example. J. Heihe Univ. 009(006), 120–121 (2018)
Intelligent Learning Ecosystem of Information Technology Courses Oriented Skills Training Beibei Cao(B) Shanghai Publishing and Printing College, Shanghai, China [email protected]
Abstract. This article applies ecological thinking and methodology to examine the intelligent learning ecosystem of information technology classrooms. Drawing on the principles and methods of ecology, it interprets the current imbalance of information technology courses from four dimensions: goals, roles and relationships, activities and processes, and evaluation. On this basis, four aspects (target positioning, relationship reconstruction, process reconstruction, and evaluation reconstruction) are established as the basis for constructing the smart classroom ecosystem, and it is proposed that the subjects of teaching and learning, the environment, and resources interconnect to form the smart classroom ecosystem. The article is expected to provide theoretical guidance and a practical basis for the reform and innovation of smart classroom teaching. Keywords: Skill development · Information technology · Curriculum · Smart learning · Ecosystem
1 Introduction

With the gradual transformation of human knowledge from a one-sided mechanical worldview to a systematic ecological worldview, the concepts, principles and methods of ecology have crossed disciplinary boundaries and become "a scientific method of thinking, worldview, and methodology", which in turn gave birth to applied interdisciplinary subjects such as agricultural ecology and social ecology. Since the mid-to-late 1980s, educational ecology, as an interdisciplinary subject of pedagogy and ecology, has gradually attracted the attention of Chinese scholars [1]. A review of related research found that educational ecology was originally defined as "the science of studying the relationship between education and the overall ecological environment (social, spiritual, natural)". This definition focuses on the relationship between education and other fields but pays little attention to the internal ecological construction of education; as a result, research in educational ecology has concentrated on the development of educational ecosystems at the macro level, with less research going deep into the micro level of the school education system. Based on this, Chinese scholars revised the concept and defined educational ecology as "the science of using the principles and methods of ecology to study educational phenomena". From this point
on, the study of classroom ecosystem construction from the perspective of educational ecology has become a hot area of theoretical research and practical exploration. With the continuous evolution and development of education informatization, information technology has become an important driving force for the reform and innovation of classroom teaching [2]. Therefore, drawing on the principles and methods of ecology to explore the reform of classroom models supported by information technology has important theoretical significance and practical value for optimizing the structure and function of the classroom ecosystem and for promoting education and teaching reform and innovation [1].
2 Classroom Interpretation from the Perspective of Ecology

Ecology is the science that studies the relationship between organisms and their surrounding environment, and an ecosystem is a unified whole composed of living things and the environment within a certain space in the natural world. In this unified whole, living things and the environment influence and restrict each other and remain in a relatively stable, dynamic equilibrium for a certain period of time. In the classroom ecosystem, the organisms are the teachers and students engaged in teaching and learning activities [2]. The surrounding environment mainly refers to the classroom environment, including the physical, virtual and cultural environments of the classroom. The physical environment of the classroom includes the natural environment, material elements, and spatial layout; the virtual environment mainly refers to the technical environment, composed of cloud services and learning systems, that provides intelligent support for the five major business areas of teaching, learning, testing, management, and evaluation; and the cultural environment includes the interactive relationships formed between teachers and students, the atmosphere, and support services related to teaching and learning [3]. The interrelationships in the classroom ecosystem include the teacher-student relationship, the student-student relationship, and the relationship between teachers and students and the classroom environment [2]. The physical and virtual environments of the classroom provide a physical place and a virtual space for teaching and learning activities; their form is fixed and not easy to change, so they do not constitute the main relational elements of the classroom ecosystem. The cultural environment emerges from the support services that promote activities during teaching and learning and from the interactive relationships and atmosphere formed in the process [1]; it is a unity of instrumentality and purpose. Therefore, the relationships in the classroom ecosystem mainly refer to the teacher-student relationship and the student-student relationship. In a natural ecosystem, the reproduction and growth of organisms are accompanied by material consumption and by the flow and conversion of energy, and energy flow is the basis for the survival and development of all living things. Unlike the natural ecosystem, where the energy comes from the sun, the artificially created classroom ecosystem has only one source of energy: the people in the classroom, namely teachers and students [3]. At the same time, in the classroom ecosystem, teaching and learning
activities are the main carriers of the classroom relationship, and the flow and conversion of energy are mainly concentrated in the teaching and learning activities jointly carried out by teachers and students. As a result, the main research fields and hot issues in the perspective of the classroom ecosystem are formed, that is, how to promote the optimization of the structure and function of the classroom ecosystem through the improvement and perfection of teaching and learning activities, and then promote the growth and development of teachers and students [4].
3 Interpretation of the Imbalance in the Classroom Ecosystem

In a natural ecosystem, within a certain time and space and under relatively stable conditions, the structure and function of each part are in a dynamic state of mutual adaptation and coordination. The difference is that the balance of the classroom ecosystem is influenced by talent training goals, and its structure and function change constantly with the development of the times [4]. The ultimate value of education is to realize the all-round development of people, and the classroom is the main battlefield for achieving educational and training goals, as well as the main environmental space for realizing teaching goals and carrying out teaching activities. Based on the principles and ideas of ecology, researchers and practitioners in the field of education should "from a higher level, the level of life, use the concept of dynamic generation to re-understand classroom teaching in a comprehensive way, construct a new concept of classroom teaching, and let the classroom be full of vitality".

3.1 Goal Fragmentation: The Tension Between Three-Dimensional Goals and Educational Goals

In order to change the "dual base" goal orientation of traditional curriculum teaching, which pays attention only to basic knowledge and basic skills, the curriculum reform of 2001 clearly put forward three dimensions of teaching goals, namely knowledge and skills, process and methods, and emotional attitudes and values. Among them, knowledge and skills are placed first as an important teaching goal, while the latter two are called "process goals", highlighting the characteristics of student development [5]. The design of the three-dimensional goals aims to "make the process of acquiring basic knowledge and basic skills a process in which students learn to learn and form correct values at the same time". Therefore, scientific and reasonable teaching goal setting should be an organic fusion of the three-dimensional goals. However, in actual teaching, people often only look for the "three-dimensional goals" in the textual knowledge of subject teaching and separate them mechanically, so that they become mere labels [5]. Consequently, the tension between the all-round development of human beings and the three-dimensional goals forces education managers, researchers and practitioners to think about how the classroom ecosystem should organically integrate the functions of "teaching" and "educating".
3.2 Roles and Relationships: The Coexistence of Hegemonic Teachers and Silent Students

In a lecture-style classroom, the spatial layout of tables and chairs arranged in neat rows, like a seedling field, hides the power structure of the classroom. Teachers are the masters of power: with a hegemonic mentality they control the whole process of setting teaching goals, selecting teaching strategies, designing and organizing teaching activities, and conducting teaching evaluation [6].
4 The Basis for the Construction of the Information Technology Smart Classroom Ecosystem

The smart classroom is a teaching and learning ecosystem that integrates data, resources, and activities in an intelligent environment, supports precise teaching and personalized learning, and focuses on improving students' core literacy and overall development [6]. The smart classroom ecosystem is an artificial ecosystem; its fundamental goal is to use classroom reform as the breakthrough point and means, exploring the innovative practice of integrating intelligent technology into the main links of teaching, learning, testing, management, and evaluation.

4.1 Target Positioning: Focus on Improving the Core Literacy of Students

In line with the worldwide wave of core literacy construction, the education sector in our country takes the cultivation of "all-round people" as its core point, divides core literacy into three aspects ("cultural foundation, independent development, and social participation"), and subdivides it into six major literacies and eighteen basic points. On this basis, the core literacy of each subject is established, enriched in the curriculum standards of each subject, and used to enhance the ideological quality, scientific rigor, integrity and operability of those standards [7]. Disciplinary core literacy is not only the subject-specific implementation and embodiment of core literacy; the differences between disciplines are also reflected in the "unique disciplinary value" of student literacy training.
632
B. Cao
Fig. 1. The classroom is an ecosystem constructed of multi-dimensional, interlaced and complex connections
4.3 Process Reshaping: The Transformation from "Narrative Logic" to "Problem Logic"

Curriculum standards, and curriculum materials based on those standards, are the main text materials for classroom teaching. Behind them lies the deep logical basis for the selection of subject knowledge: this selection not only covers the internal structure of subject knowledge but is also embodied in the systematic presentation of content and form, which leads to the narrative orientation of teaching material content [8]. When this narrative-oriented text material seamlessly matches the teacher's refined teaching design, it gives birth to the typical logical structure of classroom teaching, namely narrative logic. Based on analysis of the text structure, terminology organization and narrative logic of the teaching material, teachers can easily grasp the level and focus of the teaching content.

4.4 Evaluation Reconstruction: From the Evaluation of Teaching to the Evaluation of Teaching and Learning with Technical Support

In current classroom teaching, the core and focus of teaching evaluation is the teacher's teaching. The evaluation content includes the advanced nature of teaching concepts, the setting of teaching goals, the selection of teaching content, the use of teaching media, the control of teaching rhythm, the appropriateness of teaching methods, teaching effects, and teachers' personal qualities [8]. Although diagnosis and feedback from such teacher-oriented evaluation can offer targeted suggestions for teachers to improve teaching, this focus of teaching evaluation ignores the existence of students and "reflects a teacher-centered teaching view of imparting knowledge".
5 The Construction of a Smart Classroom Ecosystem Oriented to Skills Training

Based on the above explanations, this research holds that the smart classroom ecosystem is an interconnected ecosystem composed of the subjects of teaching and learning, the environment, and resources [9]. It emphasizes the realization of learning-based teaching in a smart environment and, through the improvement of students' core literacy, promotes their all-round, individual, independent, and lifelong development. The model of the smart classroom ecosystem is shown in Fig. 2.
Fig. 2. Model of smart classroom ecosystem
The subjects of teaching and learning in the smart classroom ecosystem are teachers and students. The supporting environment is a prerequisite for carrying out smart classroom activities: based on the school's existing classroom environment, the cloud intelligent learning platform and intelligent hardware technology are used to endow the classroom with smart attributes, creating an environment in which the physical environment, the virtual environment and the cultural environment are fused into one, serving as a ubiquitous learning environment for teachers' precise teaching and students' personalized learning [10]. The learning resources of the smart classroom focus on improving students' core literacy; they are the curriculum introduced into smart classroom teaching activities and the necessary, direct elements for implementing learning activities, mainly referring to the multi-dimensional resources of each subject that support the achievement of learning goals and tasks, including learning support resources and teaching support resources [9]. The teacher-student relationship, the student-student relationship and core literacy in the smart classroom ecosystem need to be established and presented through teaching and learning activities [10]. Supported by intelligent technology, teaching and learning activities have expanded from offline classroom teaching to a combination of online and offline, and from in-class teaching to a combination of in-class and extracurricular activities. Therefore, the teaching and learning activities of the smart classroom closely link the three stages of "focusing on problems before class, internalizing collaboration
634
B. Cao
in class-integrating after class”, and realize “online knowledge” based on the “platform + resources + services” provided by intelligent technology [10].
6 Conclusion
The construction of the smart classroom ecosystem model not only helps enrich theoretical research on smart classrooms, but also helps standardize classroom teaching supported by current information technology and provides a practical reference for the innovation of smart classroom teaching. As is well known, no education and teaching reform can be accomplished overnight. Smart classroom teaching reform and innovation likewise require the full participation and support of government, enterprises, schools, scientific research institutions and parents. Only in a benign ecological environment of multi-party cooperation can the smart-education concept that takes the classroom as the main arena be implemented, and the goal of promoting educational modernization through educational informatization be realized.
References
1. Xu, J.B.: Information technology promotes the informatization of education. J. Nanyang Norm. Univ. 4(08), 90–92 (2015)
2. Dong, W., Li, F., Lv, H.Y.: A preliminary study on the creation of characteristic curriculum of mental health education in primary and secondary schools under the background of information technology. Liaoning Educ. 11(04), 43–45 (2018)
3. Tao, W.J.: Exploration and practice of modeling family education curriculum. Basic Educ. Forum 13, 55–56 (2017)
4. Wei, F.: Analysis of classroom ecological connotation under the background of educational informationization. J. Wuhan Inst. Eng. Technol. 10(01), 124–126 (2019)
5. Peng, L.F.: Integrating information technology to optimize the ecology of the sports classroom. China School Sports 11(05), 210–213 (2020)
6. Wang, X.: Research on the reconstruction of the ecological environment of the "smart" flipped classroom. English Square 9(05), 19–21 (2016)
7. Hu, C.F., Zhao, L.N.: Building a good flipped classroom ecology. Comput. Knowl. Technol. 11(06), 39–41 (2016)
8. Shao, Z.: The value meaning, mechanism and practice path of vocational education blended teaching to achieve "two-line integration". Chin. Vocat. Tech. Educ. 8(23), 12–17 (2019)
9. Jia, Y., Wei, F.: Reconstruction of higher vocational teaching classroom ecosystem under the background of education informationization. Forum Ind. Technol. 16(07), 62–64 (2013)
10. Yin, D., Tian, J.R.: Research on the dynamic balance mechanism of classroom ecosystem. Educ. Theory Pract. 12(11), 192–194 (2018)
Innovation and Development of Environmental Art Design Thinking Based on Artificial Intelligence in Culture, Form and Function Jing Hu and Ling Fu(B) Gongqingcheng College of Nanchang University, Nanchang, Jiangxi, China
Abstract. With the rapid development of society, technology has gradually penetrated all aspects of people's lives and gradually changed the living environment. As a systematic discipline that keeps pace with the times, environmental art design plays an increasingly important role in theoretical guidance and practical planning for the rational planning and beautification of interior and exterior spaces. Intelligence has become the trend and direction of indoor and outdoor environmental art design, bringing new opportunities for development and innovation. In order to find a suitable path under the trend of intelligence, this paper compares and analyzes the current situation of environmental art design and explores innovative development paths for it. The results show that environmental art design thinking based on artificial intelligence enjoys good innovation and development in culture, form and function, and will have room to grow in future life. Keywords: Artificial intelligence · Environmental art design thinking · Comparative analysis · Innovation and development
1 Introduction
With the continuous emergence of artificial intelligence, VR and other information technologies, environmental art design not only has physical dimensions but can also introduce virtual space into reality with the help of virtual reality technology [1]. The collision between design thinking and artificial intelligence has changed the original mode of design thinking, so the teaching mode should be rearranged to guide the innovation of design thinking. With the development of science and technology, artificial intelligence technology is gradually improving; it can optimize the teaching environment of environmental art design, and design itself needs to guide the innovation of design concepts and modes of thinking. The development of digital technology provides a new platform for students to create works. At the same time, relying on modern design concepts and digital technology, design thinking and design inspiration have been deeply stimulated [1]. Facing the influence of artificial intelligence in the current era, we must think about how to guide students' innovative thinking,
stimulate students' design thinking, and enhance students' core competitiveness [2]. Combining the digital age with the environment, environmental art design organically addresses the relationship between people and their environment. As artificial intelligence is used more and more widely in modern life, it has aroused the interest of many experts, and many teams have carried out relevant research. For example, some teams have found that artificial intelligence has become a new engine of economic development, driving a new round of technological and industrial change, while its development also faces problems such as over-investment. On the basis of the basic situation of China's artificial intelligence industry, they summarize the supply-demand mismatch and information asymmetry facing the industry's development, and propose innovative development ideas and models, such as the "platform + track" form, to address problems in which traditional enterprises cannot find innovative enterprises, innovative enterprises cannot find orders, and enterprises cannot find good projects [2]. Other teams have found that innovative thinking in environmental art design must grasp the pulse of the times: design ideas should be rooted in the development of modern society and, while inheriting and carrying forward outstanding design works at home and abroad, should use new concepts and new methods to arouse enthusiasm for new design, develop an innovative pattern of environmental art design with Chinese characteristics, create good development space for environmental art design, enable people to enjoy a richer experience of environmental culture and art, and promote the healthy development of environmental art design; that work discusses environmental art design under the concept of innovative thinking [3]. Still other teams have found that emerging technologies represented by artificial intelligence bring great changes to design objects, processes and methods, generate new demands for intelligence, and give birth to many new theories, models, products, methods and formats, leading to a reform of the design thinking and method system; that work summarizes the changes brought by artificial intelligence to the field of design and discusses the characteristics of innovative design thinking in the era of artificial intelligence from four aspects: the system view, the identification of design problems, the relationship between design and computing, and how to build a technology cognition system. Although these research results are fruitful, there are still deficiencies in the investigation of the innovation and development, in culture, form and function, of environmental art design thinking based on artificial intelligence [4]. In this paper, in order to study the innovation and development of environmental art design thinking based on artificial intelligence in culture, form and function, an investigation is carried out mainly from these three aspects. Based on the investigation and analysis of the data, it is found that innovative development at the functional level has proceeded more smoothly than the other two, while development at the formal level has been more tortuous.
2 Method
2.1 Cultural Innovation in Design Thinking
In the process of design thinking, regional cultural resources and cultural backgrounds should be re-examined, and people's interest in life should be enhanced with the assistance of artificial intelligence technology, so as to design environments and products that meet the requirements of cultural creativity [5]. Culture, as an important part of design thinking, is materialized in front of people. Cultural creative products that accord with local characteristics and people's spiritual needs need not be specific objects only: through virtual reality technology and immersive design, the cultural elements of a specific period can also be presented before the eyes by means of vision, touch and so on. In the digital era, design thinking and cultural innovation, the regional display of product forms, art, social status and cultural taste, and tactile and perceptible spatial forms enable people to understand culture, learn culture and feel its charm. Cultural innovation in design thinking is supported by cultural heritage: it infuses cultural connotation into design in the digital era and takes the development of culture as its perspective. Data and information bring great convenience to the collection of cultural creativity and the diversification of information resources, making the design process more evidence-based. Full play should be given to innovation at the cultural level, integrating cultural background into design and creation. The uncertainty reasoning formula is [6]:

P(Ai | B) = P(Ai) P(B | Ai) / Σj P(Aj) P(B | Aj), i = 1, 2, ..., N (1)
2.2 Innovation in the Form of Design Thinking
In the design thinking process, the forms of expression and extraction need to change. In an artificial intelligence environment, the forms of artistic design are varied. The traditional painting and art exhibition has given way to a new scene of immersive experience and human-computer interaction. The audience has changed from passively acquiring knowledge to participating, which enhances the sensory experience, lets them feel the design concept and the inspiration of artistic creation of the time, and lets them explore the beauty of art together with the "wise men". In the digital age, the forms from which design is extracted should also be reconsidered. Sources at the formal level include shape, material, pattern and the application of form, among other aspects. Different design ideas have different formal elements. For example, the design of a Silk Road museum should begin with the color and decoration of silk and then match them with elements of the regional culture. Many elements can be extracted from complex data, but they should be extracted effectively according to the actual situation to achieve an organized result. Innovation at the formal level of design thinking takes many forms; artificial intelligence influences design and makes art design more concrete and comprehensive, in accordance with formal beauty, while also enriching thinking with data. Both the immersive experience and the extraction of formal elements make schemes more complete. From passive to active, from one-sided to in-depth: this is the meaning of innovation in the form of design thinking.
In today's era of big data, Bayes' formula is one of the essential tools of artificial intelligence and machine learning. The famous Bayes formula is [7]:

P(A | B) = P(B | A) P(A) / P(B) (2)

In words, posterior probability = prior probability × adjustment factor. Here P(B) is obtained from the total probability formula:

P(B) = P(B | A) P(A) + P(B | Ā) P(Ā) (3)
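To make Eqs. (2) and (3) concrete, the following minimal Python sketch computes a posterior probability from a prior and two conditional likelihoods via the total-probability formula; all numeric values are invented for illustration and are not taken from the paper.

```python
# Minimal illustration of Bayes' formula (2) and the total-probability
# formula (3); the prior and likelihoods below are invented example values.

def posterior(p_a, p_b_given_a, p_b_given_not_a):
    """Return P(A|B) = P(B|A)P(A) / P(B), where
    P(B) = P(B|A)P(A) + P(B|not A)P(not A)."""
    p_not_a = 1.0 - p_a
    p_b = p_b_given_a * p_a + p_b_given_not_a * p_not_a  # total probability
    return p_b_given_a * p_a / p_b

if __name__ == "__main__":
    # Prior belief P(A) = 0.3, adjusted by the evidence B.
    print(posterior(p_a=0.3, p_b_given_a=0.8, p_b_given_not_a=0.2))  # ~0.632
```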
2.3 Innovation at the Functional Level of Design Thinking
In the process of design thinking, assigning function is also an important factor in judging value. In an artificial intelligence environment, function is always present in products and spaces, and it is as important as culture and form to satisfy both functional and aesthetic requirements. In the digital context, products and spaces are designed with functions such as age-friendliness, practicality, convenience and education, so that consumers can evaluate whether they are valuable and worthwhile. Ergonomics and human-computer interaction have quietly crept into our lives. The appearance of the smart home has changed the original space design, and how to make the home more intelligent has become a problem many designers have to think about. Fresh air systems address poor indoor air circulation, floor heating systems address insufficient indoor heating, and water purification systems address poor water quality; such changes embody the functional level of design thinking. Artificial intelligence solves many problems and changes the way we think about design. Innovative thinking at the functional level breaks the original way of solving problems: it keeps pace with the times, endows products and spaces with new vitality, satisfies the new era's pursuit of and yearning for a better life, and gives full play to the role of products. Design thinking should be highly forward-looking, so that function keeps improving. To define or redefine a problem or challenge, a point-of-view (PoV) formula can be used [8]: PoV = persona + need + insight; in other words, PoV = user portrait + needs + insight.
3 Experiment
3.1 Experimental Data Sources
The application of design thinking innovation should be based on comprehensive ability and burst out with stronger vitality. Environmental art design has kept pace with the times because of its strong innovation in design thinking, which constantly stimulates new vitality. Being curious about the outside world and thinking of ways to solve problems, with innovation at the core, environmental art design should satisfy not only the beauty of form but also practical function. Design is never a matter of armchair strategy, of saying more than one does. Model making not only improves hands-on ability but also allows innovative thinking to be applied to the model. The innovation of design thinking is not utopian; it must be put into design with the support of sufficient theoretical knowledge. By thinking while doing, finding problems and constantly optimizing one's own design, the level of design is improved, and innovation naturally meets the requirements of the present era. Design thinking is integrated anew to form varied design ideas that learn from one another in a virtuous cycle, yielding one's own design thinking to be applied in practice. In this paper, the innovation and development of environmental art design thinking based on artificial intelligence in culture, form and function over the past 60 years was investigated. Questionnaires were conducted in Xi'an, Gansu, Jiangsu, Fujian and other places. 302 questionnaires were collected, of which 108 were valid and 194 were invalid, a valid-response rate of 35%.
3.2 Experimental Design
Combining design thinking with the characteristics of the times, at the level of the demands of design thinking elements, the innovative methods of materialization, interaction, seeking difference and leaping are obtained; at the level of strategies for guiding innovative thinking, strategies of pursuing what is new, interesting and different are obtained. Letting the thinking of the post-information society combine with environmental art design, we try to explore directions more in line with the development of the times, so that design becomes a driving force for harmonious and orderly development between humans and the natural and social world. In order to avoid errors caused by regional differences and make the results more accurate, the survey was conducted in multiple locations, not only in southern cities, and the questionnaire was repeatedly deliberated. The survey period was not limited to one or two years, and the results are represented visually with line charts.
4 Result
4.1 Innovative Development in Culture
In the collection of data on cultural ideas, the diversification of information resources has brought great convenience and made the design process more evidence-based, giving play to innovation at the cultural level and integrating cultural background into design and creation. Regarding innovation and development in culture, an investigation covering 1956 to 2019 was carried out using five rounds of questionnaires, and a line chart is used to show the development intuitively; the specific results are shown in Fig. 1. As shown in Fig. 1, the development of cultural innovation showed a downward trend from 1956 to 1980 and an upward trend from 1980 to 2019, indicating good development.
Fig. 1. The trend chart of the integration of cultural background into design in the past 60 years (number of questionnaires by year, 1956–2019)
4.2 Innovation and Development in Form
Both the immersive experience and the extraction of formal elements make the scheme more complete. From passive to active, from one-sided to in-depth: this is the meaning of innovation in the form of design thinking. A survey covering 1956 to 2019 was carried out using five rounds of questionnaires, and a line chart is used to show the development and changes intuitively. The specific results are shown in Fig. 2.
Fig. 2. Chart of the development trend of innovation in form over the past 60 years (number of questionnaires by year, 1956–2019)
As shown in Fig. 2, the development is not so smooth and always goes through twists and turns. After reaching a peak in 1980, the exploration of innovative development in form gradually showed a downward trend, while it began to recover gradually towards 2015.
4.3 Functional Innovation and Development
Keeping pace with the times, endowing products and spaces with new vitality, satisfying the new era's pursuit of and yearning for a better life, and giving full play to the role of products require design thinking to be highly forward-looking so that function keeps improving. From 1956 to 2019, a survey was conducted on the innovative development of function. Six rounds of questionnaires were used, and their development and changes are expressed visually with a line chart; the specific results are shown in Fig. 3.
Fig. 3. Trend chart of functional innovation over the past 60 years (number of questionnaires by year, 1956–2019)
As shown in Fig. 3, the overall trend of functional innovation development is upward; it is relatively smooth from 1956 to 1980 without significant development. The turning points mainly occurred in 1985 and 2015, and it is believed that development will continue to move forward steadily in the future. The specific results of the overall comparison are shown in Table 1.

Table 1. Comparative chart of innovation and development in culture, form and function in the past 60 years

Category | In 1956 | In 1980 | In 2012 | In 2015 | In 2019
Culture  | Rising  | Falling | Rising  | Rising  | Rising
Form     | Rising  | Rising  | Falling | Falling | Rising
Function | Rising  | Rising  | Rising  | Rising  | Stable
5 Conclusion
Interior environmental design is a new industry under the current environmental background, representing the progress of the times and the improvement of people's aesthetic level. Environmental art design must conform to the aesthetic needs and living habits of human beings and carry out characteristic design according to the needs of different people, so that the design reflects elements of personality and makes the indoor living environment warmer and more comfortable. Innovative thinking about the decoration of interior environmental art design must address the deficiencies and defects in the current decoration process and find targeted solutions, so that interior environmental art design not only has strong cultural-background characteristics but can also meet the aesthetic needs of the public.
References
1. Verheij, B., Wiering, M. (eds.): BNAIC 2017. CCIS, vol. 823. Springer, Cham (2018). https://doi.org/10.1007/978-3-319-76892-2
2. Hosny, A., et al.: Artificial intelligence in radiology. Nat. Rev. Cancer 18(8), 500–510 (2018)
3. Wagman, M.: Artificial Intelligence and Human Cognition. Q. Rev. Biol. 68(1) (2019)
4. Akihiro: Artificial intelligence in psychiatry. Brain Nerve (Shinkei Kenkyu no Shinpo) (2019)
5. Busquet, F., Vinken, M.: The use of social media in scientific research and creative thinking. Toxicol. Vitro 59, 51–54 (2019)
6. Corrigendum: Enhanced creative thinking under dopaminergic therapy in Parkinson disease. Ann. Neurol. (2019)
7. Copping, A.: Exploring connections between creative thinking and who attaining their. Education (2018)
8. Jiang, S.: A study on the application of ecological concepts in environmental art design. J. Heihe Univ. (2019)
Power Grid Adaptive Security Defense System Based on Artificial Intelligence Lijing Yan(B) , Feng Gao, Yifan Song, and Huichao Liang State Grid Henan Information and Telecommunication Company, Zhengzhou, Henan, China
Abstract. With the increasing production-safety requirements of national grid companies, grid enterprises should break away from the traditional safety management model and form a more accurate, adaptive grid safety management system. The goal of security defense is to prevent and predict safety accidents in power grid enterprises, to put forward implementation measures, and to eliminate potential physical, environmental and equipment safety risks. Security defense plays a key role in pre-determining and controlling risks. This paper studies and analyzes the adaptive security defense system of the power grid against the background of artificial intelligence. First, the significance of the power grid adaptive security defense system and the influence of artificial intelligence on it are expounded. Then the method of building an adaptive security defense system for the power grid in an artificial intelligence context is discussed. Finally, the application status of the adaptive security defense system for the power grid based on artificial intelligence is investigated: questionnaires are issued, and data are collected and processed. The survey results show that lightning and typhoon each account for 37% of the probability of causing equipment failure in the artificial intelligence defense system, which enables the power grid adaptive security defense system to carry out risk prevention in real time. The influence of network information on the adaptive security defense system of the power grid based on artificial intelligence will reach a peak as the information data and the defense system are perfected, and then decline. Keywords: Artificial intelligence · Security defense · Defense system
1 Introduction
The popularization and application of the State Grid Corporation of China information-building project has improved the level of information management of the energy supply system to some extent, improving the efficiency of security protection and reducing the workload of security personnel. The power grid is an important piece of national infrastructure. Advanced power grid construction and the development of automation technology have laid a solid foundation for a stable communication system. The smart grid is a new type of power grid that is highly integrated with modern information technology and other advanced technologies. Measurement technology, automatic control technology and communication technology, combined with the physical network, constitute the most important network security technologies for realizing the smart grid.
Against the above background, scholars at home and abroad have produced many results on adaptive security defense systems for the power grid based on artificial intelligence. Zhong Zhichen, on the basis of analyzing the characteristics of network traffic data in power grid industrial control systems, put forward a design method for industrial control systems based on network traffic data, constructed an integrated safety monitoring and early-warning platform for power grid industrial control systems, and proposed the architecture and methods by which the platform detects current anomalies in the grid. He introduced the power grid industrial control system, designed a data acquisition method, and used entropy to quantify the characteristic attributes of network traffic so that they are easy to classify. The K-means cluster analysis algorithm focuses on detecting abnormal network traffic and is optimized to improve detection accuracy, realize comprehensive security monitoring and early warning, and detect potential security risks in a timely manner [1]. Most work in safety engineering focuses on safety requirements in the software system design process, and existing adaptive safety engineering design requires complex design preparation [2]. Djema, M.A., Boudour, M., Agbossou, K., Cardenas, A. and Doumbia, M.L. studied the improvement of single-phase inverter synchronization technology: direct power control (DPC) is used to simulate the structure in the time domain, and an artificial neural network trained by a meta-heuristic algorithm is used to control the decoupled active and reactive power. They proposed a method for training a multilayer perceptron (MLP) using the grey wolf optimizer (GWO). This method has good synchronous-signal generation ability and smooth power quality, and is suitable for grid-connected and micro-grid (MGS) power system control [3]. To ensure the rapid development of the social economy and the well-being of the people, it is very important to establish a safe and stable power grid. However, with the increasing pressure on the world's resources and environment, such as the need to reduce resource consumption, protect the environment and develop sustainably, the demands on the power industry are growing, and only by meeting them can the increasing needs of users be satisfied. Although a large, complex interconnected grid can obtain higher resource utilization and social benefits, it also carries corresponding hidden dangers for information security [4]. Therefore, an adaptive security system based on artificial intelligence can provide reliable protection for the reliable and stable operation of the power grid. This contribution examines the technical support and security technology required by such a system and investigates its application status.
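The entropy-plus-clustering idea summarized above can be illustrated with a short, self-contained sketch. This is only a generic schematic under assumed inputs (windowed traffic records with destination ports and source IPs); it is not the monitoring platform described in the cited work.

```python
# Schematic of entropy features + k-means clustering for traffic anomaly
# screening, as a generic illustration of the approach summarized above.
import math
from collections import Counter

import numpy as np
from sklearn.cluster import KMeans  # assumes scikit-learn is installed

def entropy(values):
    """Shannon entropy of a list of categorical values (e.g. destination ports)."""
    counts = Counter(values)
    total = sum(counts.values())
    return -sum((c / total) * math.log2(c / total) for c in counts.values())

def window_features(windows):
    """One feature vector per time window: entropy of ports and of source IPs."""
    return np.array([[entropy(w["ports"]), entropy(w["src_ips"])] for w in windows])

# Hypothetical traffic windows; real input would come from packet captures.
windows = [
    {"ports": [102, 102, 502, 502], "src_ips": ["a", "a", "b", "b"]},
    {"ports": [102, 502, 502, 502], "src_ips": ["a", "b", "b", "b"]},
    {"ports": list(range(1000, 1040)), "src_ips": ["c"] * 40},  # scan-like window
]

X = window_features(windows)
labels = KMeans(n_clusters=2, n_init=10, random_state=0).fit_predict(X)
print(labels)  # windows falling in the minority cluster can be flagged for inspection
```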
2 Research on Power Grid Adaptive Security Defense System Based on Artificial Intelligence
2.1 Research on Power Grid Adaptive Security Defense System Based on Artificial Intelligence
Electricity is an important pillar of national and social development, and its stable functioning is directly linked to the development and security of the national economy. As the grid grows in size and efficiency, the risk to its safe operation increases rapidly. Once a power grid failure occurs, the potential scope of impact or loss will become larger
and larger. According to the U.S. Department of Energy, electricity demand and consumption in the United States have increased by 2% to 5% annually over the past 20 years. This growing demand and the complexity of the network have led to many congestion and security issues. Even with improved fault diagnosis technology and a higher degree of automation, the cascading effects triggered by a single network fault are difficult to handle without effective communication and monitoring mechanisms, which seriously threatens network security [5]. The artificial intelligence grid, however, consists of a large number of substation nodes, generating units and other control and monitoring equipment, and the power provided by the generating nodes can be consumed by the node load or injected into the grid for use by other nodes. Smart grid managers are usually installed in regional control centers, whose tasks are to monitor the operation status of the power system, allocate power resources reasonably, and ensure the safe and normal operation of the power grid. The adaptive security defense system of the power grid therefore plays a vital role in its safe and normal operation.
2.2 The Influence of Artificial Intelligence on the Adaptive Security Defense System of the Power Grid
Because of the high reliability and real-time requirements of grid information, centralized management has become an important feature of information communication in the artificial intelligence grid. The transmission system is connected with distributed distribution systems, and distributed power supplies and high-voltage energy storage devices exist in the power system, so the information security of the power system directly affects its dispatching center. Artificial intelligence physical security refers to the effective adjustment of the communication network for modern substations and the transmission network through information control, so as to ensure the information security of the communication information system and the network monitoring system. Efforts should be made to prevent unauthorized parties from tampering with, leaking or damaging information assets, to ensure that the privacy of smart grid communication information is not compromised [6], and to prevent unauthorized third-party identification and data theft effectively and quickly. As the core technology in the security field, existing cryptographic theory can provide theoretical support for power grid data security: by using cryptographic algorithms and protocols, basic problems such as virus prevention, database security and access control can be addressed [7].
2.3 Technical Support of the Power Grid Adaptive Safety Defense System
(1) Power grid adaptive security defense system based on multi-agent technology. Agent technology provides a reasonable conceptual model for studying the characteristics of distributed computing systems comprehensively and accurately. Multi-agent system technology mainly studies the coordination and cooperation among multiple agents. In order to improve the reliability and stability of adaptive protective devices, multi-agent technology is adopted in the system; it can make use of both physical devices and agent technology at the same time [8]. In recent years it has therefore become a research hotspot for many experts and scholars.
(2) Power grid adaptive security defense system based on information security protection technology. The relevant departments in China have gradually established an internal information security defense system for the power grid in order to avoid breakdowns of the information network, strictly prevent information leakage, eliminate data loss and prevent system damage. In the future, the following points should be achieved: first, improve and standardize the application of the information security system; second, improve the information network access system; third, improve the security of the identity authentication scheme so that it can be applied within the information security system; fourth, delimit areas to maintain classified protection of the data network and information security; fifth, establish a high-level emergency center within the information security system; and finally, improve the quality of internal information security work [9, 10].
3 Grid Adaptive Security Defense System Based on Artificial Intelligence
(1) Agent technology algorithm. Traditional quick-break (fast power-off) protection has no action time delay, so from the perspective of protection selectivity its setting value is chosen according to the maximum short-circuit current at the outlet of the next line; adaptive quick-break protection improves on the traditional current setting algorithm. Relay protection setting values vary with the system operation mode and the short-circuit type. The current setting value is given by formula (1):

T = Srel Sk F / (Pk + PT) (1)

where Srel is the reliability coefficient, Sk is the fault type coefficient, F is the phase electromotive force of the system's equivalent power supply, PT is the impedance of the protected line, and Pk is the equivalent reactance from the protection installation point to the power supply. Traditional overcurrent protection is set to avoid the maximum load current, which cannot adapt to changes in the operation mode and is not optimal under certain conditions. To overcome these shortcomings and improve the setting effect, adaptive overcurrent protection is set according to the load current. The setting value of the action current is shown in formula (2):

T = Srel Sk SJm Tg / Sre (2)

where Srel is the reliability coefficient, Sk is the fault type coefficient, SJm is the motor self-starting coefficient, Tg is the current load current, and Sre is the return coefficient of the overcurrent relay.
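As a hedged illustration of how Eqs. (1) and (2) might be evaluated, the sketch below plugs in invented coefficient values; the function names and numbers are assumptions for demonstration only, not values from the paper.

```python
# Illustrative evaluation of the adaptive setting formulas (1) and (2);
# all numeric values below are invented for demonstration.

def quick_break_setting(s_rel, s_k, f, p_k, p_t):
    """Eq. (1): setting value from the equivalent-source phase EMF divided by
    the reactance back to the source plus the protected-line impedance."""
    return s_rel * s_k * f / (p_k + p_t)

def overcurrent_setting(s_rel, s_k, s_jm, t_g, s_re):
    """Eq. (2): adaptive overcurrent setting derived from the present load current."""
    return s_rel * s_k * s_jm * t_g / s_re

print(quick_break_setting(s_rel=1.25, s_k=1.0, f=63.5, p_k=2.0, p_t=4.0))
print(overcurrent_setting(s_rel=1.2, s_k=1.0, s_jm=1.5, t_g=300.0, s_re=0.95))
```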
(2) Hash function authentication technology. A hash function, also known as a one-way hash function, is a mathematical tool commonly used in many security systems, usually for information compression. In the authentication scheme based on a hash function, a message authentication code is generated: the secret key and a synchronization code are combined with the original message under the shared secret key of the system's authentication scheme. The detailed construction is shown in formula (3):

MACD = G{CB | M | TCG} (3)

where the one-way function G is applied to the message M concatenated with the key and the synchronization code. It can be concluded from the above formula that two different messages M and C will not produce the same hash value; even if M changes only very slightly, the resulting hash value changes greatly [11, 12]. Therefore, in the adaptive security defense system of the power grid based on artificial intelligence, no key information is transmitted in plain text, so even if an adversary intercepts the random number in transit, it is impossible to obtain the new shared key. This is essential for artificial intelligence to handle information security.
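The paper does not specify the one-way function G, so as an illustration of the general pattern in Eq. (3), a keyed digest over the message concatenated with a synchronization code, the sketch below uses the standard HMAC-SHA-256 construction; the key, message and synchronization code shown are invented.

```python
# Illustration of the keyed-digest pattern in Eq. (3): a MAC computed over the
# message together with a synchronization code. The paper's function G is not
# specified; standard HMAC-SHA-256 is used here purely as an example.
import hashlib
import hmac
import os
import time

def make_mac(shared_key: bytes, message: bytes, sync_code: bytes) -> bytes:
    return hmac.new(shared_key, sync_code + b"|" + message, hashlib.sha256).digest()

def verify_mac(shared_key: bytes, message: bytes, sync_code: bytes, mac: bytes) -> bool:
    expected = make_mac(shared_key, message, sync_code)
    return hmac.compare_digest(expected, mac)  # constant-time comparison

key = os.urandom(32)                   # shared secret key (illustrative)
sync = str(int(time.time())).encode()  # synchronization code, here a timestamp
msg = b"breaker 12: open"

tag = make_mac(key, msg, sync)
print(verify_mac(key, msg, sync, tag))                    # True
print(verify_mac(key, b"breaker 12: close", sync, tag))   # False: any change alters the MAC
```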
4 Application of Grid Adaptive Security Defense System Based on Artificial Intelligence
4.1 Experimental Content
The adaptive security system is an important part of the development of the State Grid. In order to find a security system better suited to artificial intelligence, this paper examines the application of artificial intelligence-based adaptive security systems in the power grid, including whether a company has established such a system and what benefits it brings. The experiment uses questionnaires collected both online and offline.
4.2 Experimental Process
The content of the survey was determined according to the purpose of the experiment, and the questionnaire was designed according to questionnaire design standards to ensure that the results are reasonable and objective. A preliminary investigation was made, discussing the external environment and the internal environment of the system respectively: the external environment covers the influence of the natural environment on the power grid adaptive security defense system, and the internal environment covers the influence of network information on the artificial intelligence system. According to the preliminary survey results, the questionnaire and other relevant matters were adjusted, with feedback on the application situation of the relevant power grid enterprises. After the final questionnaire was prepared, the investigators were responsible for issuing and collecting the questionnaires. 200 questionnaires were sent out and 189 valid questionnaires were received, a recovery rate of 94.5%. The whole process lasted one month. The questionnaire data were then collected and analyzed to obtain the survey results.
5 Analysis of the Application of Grid Adaptive Security Defense System Based on Artificial Intelligence
5.1 Application of Power Grid Adaptive Security Defense System Based on Artificial Intelligence
(1) External environment. In this survey, a statistical analysis was conducted on the distribution of the surveyed electric power enterprises in the actual external environment. Electric power enterprises mainly include power grid enterprises and power generation enterprises, as detailed in Table 1:

Table 1. Power enterprises

Power Grid Enterprises | Large Power Generation Enterprises | Local Power Generation Enterprises
2                      | 9                                  | 7
China’s electric power enterprises mainly include power grid enterprises and power generation enterprises. In mainland China, there are only two power grid enterprises, namely State Grid and China Southern Power Grid, and there are relatively more power generation enterprises. However, the large generation enterprises are mainly concentrated in the five power generation groups and four small giants. In addition, there are some power construction enterprises, such as Sinohydro and China Electric Power, which are all consulting or power construction enterprises. Among them, the external environment includes lightning disaster, typhoon disaster and hail disaster. The meteorological department can collect conventional weather information and typhoon, ignition point, freezing rain and other destruction information. The electric power department can provide data sources for the evaluation of equipment failure probability caused by continuous disasters, the safety and stability analysis of power grid and the control decision. The artificial intelligence defense system builds a probability and statistics diagram of equipment failure that may be caused by typhoon, lightning, ice and mountain fire, as shown in Fig. 1:
Fig. 1. Probability distribution of equipment failure (typhoon, thunder, icing, wildfire)
As can be seen from Fig. 1, lightning and typhoon cause equipment failures with high probability, and awareness of this enables the power grid adaptive security defense system to carry out risk prevention in real time.
(2) Internal environment. In this part of the investigation, the messages captured in the first two hours of the data set were selected as the defense information data. Since the information data must satisfy the condition of containing no attack data, no attacks were launched against the defense system during this period. Figure 2 shows the data for the different types of cycles observed during this period: the X-axis represents the sequence of observed requests, and the Y-axis represents the time difference between two adjacent similar requests.
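The quantity plotted in Fig. 2 can be sketched as follows, assuming each observed request is a (timestamp, type) pair; the log format and values are assumptions for illustration.

```python
# Sketch of the quantity behind Fig. 2: for each request type, the time
# difference between two adjacent requests of the same type. Input format and
# values are assumptions for illustration only.
from collections import defaultdict

# (timestamp in seconds, request type) - hypothetical observation log
requests = [(0.0, "type1"), (1.0, "type2"), (5.0, "type1"), (6.1, "type2"),
            (10.0, "type1"), (11.2, "type2"), (15.1, "type1")]

last_seen = {}
intervals = defaultdict(list)
for ts, kind in requests:
    if kind in last_seen:
        intervals[kind].append(ts - last_seen[kind])  # gap to previous same-type request
    last_seen[kind] = ts

for kind, gaps in intervals.items():
    print(kind, gaps)  # stable gaps indicate periodic (normal) request behaviour
```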
Fig. 2. Relationship between recognition accuracy of the periodic analyzer and the rate (series: Overall, Type 1, Type 2, Type 3; y-axis: accuracy)
As can be learned from Fig. 2, the influence of network information on the artificial intelligence-based adaptive grid security defense system rises with the improvement of the information data until it reaches a peak and then drops, and the security defense system can respond to buffer-overflow and denial-of-service attacks, so as to better identify valid information and improve the operating efficiency of the adaptive security defense system of the power grid.
6 Conclusion
This paper studies and analyzes the adaptive power grid security defense system in an artificial intelligence environment. The importance of the adaptive power grid security defense system and the influence of artificial intelligence on it are expounded, and the implementation method of the system in an artificial intelligence environment is discussed. Finally, the application status of the artificial intelligence-based adaptive power grid security defense system is studied through a questionnaire survey with data collection and processing. The results show that the probability of equipment failure caused by lightning and by typhoon in the adaptive power grid security defense system is 37% each, and that artificial intelligence enables the system to carry out risk prevention in real time. With the continuous improvement of information data, the influence of network information on the artificial intelligence-based adaptive power grid security defense system will reach a maximum and then decrease.
References
1. Zhichen, Z.: Security monitoring technology of power grid industrial control system based on network traffic anomaly detection. Electr. Power Inform. 15(1), 98–102 (2017)
2. Abdelrazek, M., Grundy, J., Ibrahim, A.: Adaptive security for software systems. In: Managing Trade-Offs in Adaptable Software Architectures, pp. 99–127 (2017)
3. Djema, M.A., Boudour, M., Agbossou, K., et al.: Adaptive direct power control based on ANN-GWO for grid interactive renewable energy systems with an improved synchronization technique. Int. Trans. Electr. Energy Syst. 29(3), e2766 (2019)
4. Williams, J.: Circular cities: what are the benefits of circular development. Sustainability 13(10), 5725 (2021)
5. Emura, K., Takayasu, A., Watanabe, Y.: Adaptively secure revocable hierarchical IBE from k-linear assumption. Des. Codes Cryptogr. 89(1), 1–40 (2021)
6. Osak, A., Buzina, E.: Analysis of flexibility of power systems as the method for analyzing a power system security in modern conditions. In: E3S Web of Conferences, vol. 216, p. 01026 (2020)
7. Huang, Q., Huang, R., Hao, W., et al.: Adaptive power system emergency control using deep reinforcement learning. IEEE Trans. Smart Grid 11(2), 1171–1182 (2020)
8. Qu, L., Wang, C., Zhang, J., et al.: Research and application of power grid intelligent inspection management system based on physical ID. In: E3S Web of Conferences, vol. 257, no. 2, p. 01027 (2021)
9. Jeavons, A.: What is artificial intelligence. Res. World 2017(65), 75 (2017)
10. Hassabis, D., Kumaran, D., Summerfield, C., et al.: Neuroscience-inspired artificial intelligence. Neuron 95(2), 245–258 (2017)
11. Moravčík, M., et al.: DeepStack: expert-level artificial intelligence in no-limit poker. Science 356(6337), 508 (2017)
12. Krittanawong, C., Zhang, H.J., Wang, Z., et al.: Artificial intelligence in precision cardiovascular medicine. J. Am. Coll. Cardiol. 69(21), 2657 (2017)
Innovative Mode and Effective Path of Artificial Intelligence and Big Data to Promote Rural Poverty Alleviation Jie Su(B) , Xiaoxiao Wei, Lingyi Yin, and Jingmeng Dong Haojing College of Shaanxi University of Science and Technology, Xi’an, Shaanxi, China
Abstract. Poverty remains a primary problem for the country, because it concerns not only the development of poor villages but also the country's long-term security. Although, after decades of exploration, China's poverty alleviation work has achieved good results and the poor population is constantly shrinking, poverty alleviation still faces many problems. This paper studies the innovative mode and effective path by which artificial intelligence and big data can boost rural poverty alleviation. Big data and artificial intelligence are used to construct a multi-dimensional information system for poverty alleviation targets, and the information of poor households is coded and modeled mathematically. The experiment shows that the algorithm in this paper can accurately identify the supporting measures required by targeted poverty alleviation groups, providing a more convincing and scientific basis than the manual recommendation and independent selection currently in use. Keywords: Artificial intelligence · Neural networks · Big data · Rural poverty alleviation
1 Introduction
As a kind of public policy, urban and rural planning serves to gradually implement government or regional development goals, usually covering industrial adjustment, tourism development, supporting public services, and environmental and ecological protection, stimulating and guiding market construction behavior in various ways. The state implements the strategy of targeted poverty alleviation and proposes to explore theoretical systems and strategic methods of targeted poverty alleviation with Chinese and local characteristics, so as to address the backward development status of poor areas [1]. The existing urban and rural planning system in China is dominated by types and patterns: development objectives are controlled and decomposed level by level on the basis of higher-level and lower-level planning tasks, and an efficient system of plan compilation and management has thus been established. It meets the needs of China's rapid economic development and, to some extent, has contributed to the rapid development of China's urbanization.
However, the content of this type of urban and rural planning, especially overall town planning and village planning, mainly serves to control land development and construction; it lacks guidance on the development problems of poor areas and lacks a market research perspective, so the planning results are largely not adaptive and the plans encounter real difficulties in implementation. The main point of Western theory on information technology and poverty alleviation is to reduce the poor population by using modern information technology. The Independent Commission on World Telecommunication Development believes that less developed countries should speed up the construction of basic networks and implement networked poverty alleviation work [2]. The United States, Japan and the United Kingdom have applied information technology to practical work for a long time and achieved good results in helping poor areas [3]. In this paper, big data and artificial intelligence are used to drive poverty alleviation through innovation in poverty-stricken rural areas. Only by understanding the data resource coordination mechanism of poverty alleviation in poverty-stricken rural areas can the battle against poverty be fundamentally won, thereby providing intellectual support of research value for the innovation and development of poverty alleviation modes under the information-based model.
2 Application of Big Data and Artificial Intelligence in Rural Poverty Alleviation
2.1 Influence of Big Data in Rural Targeted Poverty Alleviation
(1) Poverty identification is more accurate. Through the establishment of a big data platform, relevant information such as the per capita net income of the poor population can be input directly into the system. In the process of targeted poverty alleviation, the relevant data and information can be updated in a timely manner, improving the efficiency of information transmission and the accuracy of identification. At the same time, the relevant poverty alleviation departments can share the data platform, so that the policy of targeted poverty alleviation down to individual households can be implemented more effectively, changes in the situation of poor households can be updated from time to time according to the data, and the next round of targeted poverty alleviation measures can benefit the poor population [4, 5].
(2) Better analysis of the causes of poverty. After the data of poor households enter the platform, they can be analyzed with the help of big data models, and the real situation behind the data can be understood in greater depth [6]. Some people lose the ability to work because of disease, so they have no income while the disease keeps consuming the family's money, thus causing poverty; in other families, the cost of educating several children is too high, and the couple's income (labor income and farming income) is not enough to sustain these expenses, so they sometimes need to borrow money to cover them. The heavy burden on the family is the main cause of this kind of poverty. Therefore, when analyzing these causes with big data, we should first find the main cause and then the secondary causes, and
at the same time predict aspects that may lead to a return to poverty in the future. In this way, we can have a clearer understanding of the situation of each household, which is more conducive to helping the poor out of poverty [7].
(3) More reasonable resource integration. After the introduction of big data, data sharing can be realized; the data can be seen not only by the relevant government departments but also by other social organizations or enterprises, so that they can understand the situation of the region and provide help according to the characteristics and expertise of their own organizations, realizing a poverty alleviation scene of multi-party joint effort [8]. The participation of social organizations in poverty alleviation can make up for the deficiencies of the government and also help the government view poverty from a different perspective. The government plays a leading role in targeted poverty alleviation. First, it should do its own job well and invest funds and services to help the poor out of poverty, while thinking carefully about industrial planning, project design, capital investment and other aspects, so as to ensure that all assistance projects are feasible and can achieve corresponding results. Second, it should provide other services well, offering policy support and convenient conditions for social organizations to participate in poverty alleviation so that they can carry out their work smoothly. Finally, the government should include third-party evaluation organizations in project effect evaluation, so that the implementation effect can be assessed fairly and openly, and problems such as chaotic project implementation, lack of follow-up in the later stage of a project and unsatisfactory project results can be held accountable, improving project effectiveness and avoiding wasted resources and ineffective project progress.
(4) More scientific fund supervision. The big data platform records every project implemented and supervises its progress, so that the use of project funds is clearly understood and special funds are used for their designated purpose. In addition, each step carries the signatures and information of the supervisors or recipients, so that when problems arise in a project, or when the operation process appears opaque, is conducted covertly or involves embezzlement, the person in charge of the corresponding procedure can be found in time for investigation; the records can also be retained for future accountability. Such fund supervision can prevent problems in the use of funds, so that poverty alleviation funds truly help the poor and play their role in eradicating poverty and developing the economy.
2.2 Intelligent Recommendation Algorithm Based on Neural Network
(1) BP neural network. There are various types of artificial neural networks, among which the BP neural network has great advantages over the others and has become one of the most popular neural network models. Its main advantages are as follows:
The structure of the BP neural network is simple. The neurons of adjacent layers are fully connected, while neurons within the same layer are not connected. The multi-layer structure enables it to carry out complex mining and training tasks and to extract effective information from the input signals [9]. The BP neural network has good performance and strong non-linear mapping ability. Without prior knowledge of the distribution types of the variables or the correlations among them, it can handle dynamic situations through function approximation, data compression and data fitting, finding the mapping relationship between input signals and output signals [10]. The BP neural network is a typical multi-layer feed-forward neural network trained by the error back-propagation algorithm. Its training is supervised, and the activation function of its neurons is the S-shaped (sigmoid) function, which improves the accuracy of training and effectively meets practical application needs [11, 12]. More than 30 artificial neural network models have been proposed, such as the Hopfield network, the Fukushima network and the BP network, among which the BP network is the most widely used. Research on the application of BP neural networks in communications (signal processing, pattern recognition, etc.), electronics (automatic control, fault diagnosis, etc.), banking (customer recognition, credit risk prediction, etc.), electronic commerce (customer value classification, sales forecasting, knowledge management, etc.), port freight (operational risk prevention and control) and other industries has made good progress, providing technical and decision support for enterprise operation management, risk management and performance optimization.
(1) Implementation steps of the algorithm. Let the input vector be X = (x1, x2, ..., xn), the hidden-layer input vector be I = (I1, I2, ..., Im), and the hidden-layer output vector be O = (O1, O2, ..., Om). Let wij denote the connection weight between input-layer neuron j and hidden-layer neuron i, vi the connection weight between hidden-layer neuron i and the output neuron, and θi the threshold of hidden-layer neuron i (the initial weights and thresholds of the network are given arbitrarily); tp is the desired output, yp is the actual output of the network, and p indexes the samples. Calculate the input value of each hidden-layer neuron:

Ii = Σj=1..n wij xj + θi (1)

Calculate the output of each hidden-layer neuron according to the excitation function (the sigmoid function, the nonlinear activation function chosen in this paper):

Oi = 1 / (1 + exp(−Ii)) (2)

Calculate the output of the neural network according to the excitation function (a linear function is selected in this paper):

y = Σi=1..m vi Oi (3)

Use the error function to calculate the mean square error of the network:

E(w) = (1/2) Σp (tp − yp)^2 (4)
The error terms of each neuron in the output layer and the hidden layer and the correction values of each connection weight are then calculated, and the connection weights between neurons in each layer are adjusted accordingly.
(2) Neural network construction. The neural network toolbox in MATLAB is used to construct the pattern recognition neural network. First, the input of the pattern recognition algorithm is defined as {age, whether there are children, whether there are students in school, whether there are dropouts, whether the members are healthy, whether the members are disabled, whether there is a labor force, the attributes of poor households, the causes of poverty}, and the recognition target is {help policy}. Next, the input data and output data are used as the input neurons and output neurons of the network, respectively. The third step is verification and testing after training; in this process, all data must undergo the same pre-processing operations, and testing must be carried out within the same effective range. Finally, the training data are chosen to account for 60% of all sample data, another 10% serve as the convergence criterion for the whole training process, and the remaining 30% are used as test data. Because the training data and the test data are different, the validity of the final algorithm's test is guaranteed.
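As a minimal, self-contained illustration of the BP network described by Eqs. (1)-(4), a sigmoid hidden layer, a linear output and a squared-error objective, the NumPy sketch below trains on a tiny invented dataset; it is not the MATLAB pattern-recognition network used in the paper, and all dimensions and data are assumptions.

```python
# Minimal NumPy sketch of the BP network in Eqs. (1)-(4): sigmoid hidden layer,
# linear output, squared-error objective, gradient-descent weight updates.
# The toy data below are invented; this is not the paper's MATLAB model.
import numpy as np

rng = np.random.default_rng(0)
X = rng.random((20, 9))          # 20 samples, 9 input attributes
t = rng.random((20, 1))          # desired outputs

n_in, n_hidden, n_out = 9, 6, 1
W = rng.normal(scale=0.5, size=(n_in, n_hidden))    # input-to-hidden weights w_ij
theta = np.zeros(n_hidden)                          # hidden-layer thresholds
V = rng.normal(scale=0.5, size=(n_hidden, n_out))   # hidden-to-output weights v_i

lr = 0.05
for epoch in range(2000):
    I = X @ W + theta                 # Eq. (1): hidden-layer inputs
    O = 1.0 / (1.0 + np.exp(-I))      # Eq. (2): sigmoid activation
    y = O @ V                         # Eq. (3): linear output
    err = y - t
    E = 0.5 * np.mean(np.sum(err ** 2, axis=1))   # Eq. (4), averaged over samples

    # Back-propagated gradients and weight corrections
    grad_V = O.T @ err / len(X)
    delta_hidden = (err @ V.T) * O * (1.0 - O)
    grad_W = X.T @ delta_hidden / len(X)
    V -= lr * grad_V
    W -= lr * grad_W
    theta -= lr * delta_hidden.mean(axis=0)

print("final error:", E)
```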
3 Simulation Experiment

3.1 Data Sources

The research data in this paper are provided by the poverty alleviation and development team and are derived from the business management subsystem of the national poverty alleviation and development information system.

3.2 Model Error

As shown in Table 1, the accuracy on the validation data and the test data was calculated by comparing the predicted type of assistance policy output by the neural network model with the actual type of assistance policy.
Table 1. Correctness rate

                  Totality   Correct number   Wrong number   Correct rate
Validation data   100        93               7              93%
Test data         300        249              51             83%
It can be seen from the accuracy of the output results that, when the network performance reaches its optimal state, the accuracy on the validation data reaches 93%, while the accuracy on the test data does not exceed 90%. This is not because the training effect is poor or the algorithm itself is defective; rather, there may be several special reasons in the samples. One is that the assistance policy actually selected is not itself the best recommendation, and the intelligent recommendation obtained through machine learning is more appropriate than the policy actually chosen. The other is that the attributes of the poor households do not describe the lives of the poor families comprehensively and in detail, so the intelligent policy recommendation based on machine learning is also appropriate given the existing information about the households; that is, both the actual assistance policy and the intelligently recommended policy are applicable to some poor families. Therefore, when calculating the accuracy of the algorithm in this paper, these two situations must be included in the calculation of the overall accuracy.
4 Evaluation and Analysis of the Intelligent Recommendation Model
4.1 Model Evaluation
Fig. 1. Confusion matrix of the test data (actual class vs. predicted class, classes 1–4)
As shown in Fig. 1, this paper adopts a confusion-matrix evaluation model analogous to the one used for binary classification. Since the output of the classification has four classes, the confusion matrix is a 4 * 4 matrix: its diagonal entries indicate where the actual classification and the predicted classification match, and each entry expresses the proportion of every actual class assigned to each predicted class.
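A minimal sketch of how such a 4 * 4 confusion matrix and the per-class proportions can be computed; the class labels and sample data below are placeholders, not the study's data.

```python
import numpy as np
from sklearn.metrics import confusion_matrix

# Hypothetical actual and predicted assistance-policy classes (1..4).
actual    = np.array([1, 2, 3, 4, 1, 2, 3, 3, 4, 1])
predicted = np.array([1, 2, 1, 4, 1, 2, 3, 1, 4, 2])

cm = confusion_matrix(actual, predicted, labels=[1, 2, 3, 4])
# Normalise each row so entries give the share of each actual class
# assigned to each predicted class, as read from Fig. 1.
ratios = cm / cm.sum(axis=1, keepdims=True)
print(cm)
print(np.round(ratios, 2))
```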
4.2 Model Analysis
Fig. 2. Prediction class accuracy distribution histogram (four panels, Class 1 to Class 4; each shows the percentage of the validation data and the test data assigned to each predicted class)
As shown in Fig. 2, for the validation data the match between actual classes and predicted classes is generally high, indicating that the classification effect of the network is close to optimal. In the test data, however, actual class 3, the education student-support policy, has a low matching accuracy with the predicted class. The reason is that schooling is rarely recorded as the main cause of poverty: most households receiving education aid are in fact households with sick members or with only part of their labor force available. In today's society of universal education, and with family planning generally limiting families to no more than three children, schooling is seldom the most important recorded cause of poverty. As a result, many poor households whose actual class is 3 are not matched by the intelligent recommendation results of the network. This situation requires local staff to conduct household checks and surveys to confirm the final recommended policy.
5 Conclusions

The use of big data and artificial intelligence can provide technical support and methods for targeted poverty alleviation that were previously unavailable, make targeted poverty alleviation more accurate, improve the living standards of the poor, help them increase their income and become prosperous, set an example for the international community, and provide effective solutions to the poverty problems of other countries. Using the MATLAB neural network pattern recognition toolbox, this paper predicts the assistance policy for poor families from their information items and analyzes the actual anti-poverty policies in detail. The experimental results show that the neural network pattern recognition algorithm achieves high accuracy in matching poor-household information with assistance policies. It is helpful for relevant government departments and poverty alleviation workers in recommending policies for poor households that have not yet been lifted out of poverty, and it can provide a reference basis for grassroots staff to improve the efficiency of poverty alleviation work.
References

1. Gao, Y.: Technology contributes to poverty alleviation in Hainan. China Today 67(12), 70–72 (2018)
2. Mo, G., Zhang, Y., Min, L., et al.: New approaches to targeted poverty alleviation in the age of big data: on improving the results of targeted poverty alleviation programs. Contemp. Soc. Sci. 11(03), 68–80 (2018)
3. Lan, X.: Internet plus helps with poverty alleviation. Beijing Rev. 60(32), 50 (2017)
4. Wang, Y., Qi, W.: Multidimensional spatiotemporal evolution detection on China's rural poverty alleviation. J. Geogr. Syst. 23(1), 63–96 (2021)
5. Yang, X., Huang, F.: Path analysis of forest carbon sequestration on poverty alleviation papermaking company innovation based on big data analysis. Paper Asia 35(1), 28–32 (2019)
6. Nie, Y., Zhang, Y.: Research on financial precision poverty alleviation in Hebei based on big data technology support. Revista de la Facultad de Ingenieria 32(9), 466–471 (2017)
7. Song, J.: Value, characteristics and innovation direction of poverty alleviation by enterprises in the era of mass entrepreneurship and innovation. Asian Agric. Res. 10(10), 15–23 (2018)
8. Fan, B.: Design and analysis of a rural accurate poverty alleviation platform based on big data. Intell. Autom. Soft Comput. 26(3), 549–555 (2020)
9. Lu, H., Setiono, R., Liu, H.: Effective data mining using neural networks. IEEE Trans. Knowl. Data Eng. 8(6), 957–961 (2016)
10. Ganin, Y., Ustinova, E., Ajakan, H., et al.: Domain-adversarial training of neural networks. J. Mach. Learn. Res. 17(1), 2096–2030 (2017)
11. Peng, Z., Dan, W., Wang, J.: Neural network-based adaptive dynamic surface control for a class of uncertain nonlinear systems in strict-feedback form. IEEE Trans. Neural Netw. 28(9), 2156–2167 (2017)
12. Goh, A.T.C.: Seismic liquefaction potential assessed by neural networks. Environ. Earth Sci. 76(9), 1467–1480 (2017)
The Intelligent Service Mode of University Library Based on Internet

Yan Zhang(B)

Jilin Engineering Normal University, Changchun, Jilin, China
Abstract. With the widespread application of big data, library services play an important role in college education and talent training, but some problems remain. This article mainly explores the service mode of university libraries under the current big data background. First, it discusses the current situation and shortcomings of library services in the big data environment from the perspectives of document retrieval and information resource management; it then proposes a new service model for university readers in the era of big data: establishing a library service quality management platform and using big data technology to improve personalized services. In addition, this article investigates the service quality of a university library through a questionnaire experiment, analyzes readers' evaluation of various aspects of library services in terms of service qualification and service excellence, and finds that the library's service quality is at a low level and urgently needs to be improved. Keywords: University library · Big data · Service model · Innovation
1 Introduction

With the rapid development of big data technology and the application and popularization of mobile Internet technology, the channels and methods by which readers obtain information have become more and more diversified, and the scale of the data obtained has become larger. According to the report of the 13th National Citizens Reading Survey, the proportion of the public using new media tools such as smartphones and computers to obtain information has exceeded 50%. From the survey report we can learn that, in the field of information services, libraries are no longer the only or the most important organization through which the public obtains information resources, and their status in the information service industry has been severely impacted. The university library is an important place for students to acquire knowledge and study. Doing a good job in the modern information construction of the library plays a huge role in improving and optimizing library services and in raising students' enthusiasm for learning. Today, when decisions are increasingly made with data, big data technology is the direction of innovation for the reform of university library services.
Li S, Hao Z, Ding L and others put forward and analyzed the concepts, content and interrelationships of the three most important modern big data information technologies, thus
completing an analysis of the status quo of the application of information technology in Chinese digital libraries. They pointed out that blockchain can achieve more accurate information collection, safer information storage and more effective information dissemination; that artificial intelligence can improve the service level of existing digital libraries in three respects (resource construction, information organization and information services); and that "Internet +" will help transform the traditional digital library business model to adapt to user-centric services [1]. Gong R, Yu K, Tang H and others studied the innovative services and relationship quality of university libraries based on statistical education under big data. Through questionnaire surveys, they concluded that statistical education affects innovative services, that innovative services affect relationship quality, and that statistical education has a significant positive effect on relationship quality. They also put forward suggestions based on these results, hoping to help libraries continuously create innovative management and services, encourage users to use library resources, and integrate systems to optimize the library management process [2]. Yin X, Zhang G, Ji X and others proposed and implemented a big-data-based digital library cloud service platform using OpenStack and OSGi technology, which realizes dynamic and unified service management and dynamic service combination, as well as physical resource scheduling and management and multi-user management services [3]. As the government and university administrative departments pay more attention to local university libraries, the service quality of local university libraries is continuously improving, but there are still many problems that need attention. The traditional reader service model suffers from backward service content and insufficient resource and information management capabilities [4]. To achieve efficient and practical service goals, innovation is needed. This article starts from the research and analysis of effective reform measures for university libraries in the big data environment and gives suggestions for reference, in the hope of enhancing readers' interest in reading and of combining a sustainable development strategy with technological innovation practice to make the library more useful.
2 Research on the Service Mode of University Library Under Big Data

2.1 Analysis of the Status Quo of Big Data Application Services in University Libraries

Educational resources and scientific research resources in universities continue to grow rapidly. The direction in which the management systems of university digital libraries are established determines whether the various resources of a university can be managed more comprehensively and efficiently and whether readers can be provided with faster and more convenient query and download services. The main problems faced are as follows:

(1) Differences in service targets. Resource sharing is emphasized, but the service objects of university digital libraries generally include only the readers of their own institution, and there is a certain gap with local universities in terms of openness and sharing [5].
(2) The conversion and integration of heterogeneous data. How can existing data with different structures be transformed and integrated, automatically or manually, to realize the co-construction, co-management and sharing of data [6]?
(3) Storage problems. The scale of electronic resources and data is getting larger and larger, and the storage and security of small files are becoming more and more prominent. For example, e-book resources are characterized by large storage scale, long storage periods and uneven access frequency [7].
(4) Retrieval efficiency. The growth of resources slows down retrieval, the retrieval system is relatively simple, and many resources cannot be retrieved. The search engine of a university digital library seldom considers differences between readers: no matter what kind of reader searches for the same keywords, the results are the same, lacking personalization. At the same time, the system lacks intelligence, fails to respond effectively to readers' problems, and cannot re-cluster and combine the search results according to readers' needs. In addition, data are updated relatively slowly and cannot be updated dynamically in real time, which cannot guarantee the safety, efficiency, reliability and economy of the retrieval engine [8].
(5) The personalized recommendation system. The intelligent recommendation system is not perfect enough, and current university digital libraries cannot meet readers' individual needs well. With the increasing digitization and networking of university library resources, the scope of information readers use continues to expand, and the information available for selection is becoming more and more complex. At the same time, the information readers require increasingly tends to be personalized, which is a new requirement that readers put forward to university libraries under the new situation [9].
2.2 The Innovation Countermeasures of University Library Service Mode Under Big Data

Establish a Library Service Quality Management System. Local university libraries should start from the actual situation of the library, draw on the successful experience of domestic and foreign libraries, establish a quality management system in line with their own conditions, change traditional library management concepts, and truly establish a reader-centered, service-oriented approach. Taking the improvement of library service quality as a goal, the library should pre-select a reference target, actively conduct library-related investigations, and understand readers' needs through observation, reader exchange meetings, new book recommendation meetings, and statistics on borrowing. It should discover problems in library services, explore their causes, and formulate plans to improve service quality according to readers' needs and the existing problems. It should set up relevant leading groups, train librarians, implement innovative plans to improve librarians' quality and competence, mobilize the enthusiasm of employees, build efficient teams, break down the original inter-departmental barriers, and unite all departments to jointly solve the service quality problems within or between departments reported by readers, always adhering to the principle of serving readers in the process. In accordance with the pre-designated service quality improvement plan, the improvement effect should be checked, for example the upgrading and acceptance of library hardware facilities and whether the new book purchase plan has been implemented, so that problems in implementing the plan are discovered in time, experience is summed up, and a good foundation is laid for the next step of the work. In response to new problems arising during improvement, targeted improvements should continue to be made and better plans proposed, so that service quality keeps improving, a virtuous circle is formed, and library service quality reaches a new level [10–12].

Strengthen the Construction of Personalized Services. Personalized service combines traditional information services with the characteristics of readers' needs in the Internet environment to ultimately provide readers with more differentiated and personalized services. It is an innovative, proactive service oriented to readers' needs. Providing readers with a more personalized environment to meet their information needs is the core of information services as they develop both vertically and horizontally. Personalized service mainly takes three forms: personalized page customization, information push, and academic information navigation. The information push service applies push technology to deliver more personalized information to users; for example, by analyzing readers' borrowing and browsing information, establishing association rules for the characteristics of readers' behavior, and building a reader information database, a recommendation service oriented to readers' needs can be realized, turning the original passive provision of information into active provision that better meets readers' demand for information. This kind of service also includes general personalized information services, a service mode that provides personalized information to users on the basis of integrated information resources.

Strengthen Policy Support and Improve Service Level. The basic conditions of local university libraries are to a certain extent restricted by the level at which their universities are run. Key universities directly under ministries and commissions are usually high-level and highly positioned; they follow the country's drive toward world-class disciplines and world-class universities, the development of their libraries closely follows the development of the universities, and they thus obtain more financial support. A good development platform and financial support also make it easier for such libraries to attract high-level professional talents. However, local colleges and universities are mostly positioned as teaching-oriented or application-oriented universities and receive relatively little funding support. Relatively small funding and relatively low development platforms have weakened the attractiveness of local university libraries to high-level talents. The development level of these libraries is therefore relatively low, so the government should increase funding and policy support for local universities.
3 Survey on the Service Quality of University Libraries Under Big Data

3.1 Experimental Content

Whether the service quality evaluation system of local university libraries can truly reflect the feelings and expectations of readers needs on-site evaluation and analysis. This study selected a university library in our city for a service quality survey and evaluation. The library provides teaching, scientific research, and literature resource services for the teachers and students of the school, who are its main service objects; therefore, the survey respondents selected in this article are mainly current teachers and students of the school. The survey was conducted with a questionnaire whose content covers readers' evaluation of various aspects of the school library's services. For objective questions, different points are assigned to the different options; subjective questions ask readers to comment on the school library's services.

3.2 Experimental Process

The survey combined online and offline methods: questionnaires were distributed by the investigators in the library classrooms and reading rooms, or were filled out online. A total of 437 questionnaires were distributed, and 389 valid questionnaires were returned, a response rate of 89%. The survey lasted 5 days, after which the collected questionnaires were compiled and analyzed.

3.3 Calculation Formula

The reader satisfaction of each individual item is calculated as

S_j = \frac{1}{n} \sum_{i=1}^{n} S_i    (1)

where n is the number of questionnaires returned and S_i is the evaluation score on the i-th questionnaire. The comprehensive reader satisfaction is then calculated from the individual item satisfactions:

S = \sum_{j} \lambda_j S_j    (2)

where S_j is the reader satisfaction of the j-th item and \lambda_j is the weighting coefficient of the j-th item.
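As a rough illustration of formulas (1) and (2), the following sketch computes per-item satisfaction and the weighted overall satisfaction; the scores and weights are invented for the example, not the survey data.

```python
import numpy as np

# Each column is one evaluation item, each row one returned questionnaire
# (placeholder scores on a 5-point scale).
scores = np.array([
    [4, 3, 5, 4],
    [3, 4, 4, 5],
    [5, 3, 4, 4],
])
weights = np.array([0.3, 0.2, 0.3, 0.2])   # lambda_j, assumed to sum to 1

item_satisfaction = scores.mean(axis=0)    # Formula (1): S_j for each item
overall = item_satisfaction @ weights      # Formula (2): comprehensive S
print(item_satisfaction, overall)
```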
Table 1. Readers’ minimum acceptable value of each evaluation index, the average value of actual feeling value and ideal expected value Acceptable lowest value
Actual experience value
Ideal expectation value
Librarian team
3.16
3.36
4.65
Facility environment
3.54
3.13
4.8
Collection resources
3.53
3.6
4.79
Service effect
3.26
3.39
4.85
4 Analysis of the Survey Results of University Library Service Quality Under Big Data

4.1 Analysis of Readers' Expectations and Actual Feelings of Library Services

(1) Service qualification analysis. Service qualification = actual experience value - lowest acceptable value; it measures the degree to which readers' perceived service quality exceeds the minimum they find acceptable. A positive value means the service is qualified; a negative value means readers' actual experience falls below the minimum acceptable service quality and requires immediate improvement. As shown in Fig. 1, there are still many negative qualification values, indicating that the school's service construction in the related aspects urgently needs to be strengthened (Table 1).
(2) Analysis of service excellence. Service excellence = actual experience value - ideal expectation value. According to reader satisfaction theory, reader satisfaction = perceived service effect - reader expectation, so in this article service excellence equals reader satisfaction. Service excellence measures the degree to which readers' perceived service quality exceeds their ideal expectation: a positive value would mean that the service readers experience exceeds the quality they hope for. In this evaluation, however, all values are negative, showing that readers consider the library's service quality below their expectations and have high hopes for its improvement; the smaller the service excellence, the larger the gap with readers' expectations. It can be seen from Fig. 2 that readers have great expectations of the library's services. Readers' demand regarding the library's internal infrastructure is particularly urgent: the per-capita building area and flexible configuration of the library can no longer meet their needs, and readers' requirements for the professionalism of librarians are constantly increasing. The library should strengthen on-the-job professional training for its staff and provide readers with more professional services.
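Using the four index means from Table 1, the two gap measures defined above can be computed directly; this is a small illustrative calculation, not additional survey output.

```python
indices = {
    # index: (lowest acceptable, actual experience, ideal expectation)
    "Librarian team":       (3.16, 3.36, 4.65),
    "Facility environment": (3.54, 3.13, 4.80),
    "Collection resources": (3.53, 3.60, 4.79),
    "Service effect":       (3.26, 3.39, 4.85),
}

for name, (lowest, actual, ideal) in indices.items():
    qualification = actual - lowest   # positive: at least acceptable
    excellence = actual - ideal       # negative: below readers' ideal expectation
    print(f"{name}: qualification={qualification:+.2f}, excellence={excellence:+.2f}")
```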
Fig. 1. Qualification distribution map (difference between actual experience and lowest acceptable value for each questionnaire item)
Fig. 2. Excellence distribution map (difference between actual experience and ideal expectation for each questionnaire item; all values are negative)
5 Conclusion

Libraries have always been an important base for the dissemination of cultural knowledge and carry the important social value of serving readers and users. University libraries in particular, as institutions that support teaching and scientific research, carry the responsibilities of teaching, educating and serving. With the in-depth development of big data technology, the construction of intelligent library services will become ever more deeply rooted in people's minds. Although university libraries are affected by many technical factors, exploring big-data-based university library service models has theoretical, academic and practical value. It is imperative for colleges and universities to build informatized and intelligent libraries and to create tailor-made, personalized and intelligent services for readers and users.
References

1. Li, S., Hao, Z., Ding, L., et al.: Research on the application of information technology of Big Data in Chinese digital library. Libr. Manag. 40(8/9), 518–531 (2019)
2. Gong, R., Yu, K., Tang, H.: Based on statistical education to study innovative service and relationship quality of university library under big data. Eurasia J. Math. Sci. Technol. Educ. 14(6), 2419–2425 (2018)
3. Yin, X., Zhang, G., Ji, X., et al.: Design and implementation of a big-data-based university library cloud service platform. C e Ca 42(6), 2463–2468 (2017)
4. Hao, W.: Personalized information service system of a library under the big data environment. Agro Food Ind. Hi Tech 28(1), 1701–1704 (2017)
5. Kumar, N., Priyadarsini, P.: Revealing library statistics with big data expertise: a review. Int. J. Pharm. Technol. 8(4), 20783–20789 (2016)
6. Wei, Q., Yang, Y.: WeChat Library: a new mode of mobile library service. Electron. Libr. 35(1), 198–208 (2017)
7. Town, S., Bracke, P.: Social networks and relational capital in library service assessment. Perform. Meas. Met. 17(2), 134–141 (2016)
8. Omeluzor, S.U., Oyovwe-Tinuoye, G.O.: Assessing the adoption and use of integrated library systems (ILS) for library service provision in academic libraries in Edo and Delta states, Nigeria. Libr. Rev. 65(8–9), 578–592 (2016)
9. Keisling, B.L., Sproles, C.: Reviewing and reforming library service points: lessons in review and planning services, building layout, and organisational culture. Libr. Manag. 38(8–9), 00–00 (2017)
10. Kouis, D., Agiorgitis, G.: Library service platforms (LSPs) characteristics classification and importance ranking through Delphi method application. Int. Inf. Libr. Rev. 4, 1–15 (2020)
11. Kumar, A., Mahajan, P.: Evaluating library service quality of University of Kashmir: a LibQUAL+ survey. Perform. Meas. Met. 20(1), 60–71 (2019)
12. Fard, M., Ishihara, T., Inooka, H.: The Japan society of mechanical engineers NII-electronic library service. JSME Int. J. Ser. C 46(1), 116–122 (2019)
Analysis on the Current Situation of Intelligent Informatization Construction in University Library

Min Zhang(B)

Jilin Engineering Normal University, Changchun, Jilin, China
Abstract. With the development of informatization, information resources are playing an increasingly important role in today's society, and the construction of information resources has become very urgent and necessary. University library information resources are an important part of information resources, so it is also very necessary to study the current situation of university library information construction. This article first reviews current domestic and foreign research on the information construction of university libraries, then analyzes the problems existing in the information resource construction of Chinese university libraries and proposes improvement measures. Finally, it contrasts the annual funding of four university libraries in China and analyzes college students' evaluation of the library environment. The results show that the annual funding of university libraries could be allocated more reasonably and that there is still room for improvement in the environment of university libraries. Keywords: Information resource · University library · Information resource construction · Annual funding
1 Introduction

As human society enters the information age, information resources are playing an increasingly important role in economic and social development and have become the basic resources of today's society [1]. The importance of developing and utilizing information resources lies in the continuous use of modern information technology to equip the various sectors of the national economy and society, which can effectively reduce material and energy consumption and expand their role. As a result, the productivity of social work is greatly increased, contributing to the sustainable development of the national economy. University library information is a part of university information resources. Owing to the characteristics of disciplinary development and the dependence of teachers and students on information, its integration and effective use have become an indispensable part of the construction of university information resources [2]. As the storage center of university educational resources, the library is a bridge connecting users and information resources. One of the functions of the library is to meet the differentiated
information needs of users and to provide an information guarantee for the learning and academic research of teachers and students. University libraries need to know users' needs in their service model and to integrate and process various information resources so that those resources can add value. Libraries can add value to information in many ways. The library should establish a standardized system to screen and evaluate educational resources and choose a scientific, reasonable method of organizing resources to ensure that users can find them in a timely and effective manner [3]. The Chinese scholar Zhao Hongbo and others believe that, with the popularization of the Internet, every industry has been exploring the direction of informatization as the information age arrives. As a place that undertakes the important tasks of organizing, storing and transmitting teaching and scientific literature resources, the university library is not only the key for college students to obtain rich learning resources but also, under the background of the Internet, a main position for social spiritual civilization and cultural communication services. University libraries should fully integrate the development characteristics of the information age, strengthen information construction, and innovate development paths to drive the sustainable development of their two functions of education and information service [4]. Yang Shuqin believes that the construction and service of information resources has always been the core and key of university library work: the optimization and integration of resources is the prerequisite and basis for the library to provide information services for readers, and giving full play to the advantages of resources to provide users with knowledge services is always the goal and pursuit of university libraries, as well as the basic purpose and core task of the library [5]. Jiang Mei believes that this is the era of information technology, and the ways of disseminating information through computer networks are becoming more and more extensive; as the hub of talent training, universities should also do a good job in the construction of library information services and innovative service models, strive to cultivate more outstanding talents for society, and make university services more modern [6]. In theory, the academic information resources provided by university libraries are high-level, high-quality academic resources that have been strictly screened by relevant staff and information professionals, so they should become the preferred source of information for professional scholars [7]. But that is not the case. Currently, more than 80% of college students and researchers still choose traditional popular search engine websites instead of specialized academic websites when searching for relevant academic information. The main reasons are that the information organization of academic websites is relatively rigid and bound to a specific set of specifications; the retrieval interface is not friendly enough, and the system design is over-professionalized for the sake of professionalism; and the retrieval skills required emphasize specialist knowledge, which does not help users become proficient over time.
In other words, the academic and rigorous nature of university library information websites has increased the difficulty of use, and lacking the corresponding retrieval skills, many users simply stop using them. All of this has led to low usage and click-through rates for current university library information websites, preventing the library information portal from playing its due role and restricting the development of university library information websites.
2 Analysis on the Current Situation of Information Construction in University Library

2.1 The Purpose of Research

This article mainly focuses on the current situation of domestic university library information construction. Through in-depth analysis of its characteristics, it seeks to understand the status quo of information construction in Chinese university libraries and the problems existing in the construction process, and it proposes measures to promote improvement [8]. The purpose of this article is to study empirical theories that can help the information construction of university libraries. In terms of academic significance, this article mainly analyzes the current situation of the information construction of academic libraries at home and abroad and conducts an in-depth analysis of high-quality research results. It studies in detail the current situation of the information construction of domestic university libraries, identifies its characteristics and problems, finds the shortcomings and the direction of improvement and development, and proposes strategies for improving the information resource construction of university libraries.

2.2 Research Status at Home and Abroad

From an international point of view, developed countries have taken the lead in constructing information resource guarantee systems and in developing and utilizing university libraries. A typical example is the United States, the most scientifically, technologically and economically developed country in the world. As early as the last century, the United States paid great attention to the accumulation and allocation of information resources in academic libraries, and it has a huge information resource guarantee system for them. Building on the advantages of its university library information resources, the United States has promoted the development of higher education by leaps and bounds and become a world-recognized power in higher education [9]. In addition, all powerful countries have strong advantages in the information resources of their university libraries, and the experience of these developed countries is worth considering and drawing on in the course of our development. At present, the information resources of university libraries around the world tend to be electronic, networked and three-dimensional; traditional paper-based resources are being replaced by advanced digital information resources [10]. For various reasons, the information resources of academic libraries in our country have been in a relatively backward state. At present, the information resource systems of our academic libraries are mainly based on paper resources, while network information resources are in the initial stage of construction. The overall construction of the information resources of our university libraries is in a stage of continuous enrichment and improvement. The reality is that China currently lacks a distinctive higher-education database with a certain international standing. The allocation of
information resources in our university libraries is still not systematic, scientific or perfect enough, and there is no dedicated, authoritative leading organization doing comprehensive work on the construction of the information resource guarantee system of university libraries.

2.3 Problems Existing in the Construction of Information Resources in Chinese Academic Libraries

In the construction of library information resources in China, investment in library funds is uneven. For example, in 2020 the gap between the total library funds of Sun Yat-sen University and Hunan University reached an astonishing 100 million yuan or more, and these are both libraries of domestic key universities; one can imagine how large the gap is between ordinary universities and key universities [11]. There is also the problem that investment in digital information resources is relatively small: university libraries still hold mostly paper-based information resources, and the proportion of digital resources is extremely low. Moreover, the level of electronic networking of university libraries is not high. At the level of information resource retrieval, readers have only a general grasp of retrieval methods, which restricts students and teachers from using the information resources of our university libraries. In addition, statistics on the utilization of electronic resources show that teachers and students use them at a low rate, which also reflects, to a certain extent, readers' low level of information retrieval skill [12]. The quality of the information environment directly affects the satisfaction of teachers' and students' information needs and the utilization of information resources. The information environment of most university libraries in our country is ordinary and can satisfy only part of the information needs of some teachers and students, and the readers' information environment is relatively poor.
3 Improvement Measures for the Information Construction of University Library

3.1 Change Mode and Rational Layout

In our country, the construction of information resources in university libraries is based on a project-by-project model, and its shortcomings are obvious. For example, library funds are fixed, the continuous maintenance and development of the library cannot be effectively ensured, and the information construction of different university libraries is not well coordinated or compatible. In addition, the projects of each school are generally independent, and it is difficult for the relevant departments to plan related projects in a reasonable and orderly manner. This has led to repeated construction in the informatization of many university libraries, which wastes human, financial and material resources and is not conducive to the scientific distribution of domestic higher-education resources. Therefore,
we must gradually change this traditional and unreasonable construction model. In the process of change, the relevant national education and cultural departments need to take the lead in rationally arranging and coordinating the information construction of all domestic colleges and universities, and in guiding and organizing cooperation and shared construction among them. The overall situation of the construction must be grasped as a whole, and the reasonable, scientific distribution of university library information resources across the country must be coordinated, so that the information construction of university libraries can meet the needs of teachers and students and every library built has high-quality educational information resources.

3.2 Unified Construction Standards

Comparing the status quo of the information construction of academic libraries at home and abroad, it is not difficult to find that the current information systems of academic libraries in our country have not formed unified standards for either the amount or the types of resources. In the absence of corresponding norms and standards, the information construction of university libraries is self-contained, and it is difficult to realize the sharing of resources, both now and in the future. Therefore, the relevant education and cultural departments must form a unified standard for university library information construction and ensure that the normative standards are operable and implementable. Relevant institutions can refer to the relatively mature classification technologies and research results already developed. The information construction of university libraries should take higher-education resources into account and combine currently mature and commonly used classification technologies to construct a scientific, reasonable and easy-to-use classification system. We can learn from advanced foreign experience: when classifying educational resources, take the traditional classification system as the basis and, building on the traditional classification method, formulate a feasible scheme for network resources according to their own characteristics. Formulating an authoritative classification system standard also requires the guidance of the relevant departments and cannot be done independently; otherwise there is no guarantee system for implementing the standard, and it will be difficult to maintain in practice over the long term.

3.3 Optimize Search Function

On the whole, the search function in the information construction of domestic university libraries is imperfect, so it should be optimized. Keyword-based search should be provided, with the system automatically searching or assisting in the selection of search terms; keyword-based retrieval can improve the precision and completeness of users' retrieval and is also an important means of realizing intelligent retrieval. Some well-known foreign university library information systems generally use mature search vocabularies to index and organize information and provide vocabulary-based retrieval, which can effectively realize
the intelligent and integrated retrieval of university library information systems. Domestically, this foreign experience can be used for reference, and existing mature keyword vocabularies can be used to index information. Beyond providing traditional search functions in the portal, the information construction of university libraries should also consider the extensive information distributed on the Internet and emphasize the development of a public retrieval function for online resources, for the convenience of users. When users search for resources in the library information system, they may also want information from the Internet to strengthen their understanding; if the library information system does not provide search tools for network resources, they need to log in to the Internet separately to search for relevant information, and switching between different systems is inconvenient and wastes their time. Therefore, the university library information system should provide users with convenient and practical retrieval tools for Internet resources. A toy illustration of keyword-based indexing is sketched below.
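The following sketch only illustrates the general idea of keyword-based indexing and retrieval described above; the records and keywords are invented, and it is not the implementation of any particular library system.

```python
from collections import defaultdict

# Hypothetical catalogue records: record id -> indexed keywords.
records = {
    1: ["data", "mining", "library"],
    2: ["information", "retrieval", "library"],
    3: ["big", "data", "service"],
}

# Build an inverted index: keyword -> ids of the records containing it.
index = defaultdict(set)
for rec_id, keywords in records.items():
    for kw in keywords:
        index[kw].add(rec_id)

def search(*terms):
    """Return ids of records containing every query keyword."""
    sets = [index.get(t, set()) for t in terms]
    return set.intersection(*sets) if sets else set()

print(search("data", "library"))   # {1}
```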
4 Analysis of the Data Results of University Library Information Construction

4.1 A Comparative Analysis of the Annual Funds of Four University Libraries in Jiangxi Province in 2020

Table 1. Comparison of annual total funds of university libraries

School                             Total annual library funding in 2020 (10,000 yuan)
Nanchang University                2545.3
Jiangxi Normal University          1855.5
Nanchang Aeronautical University   1279.8
Jiujiang University                706.5
According to Table 1 and Fig. 1, the highest annual library budget in 2020 is Nanchang University's 25.453 million yuan, followed by Jiangxi Normal University with 18.555 million yuan and Nanchang Aeronautical University with 12.798 million yuan, while Jiujiang University has only 7.065 million yuan. Nanchang University is a 211 key university, Jiangxi Normal University and Nanchang Aeronautical University are first-tier schools, and Jiujiang University is a second-tier school. Therefore, in order to better develop the information construction of university libraries, the state must increase the budget allocation for the libraries of ordinary universities.
Fig. 1. Comparison of annual total funding of university libraries (total annual expenditure in 10,000 yuan for the four universities listed in Table 1)
4.2 Analysis on the Evaluation of University Library Environment by University Students
Table 2. Evaluation of the library environment by college students

Student evaluation     Percentage
Very satisfied         23%
Relatively satisfied   46%
Generally satisfied    22%
Dissatisfied           9%
According to Table 2 and Fig. 2, 23% of college students rated the library environment as very satisfactory, 46% as relatively satisfactory, 22% as generally satisfactory, and 9% as unsatisfactory. Although the 9% who were dissatisfied is not a large proportion, it should not be ignored. Only 23% of the students were very satisfied, and most gave only a relatively satisfactory evaluation. This shows that the information construction of our country's university libraries has not yet reached a very good level, so the improvement of the university library environment must be stepped up to provide a better learning environment for teachers and students.
Fig. 2. Evaluation of the library environment by college students (very satisfied 23%, relatively satisfied 46%, generally satisfied 22%, dissatisfied 9%)
5 Conclusions

With the advent of the global information age, the country has incorporated information resources into its strategic resources, and the construction of information resources has become an important part of building the national economy. The development of information technology is inseparable from information resources; whether in the development of information technology or of the national economy, information resources play a very important role. Moreover, information resources have a huge impact on people's cultural quality and innovation capability. National information resources include university information resources, and because of the characteristics of discipline development and the information needs of teachers and students, the integration and effective use of information has become a necessary part of resource construction. The survey results show that there are still many areas to be improved in our country's university library undertakings as a whole: the unreasonable allocation of libraries' annual total funding is not conducive to the development of university library information construction, and the environment of university libraries still has much room for improvement.
References

1. Zhao, Y., Ma, X., Zhang, M.: Research on the service quality improvement of Tibetan university libraries introducing information think tanks. Int. Public Relat. 92(08), 140–141 (2019)
2. Qing, C., Hui, L.: Construction of emergency service information platform in university library. Chinese J. Med. Libr. Inf. 29(08), 15–19 (2020)
3. Juan, S.: The practical obstacles and improvement strategies of the informatization construction of university libraries. Nongjia Staff 607(02), 185 (2019)
4. Zhao, H., Luo, L., Wang, Y.: Analysis of the information construction of university libraries under the internet background. Urban Constr. Arch. 248(05), 37–38 (2020)
5. Yang, S.: The subject construction and open sharing of university library resources. Office Autom. 423(10), 52, 58–60 (2020)
6. Jiang, M.: Information construction and innovation service of university library. Lantai Inside Outside 284(11), 52–53 (2020)
7. Bin, F.: Problems and countermeasures in the information construction of university libraries. Inf. Rec. Mater. 21(03), 52–53 (2020)
8. Zou, Y., Sun, H.: Feasibility study on building a service platform for government information disclosure in university libraries. Economist 365(07), 215–216 (2019)
9. Cui, B.: The status quo, obstacles and strategy analysis of the MOOC construction of information literacy in university libraries. Libr. Work Res. 289(03), 116–122 (2020)
10. Sun, P.: Cultural construction and information services of university libraries. Think Tank Times 192(24), 160, 166 (2019)
11. Ruiming, B.: Correlation analysis between the construction of digital information resources in college libraries and the cultivation of college students' innovative ability. Educ. Mod. 6(38), 211–212 (2019)
12. Yan, S., Zhang, Y.: Construction and utilization of university library resources in the information age. Heilongjiang Sci. 174(11), 128–129 (2020)
Design and Implementation of Auxiliary Platform for College Students' Sports Concept Learning Based on Intelligent System

Bo He and Juan Zhong(B)

Kunming University, Kunming 650500, Yunnan, China
Abstract. With the development of the times, big data technology has been applied to all fields of society. This article discusses the feasibility of a university sports concept learning auxiliary platform based on big data technology. Four classes are selected for a comparative experiment: the teaching methods of two classes are left unchanged, while the other two classes introduce the sports concept learning auxiliary platform into physical education. Through the experiment, students are scored in terms of sports participation, physical health, sports skills, mental health and social adaptation. The results show that, after the experiment, the average scores of the two control classes were 76.2 and 76.8, while the average scores of the two experimental classes were 87 and 86.6. It can be seen that the sports concept learning auxiliary platform can achieve good results in physical education. In addition, this article conducts expert interviews to explore the path toward the construction of university sports concept teaching. The results show that, under the current situation, sports concepts should be updated, the inheritance and innovation of traditional sports culture should be emphasized, the level of physical education teachers and students should be improved, and the management and exhibition space of the gymnasium should be designed reasonably. Keywords: Big data technology · Intelligent system · Sports concept · Teaching aid platform
1 Introduction

In the new era, people have put forward higher requirements for education, requiring colleges and universities to focus not only on the transfer of theoretical knowledge but also on students' practical ability, innovative spirit and emotional attitude, and to establish in students an ideology of lifelong sports [1]. Constrained by the limited class hours of physical education, many sports-related concepts cannot be transmitted to students well. In addition, schools do not pay enough attention to the physical education of non-PE-major students: college physical education has not only failed to achieve its moral-education effect, but in many cases even students' most basic physical fitness is not up to standard [2]. Since the advent of big data technology, it has been
regarded as a new direction for industry reform in various fields, and the education field is no exception; there have been many successful cases so far [3, 4]. The university public physical education curriculum plays an important role in promoting students' physical health, helps cultivate students' awareness of lifelong exercise, and improves the quality of their will [5]. However, the traditional public sports curriculum is deeply influenced by the concept of competitive sports, focusing on the cultivation of competitive skills while ignoring the ethical function of sport, which runs counter to the educational purpose of the all-round development of students [6]. Given the new requirements for sports talent in the new era, physical education must carry the teaching of sports concepts and cultivate well-rounded sports talents with sportsmanship, a spirit of competition and a high level of athletic skill [7, 8]. Therefore, a sports concept learning auxiliary platform is of great significance for the innovative development of physical education and for improving the overall level of college students' sport. This paper studies the design and realization of an auxiliary platform for university sports concept learning. The research first explains the characteristics of big data and related technologies and analyzes the current problems of college sports. To verify the effect on physical education, this article explores the effect of sports concept learning on physical education teaching. It also conducts expert interviews, classifies and summarizes the interview results, and shows how to do a good job of sports concept teaching.
2 A Support Platform for College Students’ Sports Concept Learning Based on Big Data Technology 2.1 Big Data Technology The key technologies of big data include information collection technology, distributed storage system, data mining technology, data warehouse, cloud computing technology, etc. [9]. With the rapid development of the mobile Internet and the Internet of Things, the behavior of each individual is constantly generating new data, and as the format of information presentation becomes more and more diversified, the types of data are becoming more and more abundant, not just text data also includes video, audio, pictures, etc. For these complex and large-scale data, image recognition technology, sensor technology and mobile terminal technology are the main methods of data acquisition [10]. Distributed storage systems and data warehouses not only expand the amount of data storage, but can also store non-relational data, which caters to the changing characteristics of big data [11]. The speed of big data update is fast and the value density is low. To obtain useful information, it is often necessary to calculate massive amounts of data. Using distributed architecture and cloud servers, connecting multiple computers can achieve accurate calculation of massive amounts of data [12]. 2.2 The Status Quo of College Physical Education Concept Teaching Judging from the practice of college physical education in recent years, some physical education teachers have a relatively simple understanding of the transmission of college
physical education curriculum concepts, and rarely put physical education concepts into practice in actual teaching. In general, college physical education focuses on improving students' sports skills, but very few concepts are conveyed in the process, which leads students to treat physical education courses only as physical exercise and the mastery of sports skills. Although this is the core task of physical education teaching, the sports concept covers the spirit of sport and competition, and it plays a very important role in improving the quality of sports talents. At present, physical education courses for non-professional students in colleges and universities amount to only 2 h a week. The physical education teacher needs to organize students for warm-up activities, consolidate the content of the previous lesson, and teach and practice new sports items, while the rest of the students' time is occupied by professional study, so there is simply not enough time. Most physical education teachers majored in sports and are not good at concept teaching, and their communication with students before class, after class and during breaks has little to do with physical education concepts. Until this is resolved, physical education concept teaching will remain stagnant.
2.3 Auxiliary Platform for Sports Concept Learning
Functional System Requirements. (1) After the user logs in, the user information is displayed and automatically verified. According to different user identities, different interfaces are entered and different permissions are assigned. Students and teachers can log in to the system through their student ID or teacher ID and password, respectively. (2) After the administrator logs in, he or she can query, delete and modify semester information, course information, student information, and teacher information. (3) Through the teaching platform, students can understand the content of a course, select courses and view their own course selections. During the learning process, students can discuss difficult problems with teachers and classmates in the Q&A discussion area. Through the teaching platform, students can view and complete the homework released by the teacher, and can also download related teaching resources. (4) Through the teaching platform, teachers can check students' course selections, so as to arrange teaching plans in a targeted manner. Teaching information, resources and published assignments are archived in the database, and completed assignments and questions are also stored in the database, which is convenient for teachers when correcting. Teachers can publish student results through the teaching platform, and can also upload teaching resources and courseware through it. (5) Through system analysis, the teaching situation can be observed, counted and analyzed from different levels and angles, so that potential problems in teaching work can be discovered in time and solved or improved.
Design Points Real-Time Communication. After the user logs in, he first selects the course he wants to communicate with. Each course corresponds to a Q&A room, and then enters the room. The system executes different functions according to the different user identities of teachers and students. After logging in, students can ask someone or everyone, and they can also participate in the discussion group in the room to discuss different topics. Teachers are managers in the Q&A room and can perform more functions, such as answering questions, organizing Q&A information, and managing the Q&A room. For a representative question, teachers can add the discussion and answer results of the question to their favorites for future teaching. In this way, students can effectively grasp the problem, and where they are likely to have doubts. Resource Sharing. The sharing of teaching resources enables students and teachers to act as resource users and at the same time as resource providers. Teachers also need to maintain some resources. Teachers and students can query the published teaching resources within the scope of this course. The maintenance of teaching resources realizes the method of hierarchical management by role. First of all, the role of providing resources can be either teachers or students. After the students submit the resources, they can also delete the resources and apply to submit them to the public resource library. The teacher reviews the part of the application provided by the students to be added to the public resource library, and the approved resources are released to the public resource library of the course, so that teachers and students who need the resource can query this part of the resource. In addition to reviewing the tasks and permissions of the resources submitted by the study, teachers also undertake the task of maintaining the public teaching resources of this course. As managers, teachers have the greatest authority to maintain the teaching resource library.
3 Experimental Design 3.1 Theme This article selects four classes of students from our school as the research object. The number of students in these four classes is 25, and the basic situation of these four classes is roughly the same. Set two classes as experimental classes, and set the other two classes as control classes. 3.2 Experimental Method The experimental class adopts university physical education courses and the supporting platform of physical education concept learning, while the control class adopts traditional physical education teaching methods. The experimental period is one semester. 3.3 Evaluation Index After the experiment, the students in the four classes were scored, and the score for each indicator was 100.
3.4 Questionnaire Survey
With reference to related materials, such as the scale of students' sports values and the scale of students' social adaptation, a questionnaire was created for the five evaluation indicators. Before and after the experiment, the subjects were investigated. The questionnaire was distributed on site and collected on site. A total of 100 questionnaires were distributed and 100 valid questionnaires were retrieved, an effective recovery rate of 100%. This process uses the following formulas.
Sample variance formula:
s² = Σ_{i=1}^{n} (x_i − x̄)² / (n − 1)  (1)
Sample standard deviation formula:
s = √s² = √( Σ_{i=1}^{n} (x_i − x̄)² / (n − 1) )  (2)
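For readers who want to reproduce the statistics, formulas (1) and (2) can be computed directly; the sketch below is a minimal Python illustration in which the score list is a made-up placeholder rather than data from this study.

```python
import math

def sample_variance(xs):
    """Sample variance with the n - 1 denominator, as in formula (1)."""
    n = len(xs)
    mean = sum(xs) / n
    return sum((x - mean) ** 2 for x in xs) / (n - 1)

def sample_std(xs):
    """Sample standard deviation, the square root of the sample variance (formula (2))."""
    return math.sqrt(sample_variance(xs))

# Hypothetical indicator scores for a handful of students (placeholder values only)
scores = [71, 65, 68, 70, 66]
print(sample_variance(scores), sample_std(scores))
```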
3.5 Expert Interview At the end of the experiment, we interviewed relevant experts on the subject of “Sports Concept Learning” in the university, compiled the interview records, and analyzed and discussed the results.
4 Analysis and Discussion of Research Results 4.1 Analysis of the Application Effect of the Sports Concept Learning Auxiliary Platform in University Sports Before the experiment, a questionnaire survey was conducted among the four classes of students. The results of the five evaluation indicators are shown in Fig. 1. It can be seen from Fig. 1 that before the experiment, the highest score of the five indicators in the control group 1 was 71 points for physical health, the lowest score for motor skills was 65 points, and the average score was 67.6 points. Among the five indicators of the control group, the highest score was 72 points for physical health, the lowest score for motor skills was 65 points, and the average score was 67.4 points. In the experimental class, the highest score of the five indicators in experimental class 1 is 73 points for physical health, and the lowest score is motor skills and participation. Both are 65 points, and the average score is 67.2 points. The highest score of the five indicators in Experiment 2 is 72 points for physical health, the lowest score is 65 points for motor skills, and the average score is 67.8 points. It can be seen that before the experiment, there is no significant difference in the scores of the five indicators of the students in the four classes, and the influence caused by the different foundations of the students in the four classes can be excluded. At the end of the experiment, four classes were evaluated again, and the results of the five indicators are shown in Fig. 2.
Fig. 1. The results of the evaluation indexes of the four classes before the experiment
Fig. 2. The results of the evaluation indexes of the four classes after the experiment
It can be seen from Fig. 2 that after experiments, although the scores of each evaluation index of the control group have been improved, the extent of improvement is not very large. Among them, the sports participation scores of the two control groups were 74 and 72 points, the physical fitness index was 78 and 79 points, and the sports skill evaluation scores were 74 and 76 points. The average scores of the five evaluation indicators of the two control groups were 76.2 and 76.8 respectively. In the evaluation index of sports participation, the evaluation score of experiment 1 class is 83, and the evaluation score of experiment 2 is 82. In the evaluation index of physical health, the evaluation score of experiment 1 class was 86, and that of experiment 2 was 85. In the evaluation index of motor skills, the evaluation score of experimental class 1 is 87, and the evaluation score of experimental class 2 is 87. In the evaluation index of mental health, the score of experiment 1 was 90, and the score of experiment 2 was 88. In the evaluation index of social adaptation, the evaluation score of experiment 1 class is 89,
and that of experimental class 2 is 90. The average score of experimental class 1 is 87, and the average score of experimental class 2 is 86.6. Therefore, the application of the sports concept learning support platform to university physical education can effectively increase students' sports participation, and improve their physical and mental health, sports skills and social adaptability.
4.2 An Analysis of the Path to the Construction of College Physical Education Concept Teaching
By categorizing the results of the expert interviews, the path for the construction of university sports concept teaching can be obtained. The results are shown in Table 1 and Fig. 3.
Table 1. Expert interview results
Path measures | Proportion
Improving the sports concept quality of Physical Education Teachers | 30%
Inheritance and innovation of traditional sports culture:
  Pay attention to the development of traditional sports | 12%
  Integrating innovative elements | 11%
Optimizing the teaching mode of Physical Education:
  Implement team competition and other participatory practice teaching | 6%
  Clubs, societies, sports teams and other activities in parallel | 5%
  Establish a timely feedback mechanism between teachers and students | 9%
  Do well in the extension from inculcation teaching to enlightening teaching | 10%
Reasonable design of exhibition space of stadiums and gymnasiums | 17%
It can be seen from Table 1 and Fig. 3 that 30% of the experts believe that, on the road to constructing university sports concept teaching, it is necessary to improve the sports concept quality of physical education teachers, while 23% of the experts believe that attention should be paid to the inheritance and innovation of traditional sports culture, that is, while attaching importance to the development of traditional sports, innovative elements should also be integrated to develop sports culture. Another 36% of experts believe that the teaching mode of physical education courses should be optimized: while carrying out participatory practical teaching such as team competitions, various activities such as clubs, societies and sports teams should be carried out in parallel, a timely feedback mechanism between teachers and students should be established, and the extension from indoctrination teaching to heuristic teaching should be done well. In addition, 17% of experts believe that the exhibition space of stadiums and gymnasiums should be designed reasonably.
Fig. 3. Analysis of expert interview results
5 Conclusions
As a compulsory course, university physical education is intended not only to improve students' physical fitness, but also to cultivate high-quality sports talents for China, to infiltrate sports concepts into the process of physical exercise, and to link physical exercise with sports quality training. Therefore, this paper designs an auxiliary platform for college sports concept learning. In the research, in order to verify the influence of sports concept learning on university physical education, this article introduces the sports concept learning auxiliary platform into the university physical education curriculum and analyzes its impact from five aspects: sports participation, physical health, sports skills, mental health and social adaptation. This article holds that, in order to run physical education classes well in colleges and universities, it is necessary to improve physical education teachers' level of teaching physical education concepts, pay attention to the inheritance and innovation of traditional sports culture, optimize the teaching mode of physical education courses, and reasonably design the exhibition space of stadiums and gymnasiums.
References 1. Kirk, D.: Physical education, youth sport and lifelong participation: the importance of early learning experiences. Eur. Phys. Educ. Rev. 11(3), 239–255 (2016) 2. Aleksic-Veljkovic, A., Stojanović, D.: Evaluation of the physical fitness level in physical education female students using “Eurofit-Test”. Int. J. Sports Sci. Phys. Educ. 2(1), 1–15 (2017) 3. Pence, H.E., Williams, A.J.: Big data and chemical education. J. Chem. Educ. 93(3), 504–508 (2016)
4. Peng, W.: Research on online learning behavior analysis model in big data environment. Eurasia J. Math. Sci. Technol. Educ. 13(8), 5675–5684 (2017) 5. Ding, J., Sugiyama, Y.: Exploring influences of sport experiences on social skills in physical education classes in college students. Adv. Phys. Educ. 07(3), 248–259 (2017) 6. Ding, Y., Li, Y., Cheng, L.: Application of Internet of Things and virtual reality technology in college physical education. IEEE Access PP(99), 1 (2020) 7. Xin, H., Yijian, C., Jie, A.: Study on the construction of the mathematical model of the influence factor of lifelong physical education for college students. Int. J. Eng. Model. 31(1), 111–117 (2018) 8. Yang, D.M.: Research on college students’ individual physical health promotion - based on the perspective of physical education reform. Agro Food Ind. Hi Tech 28(1), 937–941 (2017) 9. Liu, Z., Wang, Y., Cai, L., Cheng, Q., Zhang, H.: Design and manufacturing model of customized hydrostatic bearing system based on cloud and big data technology. Int. J. Adv. Manuf. Technol. 84(1–4), 261–273 (2015) 10. Kopytov, V.V., Kharechkin, P.V., Naumenko, V.V., et al.: Technology and architecture for a high-speed sensor data collection system. Int. J. Civil Eng. Technol. 9(10), 224–233 (2018) 11. Shum, K.W., Chen, J.: Cooperative repair of multiple node failures in distributed storage systems. Int. J. Inf. Coding Theory 3(4), 299–323 (2016) 12. Abbas, H., Maennel, O., Assar, S.: Security and privacy issues in cloud computing. Ann. Telecommun. 72(5–6), 233–235 (2017)
Application of Big Data Analysis and Image Processing Technology in Athletes Training Based on Intelligent Machine Vision Technology Juan Zhong and Bo He(B) Kunming University, Kunming 650500, Yunnan, China
Abstract. With the rapid development of machine vision technology in recent years and the rapid increase of various kinds of video data, image processing and behavior recognition based on video data have become hot research topics. This paper mainly studies the application of big data analysis and image processing technology in athlete training; the time-segmented two-stream network based on sparse sampling adopted in this paper can better express long-term motion characteristics. Firstly, the continuous video frame data is divided into several segments, and each segment of the video frame sequence is randomly sampled to form short sequence data containing user actions; feature extraction is then carried out using the two-stream network. The proposed algorithm model is simulated and compared with other algorithms. The experimental results show that the recognition rate of the proposed model is the best among the compared algorithms. Keywords: Big data analysis · Image processing · Deep learning · Motion recognition · Machine vision technology
1 Introduction
Image recognition technology has been one of the research hotspots in recent years. Based only on simple images and with the help of high-performance computers, it can independently identify people and objects, for example face recognition based on background difference technology and HOG (Histogram of Oriented Gradients) features. The images here refer not only to pictures collected by CCD (Charge-Coupled Device) or CMOS (Complementary Metal Oxide Semiconductor) cameras in the traditional sense, but also to infrared images collected by infrared sensors, 3D images collected by lidar, depth images collected by binocular cameras, etc. [1, 2]. In recent years, the field of artificial intelligence has been continuously explored: face recognition technology has developed rapidly, recognition accuracy has been continuously improved, and effective recognition scenarios have been continuously extended.
© The Author(s), under exclusive license to Springer Nature Singapore Pte Ltd. 2022 B. J. Jansen et al. (Eds.): International Conference on Cognitive based Information Processing and Applications (CIPA 2021), LNDECT 84, pp. 687–693, 2022. https://doi.org/10.1007/978-981-16-5857-0_87
With the development of image recognition technology, the method of using movement recognition technology to analyze the technical movements of athletes during exercise so as to improve the
quality of training has gradually attracted people’s attention. By recognizing the movement video in the process of training, we can master the movement state of people. At the same time, the relevant parameters identified by computer analysis can more intuitively reflect the standard degree of athletes’ training movements, which is helpful for coaches and athletes to analyze technical movements and find out problems, so as to improve the training quality of athletes [3]. The development of image recognition technology has gone through many stages, from the traditional recognition technology such as classification extraction to the introduction of Artificial intelligence-AI. Now image processing has become an important topic in the field of AI. The most basic part of image processing includes image segmentation and recognition technology, which is also the difficulty in image processing [4]. The analysis and processing of image data is very difficult, and imitating human to conduct image operation is a research focus at present. Researchers have developed different computer programs to simulate human’s recognition process of image information [5]. Pattern recognition is one of the most important means of image recognition. The method based on this recognition technology needs to analyze a large number of data and information. At the same time, combining with the experience of experts and the existing understanding, it can make corresponding judgments on numbers, characters, curves and shapes through mathematical reasoning and a large number of computer calculations. It completes the recognition, evaluation and operation of images similar to human beings [6]. This paper mainly uses big data analysis and image processing technology to analyze athletes’ receiving movements, so as to intuitively understand the standard degree of their own training movements and improve the quality of training.
2 Big Data Analysis and Image Analysis of Athletes’ Training 2.1 Overview of the Big Data Analysis Big data is the product of the information age and mobile Internet, and plays a positive role in the management of all walks of life. There are different opinions on its definition. At present, it is highly recognized by scholars at home and abroad that gartner, a research institution, holds that big data is to enhance the decision-making ability and insight, make the process more optimized and have the information with high growth rate and multiple patterns under some new processing conditions. The fields related to big data include both academic fields and application fields (especially business fields), including fields of computer, statistics, mathematics, economics, management, social phenomena and natural phenomena. From the perspective of statistics, big data is defined as: Big data is not limited, fixed, discontinuous and unexpandable structural data that is designed manually and obtained by traditional methods, but all types of data that can be automatically recorded, stored and continuously expanded based on modern information technology and tools and greatly exceed the capacity of traditional statistical recording and storage [7]. Other scholars believe that big data is different from the concept in the traditional sense, including not only single numbers, but also all stored text, image, video, audio and other unstructured data [8]. This paper argues that big data is more like a strategy than a technology. Its core idea is to manage massive data and extract value from it in a
much more effective way than before, so as to obtain information conducive to management and decision making. Big Data Analytics (BDA) is the core of big data concepts and methods. It refers to the process of quantitative analysis of data with diverse types, rapid growth and real content (namely big data) to find hidden patterns, unknown correlations and other useful information that can help decision-making [9].
2.2 Foreground Extraction of Motion Images
Frame Difference Method. The frame difference method, also known as the inter-frame difference method, is one of the most commonly used moving object detection methods. Its main idea is to decompose the video into image frames and compare the similarities and differences of pixel information between frames. The specific operation is to compute the difference between two or more frames and, after connectivity analysis, judge each set of pixels against a set threshold: pixels whose change does not exceed the threshold are regarded as the static background, while the set of pixels whose change exceeds the threshold is regarded as the moving object [10]. This difference is used to extract the foreground. The frame difference method is simple to calculate and fast to process, so it offers good real-time performance for foreground extraction in simple scenes. However, as the foreground is extracted only through the inter-frame difference and a set threshold, an improperly chosen threshold may produce a large error in the extraction result. In addition, the frame difference method is sensitive to ambient noise, and moving targets with low contrast or a low signal-to-noise ratio may not be completely extracted.
Optical Flow Method. The optical flow method is divided into two approaches: the global optical flow field and the feature-point optical flow method. Since human eyes have visual persistence, when an object moves, a string of "flowing" image information is presented in the eyes, hence the name optical flow [11]. The main idea of the optical flow method is to transform the problem of foreground extraction into the problem of observing the motion speed of each pixel on the image plane. It compares the correlation between the preceding and following frames and finds the correspondence with the previous frame in order to distinguish between stationary and moving objects, and then obtains the foreground extraction result. Since the optical flow method tracks feature points, it has a strong ability to extract moving objects in unknown scenes. However, in the presence of noise, multiple light sources, or partial occlusion, the foreground extraction results of the optical flow method will contain large errors. In addition, due to the high computational complexity of the algorithm, it is not suitable for application scenarios with high efficiency requirements.
Background Subtraction Method. The main idea of background subtraction is to divide a continuous video into a finite sequence of images and take the average pixel value over all the images. Each image is compared with this mean value, and a set threshold is used to determine whether a pixel belongs to the foreground, that is, a moving object [12]. The background subtraction algorithm is relatively simple to implement, and foreground extraction for a simple background is simple and fast, but for a complex background or a small target object the effect will be poor.
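To make the three ideas above concrete, the sketch below shows, in Python with OpenCV, how the frame difference method and the background subtraction method can be applied to a video, with a comment indicating where a dense optical flow field could be computed instead. The file name, threshold and model parameters are illustrative assumptions, not values taken from this paper.

```python
import cv2

cap = cv2.VideoCapture("training_video.mp4")   # hypothetical input video
ok, prev = cap.read()
prev_gray = cv2.cvtColor(prev, cv2.COLOR_BGR2GRAY)

# Background model for the background subtraction method (parameters are assumed)
bg_model = cv2.createBackgroundSubtractorMOG2(history=200, varThreshold=25)

while True:
    ok, frame = cap.read()
    if not ok:
        break
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)

    # Frame difference method: pixels whose change exceeds a threshold are foreground
    diff = cv2.absdiff(gray, prev_gray)
    _, fg_frame_diff = cv2.threshold(diff, 30, 255, cv2.THRESH_BINARY)

    # Background subtraction method: compare each frame with the learned background
    fg_bg_sub = bg_model.apply(frame)

    # Optical flow method (dense Farneback flow) could be computed here instead:
    # flow = cv2.calcOpticalFlowFarneback(prev_gray, gray, None, 0.5, 3, 15, 3, 5, 1.2, 0)

    prev_gray = gray

cap.release()
```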
2.3 Action Recognition Network Model
In this paper, a time-segmented convolutional neural network based on a sparse sampling strategy is adopted. The time-segmented convolutional neural network segments the whole video and sparsely samples short segments as the network input, and extracts the temporal features of optical flow images and the spatial features of RGB images to perform the action recognition task. The network first divides a video containing an action into several equal parts and then randomly extracts a short sequence from each, so that the short fragments generated by sampling can effectively express the motion information of the whole video. Each sampled fragment is passed through the two-stream convolutional neural network for feature extraction: the temporal-stream network captures the temporal structure of the video from optical flow, while the spatial-stream network captures the spatial appearance information of the images. The two-stream network produces a prediction for each short fragment, and an aggregation function finally integrates the temporal-stream and spatial-stream features of all fragments into the overall video recognition result. For a given action video, the frame data is first divided into N video frame sequences of equal length, and a short-time video sequence is then randomly sampled from each part. The network models the short-time sequences extracted from each part as input data, as shown in the following formula:
T(T1, T2, ..., TN) = H(G(F(T1; W), F(T2; W), ..., F(TN; W)))  (1)
Through the standard cross-entropy loss and the fragment fusion function G, the final loss function is as follows:
L(y, G) = − Σ_{i=1}^{C} y_i (g_i − log Σ_{j=1}^{C} exp(g_j))  (2)
Function G uses the average form to fuse the prediction results of each segment, and its loss is the classification loss of the whole action video. In the process of network parameter learning, the network can learn the features of the whole video, rather than being limited to a single segment. In the training process of the time-segmented convolutional neural network, the back-propagation algorithm is used to update the model parameters iteratively. The gradient of the loss value L with respect to the parameters W is as follows:
∂L(y, G)/∂W = Σ_{n=1}^{N} (∂L/∂G) · (∂G/∂F(T_n)) · (∂F(T_n)/∂W)  (3)
When the stochastic gradient descent method is adopted to learn network parameters, the fusion function G will differentiate the predicted results of each video segment respectively. Therefore, the network model can be regarded as learning parameters from the whole video, rather than for a single short video sequence. The sparse sampling strategy adopted by the network model enables each action video to be divided into several short-term video sequences, and each sampling fragment contains only a part of the frame image. Compared with the structure of dense sampling, the computational cost of network training and testing will be significantly reduced.
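A minimal sketch of the sparse sampling and segmental aggregation described above, written in Python: the frame indices of an action video are split into N equal segments, one snippet start is drawn at random from each, and the per-snippet class scores are later fused by averaging (the role of the function G in formula (1)). The segment count and snippet length are illustrative values, not settings reported in this paper.

```python
import random

def sparse_sample(num_frames, num_segments=3, snippet_len=1):
    """Return one randomly chosen snippet start index per equal-length segment."""
    seg_len = num_frames // num_segments
    starts = []
    for s in range(num_segments):
        lo = s * seg_len
        hi = min((s + 1) * seg_len, num_frames) - snippet_len
        starts.append(random.randint(lo, max(lo, hi)))
    return starts

def segmental_consensus(snippet_scores):
    """Average the per-snippet class scores, i.e. an average-form fusion function G."""
    n = len(snippet_scores)
    num_classes = len(snippet_scores[0])
    return [sum(s[c] for s in snippet_scores) / n for c in range(num_classes)]

# Example: a 300-frame video split into 3 segments
print(sparse_sample(300))
print(segmental_consensus([[0.1, 0.9], [0.3, 0.7], [0.2, 0.8]]))
```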
3 Image Recognition Simulation Experiment 3.1 Network Training Parameter Settings The network parameters are learned through the mini-batch stochastic gradient descent algorithm, whose batch size is set to 128 and momentum is set to 0.7. Gradient descent in small batches can be accelerated by matrix and vector calculations, and the variance of update parameters can be reduced to achieve more stable convergence. Each batch can reduce the number of iterations of convergence and make the result of convergence closer to the effect of gradient descent. For the traditional gradient descent algorithm, if the actual objective function plane is locally concave, then a negative gradient will make it point to a steeper position, and such situation near the local optimal value of the objective function will lead to slow convergence speed. At this time, it is necessary to give the gradient a momentum, so that it can jump out of the local optimal and continue to optimize along the direction of gradient descent, so that the network model can converge to the global optimal more easily. 3.2 Network Training Strategy Because the table tennis receiving action data set used for network training is relatively small, there may be overfitting risk when training deep convolutional neural network. To further mitigate this problem, three network training strategies were used to compare the ability to mitigate the risk of overfitting. The first method is to train the spatial and temporal networks directly from zero, using Gaussian distribution to initialize the parameters of the convolutional neural network, without conducting pre-training in any other way. The second method is to pre-train the spatial flow convolutional neural network only. Since the spatial flow convolutional network only uses RGB images as the input data of the network, the convolutional network can be pre-trained by using ImageNet image database, and the network parameters after pre-training are taken as the initial parameters of the spatial flow network. The third method is the pre-training processing method of the cross-input model in which the time stream network is initialized with the RGB model, while the ImageNet data set is still initialized as the pre-training input data of the spatial stream network.
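The following sketch illustrates the training setup of Sects. 3.1 and 3.2 in PyTorch terms: mini-batch stochastic gradient descent with batch size 128 and momentum 0.7, and a spatial-stream backbone initialized from ImageNet pre-trained weights. The backbone choice, learning rate, number of action classes and dataset object are assumptions made only for illustration, since the paper does not specify them.

```python
import torch.nn as nn
import torch.optim as optim
from torch.utils.data import DataLoader
from torchvision import models

# Spatial-stream network initialized from ImageNet pre-training (Sect. 3.2)
spatial_net = models.resnet18(pretrained=True)
spatial_net.fc = nn.Linear(spatial_net.fc.in_features, 5)   # 5 action classes (assumed)

# Mini-batch SGD with momentum 0.7 and batch size 128 (Sect. 3.1); lr is an assumed value
optimizer = optim.SGD(spatial_net.parameters(), lr=0.01, momentum=0.7)
criterion = nn.CrossEntropyLoss()

def train_one_epoch(train_dataset):
    """train_dataset is assumed to yield (RGB frame tensor, action label) pairs."""
    loader = DataLoader(train_dataset, batch_size=128, shuffle=True)
    for frames, labels in loader:
        optimizer.zero_grad()
        loss = criterion(spatial_net(frames), labels)
        loss.backward()
        optimizer.step()
```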
4 Simulation Experiment Results
4.1 Comparison of Different Training Methods
As shown in Table 1 and Fig. 1, the independent spatial-stream and temporal-stream convolutional neural networks are far less effective at recognizing actions than the network model that fuses the two streams. Comparing the different training methods, because the data set is small, training the two-stream network from scratch suffers from overfitting and therefore performs worst, while pre-training the spatial-stream network and using the cross-modality scheme to initialize the two-stream convolutional neural network give better recognition, with a recognition rate of up to 93.1%. This also indicates that this pre-training method can effectively reduce the risk of overfitting.
Table 1. Experimental comparison of different training methods
Training method       | Spatial flow network | Time stream network | Double flow network
Starting from scratch | 71.4%                | 83.6%               | 86.7%
Space flow            | 85.2%                | 83.6%               | 88.5%
Cross pattern         | 85.2%                | 89.4%               | 93.1%
Fig. 1. Experimental comparison of different training methods
4.2 Comparison of the Recognition Algorithm Results
Fig. 2. Comparison of results of mainstream recognition algorithms
As shown in Fig. 2, mainstream deep learning methods for video action recognition include the two-stream convolutional neural network and the C3D (3D) convolutional neural network. These methods process videos by dense sampling, yet the changes between consecutive video frames are subtle, and capturing information over a long time span through dense sampling requires a large amount of video data, so it not only produces information redundancy between consecutive frames but also increases the cost of network training. The model proposed in this paper segments the video data through sparse sampling, effectively grasps the temporal information of the whole video, and avoids the loss of the long
time video information caused by intensive sampling processing. Moreover, the effective improvement of the fusion method also makes the network identification effect have a high reliability.
5 Conclusions
This paper describes the foreground extraction and feature tracking algorithms used in video image data processing, and compares and analyzes the advantages and disadvantages of various foreground extraction methods. For the target motion information in video, optical flow features are first introduced to extract optical flow images from consecutive video frames; the video is then divided with a sparse sampling strategy and randomly sampled to generate short video sequences, and the two-stream convolutional neural network is applied to extract features from these short sequences. Finally, the fusion mode of the two-stream network is analyzed, and the strategy of convolutional-layer fusion is adopted to fuse the spatial and temporal characteristics of the two streams. In the fusion layer, 3D convolution and pooling operations are used to perform the fusion, so as to effectively capture the motion timing information of the target.
References 1. Mauro, Z., Veronica, R., Fabio, L., et al.: A monitoring system for laying hens that uses a detection sensor based on infrared technology and image pattern recognition. Sensors 17(6), 1–17 (2017) 2. Meng, F., Xu, B., Zhang, T., et al.: Application of AI in image recognition technology for power line inspection. Energy Syst. (1), 1–23 (2021) 3. Cheng, F., Zhang, H., Fan, W., Harris, B.: Image recognition technology based on deep learning. Wireless Pers. Commun. 102(2), 1917–1933 (2018) 4. Xu, D., Yang, L., Zeng, J., et al.: Movement measurement method for derrick hoisting of offshore platform using image recognition technology. Harbin Gongcheng Daxue Xuebao/J. Harbin Eng. Univ. 38(11), 1733–1738 (2017) 5. Shadiev, R., Wu, T.T., Huang, Y.M.: Using image-to-text recognition technology to facilitate vocabulary acquisition in authentic contexts. ReCALL 32(2), 195–212 (2020) 6. Sun, C., Wang, L., Wang, N., et al.: Image recognition technology in texture identification of marine sediment sonar image. Complexity 2021(2), 1–8 (2021) 7. Wei, Y., Pan, D., Taleb, T., et al.: An unlicensed taxi identification model based on big data analysis. IEEE Trans. Intell. Transp. Syst. 17(6), 1703–1713 (2016) 8. Tawalbeh, L.A., Mehmood, R., Benkhelifa, E., et al.: Mobile cloud computing model and big data analysis for healthcare applications. IEEE Access 4(99), 6171–6180 (2017) 9. Zhe, L., Choo, K., Zhao, M.: Practical-oriented protocols for privacy-preserving outsourced big data analysis: challenges and future research directions. Comput. Secur. 69(Aug), 97–113 (2016) 10. Boulfrifi, I., Housni, K., Mouloudi, A.: Automatic moving foreground extraction using random walks. Indones. J. Electr. Eng. Comput. Sci. 15(1), 511–516 (2019) 11. Li, X., Liu, K., Dong, Y.: Superpixel-based foreground extraction with fast adaptive trimaps. IEEE Trans. Cybern. 48(9), 2609–2619 (2018) 12. Shi, J.F., Ulrich, S., Ruel, S.: Unsupervised method of infrared spacecraft image foreground extraction. J. Spacecr. Rocket. 56(6), 1–10 (2019)
Design of a Smart Elderly Positioning Management System Based on GPS Technology Qianqian Guo(B) Zhujiang College of Tianjin University of Finance and Economics, Tianjin 301811, China
Abstract. As the aging problem continues to deepen, the state and government have issued a series of policies to promote the development of elderly care services, aiming to improve their quality. With the advent of the information age, intelligent elderly care has become a focus of social attention. The living environment of the elderly is becoming more and more complex, and the traditional nursing model cannot meet their needs for daily care, medical rescue and spiritual comfort; at the same time, it increases the burden of care and psychological comfort on the children of the elderly. In recent years, reports of elderly people going missing have frequently appeared in the news around the world. Based on GPS positioning technology, this article designs a positioning management system for the smart elderly care model. This article first explains the background, current situation and significance of the research, then introduces the concept of smart elderly care and the working principle of the GPS system, and then designs and explains the work flow and functions of the positioning system. This paper also designed a questionnaire survey to investigate the problems encountered in elderly care services by the staff of elderly care institutions, community medical staff and residents who need to support the elderly, and their needs for positioning services. In the survey results, 86% of respondents had encountered the problem of the elderly forgetting things easily; 67% had encountered depression in the elderly; 53% had encountered the problem of the elderly getting lost; and 47% had encountered mobility problems. Keywords: GPS technology · Smart elderly care · Positioning management · System design
1 Introduction
With the acceleration of the pace of life in modern society, the current young and middle-aged generation is affected by external pressures from life, environment and the economy. They have to concentrate most of their attention on their own work, and it is difficult to ensure enough time and energy to take care of and accompany the elderly. Under such circumstances, the elderly living at home gradually face problems such as an unattended daily life, psychological discomfort, and difficulty in discovering abnormal situations, and the traditional old-age care model has to be transformed.
© The Author(s), under exclusive license to Springer Nature Singapore Pte Ltd. 2022 B. J. Jansen et al. (Eds.): International Conference on Cognitive based Information Processing and Applications (CIPA 2021), LNDECT 84, pp. 694–701, 2022. https://doi.org/10.1007/978-981-16-5857-0_88
With the rapid development of science and technology, the smart pension model incorporating modern technology has triggered
a research boom. In the smart elderly care model, the elderly’s needs for retirement protection, life care, medical services, and cultural life are gradually being realized. Most of the elderly have poor physical fitness and are prone to confusion. Therefore, they tend to get lost when they travel alone. Most of them lack the knowledge of using modern smart devices, so once they get lost, they are likely to lose contact and become dangerous. Therefore, the positioning management function in the smart elderly care model is very important. Smart elderly care has become the best choice for many elderly care institutions and families, and more and more scholars are also exploring how smart elderly care should be carried out. Jin X proposed a large-scale medical hospital-centered “medical and nursing smart linkage” pension system. Through Internet technology, this system can communicate with institutional pension models, community pension models, and family pension models. Experiments have proved that the comprehensive use of Internet technology and artificial intelligence technology has very important research value in solving the current problems in my country’s pension insurance industry [1]. Cui Y, Zhang L, Hou Y and others have built a smart home care service platform based on machine learning and wireless sensor networks. This research optimizes and improves the wearable physiological parameter collection system, and focuses on the construction of the new system. The design and implementation of the hardware and software of the physiological parameter acquisition module in the middle has taken care of the physiological characteristics of the elderly [2]. GPS positioning technology has a wide range of applications in many fields. Guo J, Fei Y, Shi J and others have studied the GPS functional model of tropospheric tomography, including the observation equation, horizontal constraint equation and vertical constraint equation weight distribution method. Finally, they proposed a weighting method that can adaptively adjust the weights of various equations and enable the three types of equations to have a posterior unit weight variance [3]. The aging problem is also a major problem facing our country. In order to let them have a healthy and happy old age, smart elderly care is an inevitable choice. The intelligent elderly care positioning management system can ensure the safety of the elderly, and the elderly will travel more freely, which is beneficial to the physical and mental health of the elderly. In addition, the smart elderly care positioning system will also promote the development of the elderly care industry and promote the improvement of elderly care services.
2 Design of Smart Elderly Positioning Management System Based on GPS Technology
2.1 Smart Pension
Smart elderly care refers to the transformation and optimization of traditional elderly care models through modern Internet technology, artificial intelligence technology, big data technology, etc., so as to improve the quality of elderly care services, make elderly care management more scientific and humanized, provide a safe and comfortable living environment for the elderly, and enhance people's sense of happiness.
The population base of our country is large, and the elderly will occupy a large proportion of it at this stage and in the future. The current old-age care models are generally home-based care and institutional care. No matter which type of care is used, modern information technology must be used to improve service levels, for example in medical care, so the rapid adoption of information system equipment will inevitably promote the sustainable development of the pension industry. However, there are still many problems in the various models of current pension services. (1) At this stage, most of the elderly have a generally low level of education and find it difficult to accept new things, which also limits their use of modern smart devices, so they are easily out of touch with society. (2) The current smart pension service software products are relatively simple in form and function and are updated slowly, and there is no systematic integration of pension service resources; it can be said that both enterprises and the government are still in the exploratory stage. (3) The management philosophy of elderly care institutions has not changed, and information-based management methods have not been implemented. As smart elderly care is a new concept, relevant information spreads relatively slowly on the Internet, so the related businesses of many elderly care institutions have not been promoted.
For elderly care institutions and families, what they need most is for the elderly to be in a safe, non-hazardous state. The elderly also need to go out for activities and cannot stay at home or in an institution all the time, while children are busy with work and cannot be with them all the time; in this way, going out becomes an activity with potential safety hazards for the elderly. Most modern mobile smart devices, such as smart phones and iPads, have positioning functions, but these devices often provide only simple positioning. If positioning can be combined with reporting the state of the elderly person at that moment, then the travel safety of the elderly can be guaranteed more effectively, and public resources can also be used rationally. Elderly people are forgetful and sometimes forget to bring their mobile phones, so mobile devices such as wristbands, wheelchairs, watches and thermos cups are designed with positioning functions. As long as the elderly person carries any of these items, the system can analyze his or her state from the movement trajectory data, give feedback to the management staff, and give an alarm when necessary.
2.2 GPS System Positioning Principle
When an artificial earth satellite moves around the earth, it continuously emits radio waves toward the earth [4, 5]. From these radio waves, relevant satellite data such as the satellite ephemeris, satellite clock error, and second count can be obtained [6, 7]. The satellite receiver on the ground determines the specific orbit of the satellite according to the received signal; after calculation with the relevant formulas, the position of the receiver relative to the satellite can be obtained, and the coordinates of the receiver can be determined. In space geometry, determining the coordinates of a point generally requires only the coordinates of three other points [8, 9]. However, due to
the inconsistent clock difference between the satellite and the receiver, four satellites are needed here to accurately locate the position of the receiver [10, 11]. The coordinates of the receiver can be solved by establishing equations based on the data of four satellites. The propagation speed of radio waves in space is consistent with the speed of light, denoted as c, and the specific equations are as follows [12]:
R1 = √((x1 − x)² + (y1 − y)² + (z1 − z)²) + (Vt1 − Vt0) · c  (1)
R2 = √((x2 − x)² + (y2 − y)² + (z2 − z)²) + (Vt2 − Vt0) · c  (2)
R3 = √((x3 − x)² + (y3 − y)² + (z3 − z)²) + (Vt3 − Vt0) · c  (3)
R4 = √((x4 − x)² + (y4 − y)² + (z4 − z)²) + (Vt4 − Vt0) · c  (4)
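Equations (1)–(4) form a nonlinear system in the four unknowns x, y, z and Vt0 (the parameters are defined in the following paragraph). One common way to solve such a system is Newton iteration; the sketch below uses NumPy, and the satellite coordinates and ranges are made-up example values intended only to show the shape of the computation.

```python
import numpy as np

C = 299792458.0  # speed of light c, in m/s

def solve_receiver(sat_pos, sat_clk, ranges, iters=10):
    """Solve Eqs. (1)-(4) for the receiver position (x, y, z) and clock error Vt0.

    sat_pos: (4, 3) array of satellite coordinates (xi, yi, zi)
    sat_clk: (4,)   array of satellite clock errors Vti
    ranges : (4,)   array of measured ranges Ri
    """
    est = np.zeros(4)                                  # initial guess: x = y = z = Vt0 = 0
    for _ in range(iters):
        pos, vt0 = est[:3], est[3]
        d = np.linalg.norm(sat_pos - pos, axis=1)      # geometric distances to the satellites
        predicted = d + (sat_clk - vt0) * C            # predicted Ri from the current estimate
        # Jacobian of Ri with respect to (x, y, z, Vt0)
        J = np.hstack([-(sat_pos - pos) / d[:, None], -C * np.ones((4, 1))])
        est = est + np.linalg.solve(J, ranges - predicted)   # Newton update
    return est

# Made-up satellite positions (metres) and synthetic ranges for a receiver at (1e6, 2e6, 3e6)
sat_pos = np.array([[15600e3, 7540e3, 20140e3],
                    [18760e3, 2750e3, 18610e3],
                    [17610e3, 14630e3, 13480e3],
                    [19170e3, 610e3, 18390e3]])
sat_clk = np.zeros(4)
ranges = np.linalg.norm(sat_pos - np.array([1e6, 2e6, 3e6]), axis=1)
print(solve_receiver(sat_pos, sat_clk, ranges))        # approximately [1e6, 2e6, 3e6, 0]
```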
The meaning of each parameter in the four equations is as follows: x, y, z are the spatial rectangular coordinates of the coordinates of the point to be measured. x i , yi , zi (i = 1, 2, 3, 4) respectively represent the space rectangular coordinates of satellite 1, satellite 2, satellite 3, and satellite 4 at time t, which can be obtained from satellite navigation messages. (x, y, z) are the coordinates of the receiver. Vti (i = 1, 2, 3, 4) is the clock difference corresponding to No. 1, No. 2, No. 3 and No. 4 satellites, respectively, provided by the satellite ephemeris. Vt0 is the clock error of the receiver. From the above four equations, the coordinates x, y, z of the point to be measured and the clock difference Vt0 of the receiver can be calculated. Not only the coordinates of the receiver are obtained, but the time is also checked and corrected. 2.3 Software Design of the Intelligent Elderly Care Positioning Management System Network Node Communication Program. According to the adaptability of the hardware, the network node application uses a modular structure. Its main functions include data collection, data processing, interrupt service, etc. The main program flow of the relay station is to initialize the serial port first, then send the first address to the slave, and the slave responds. After receiving the data from the slave successfully, the next address is sent to the slave, the slave responds, and the host receives the data. If the data is the last piece of data, return to the first address to send, otherwise continue to send the next address. The work flow of the interrupt program of the serial port of the relay station can be described as first clearing the RI to protect the scene. After the host calls successfully, it responds to the host’s request, transmits all the data to the host, and finally restores the scene. Positioning System Information Management. This module needs to implement user hierarchical management, report query statistics, user authority management and other
customized functions, etc. The module collects the location information of the elderly through wristbands, watches or other GPS positioning devices, and visualizes the location data to understand the specific situation of their travel. When the elderly encounter an emergency, they can touch the alarm function of the positioning device, and the system will automatically raise an alarm. When the elderly cannot complete the alarm operation themselves, the system will pre-alarm according to how long the elderly person has stayed in one place. The main function of the alarm is to query nearby medical personnel and send them relevant help information and the location information of the elderly person. The system's management of the staff includes staff attendance, daily affairs management, work quality management and so on. In addition, the system also provides auxiliary management services for the elderly, such as information on the elderly's diet, daily schedule, and physical health. Children can also learn about the elderly's physical condition remotely.
3 Survey on Demand for Smart Elderly Positioning System Based on GPS Technology
3.1 Experimental Content
To design a smart elderly positioning system based on GPS technology, it is necessary to conduct a survey of relevant users in the market. According to the results of the survey, the overall design of the system is given, and then the more prominent features are designed in more detail. Based on the needs of the system, this paper uses a questionnaire to survey the staff of elderly care institutions, community medical staff, and residents who need to support the elderly. The content of the survey mainly covers the urgent problems encountered in the work of caring for the elderly and whether the positioning system is important to elderly care services.
4 Analysis of the Survey Results of the Demand for the Smart Elderly Positioning System Based on GPS Technology 4.1 Analysis of the Results of the Positioning Needs Survey The statistics of the survey results of the question “Do you think the positioning of the elderly is important for elderly care services” are as follows:
As shown in Fig. 1, among the 218 questionnaires collected, 143 respondents believed that the positioning function is important for elderly care services, accounting for 66% of the sample; 44 people believed that positioning is not of great significance to elderly care services, accounting for 20% of the sample; and 31 people thought that the positioning function is of general significance for elderly care services, accounting for 14% of the sample.
Fig. 1. Positioning needs survey results
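The percentages above follow directly from the 218 valid questionnaires; a short check in Python:

```python
counts = {"important": 143, "not of great significance": 44, "general significance": 31}
total = sum(counts.values())  # 218 valid questionnaires
for label, n in counts.items():
    print(f"{label}: {n}/{total} = {n / total:.0%}")   # 66%, 20%, 14%
```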
4.2 Analysis of the Problems Encountered in the Service Work for the Elderly
In the questionnaire, the question "What are the problems you encounter in ordinary elderly care services?" was asked; the statistical results are as follows.
Table 1. Survey results of common problems in elderly care services
Problem           | Number
Depression occurs | 158
Easy to get lost  | 125
Easy to forget    | 203
Can not move      | 110
Other problems    | 87
As shown in Table 1 and Fig. 2, in the work of elderly care services, the most common problem for the elderly is that they are easy to forget, such as forgetting to take
medicine, adding clothes, eating, etc., so an elderly management assistance function is added to the intelligent elderly care positioning system, including diet planning, daily life management, etc. The survey results also show that 53% of the interviewees said that they had encountered the problem of the elderly getting lost, which is very dangerous for elderly people with poor physical condition and unclear thinking; therefore, the development of location services is very necessary. In addition, depression in the elderly is also a very common problem, and 67% of the respondents said that they had encountered it. The elderly often feel that they are a burden if they do not have work to do; in addition, their children are busy with work or other things and lack communication with them, and lacking such communication, the elderly become mentally depressed over time.
Fig. 2. Survey results of common problems in elderly care services
5 Conclusions With the development of Internet technology and the continuous progress of society, people pay more attention to the elderly. The traditional nursing model can no longer meet the higher and higher requirements of the elderly. In order to solve this situation, it is very necessary and practical to design and develop a smart elderly person position management system. The GPS-based smart elderly care positioning management system has mature development technology, low cost, and easy operation. It is the best choice for elderly care institutions and residents who need to support the elderly. This will also promote the innovative development of the elderly care industry and improve the quality of life of the elderly in their later years.
Acknowledgments. Major pre-research projects of Zhujiang College of Tianjin University of Finance and Economics in 2019 (No. ZJZD19-04).
References 1. Jin, X.: “Medical-and-care wisdom linkage” pension model research and exploration. Strategic Study Chin. Acad. Eng. 20(2), 92–98 (2018) 2. Cui, Y., Zhang, L., Hou, Y., et al.: Design of intelligent home pension service platform based on machine learning and wireless sensor network. J. Intell. Fuzzy Syst. 40(2), 2529–2540 (2021) 3. Guo, J., Fei, Y., Shi, J., et al.: An optimal weighting method of global positioning system (GPS) troposphere tomography. IEEE J. Sel. Top. Appl. Earth Observ. Remote Sens. 9(12), 5880–5887 (2017) 4. Cantis, S.D., Ferrante, M., Kahani, A., et al.: Cruise passengers’ behavior at the destination: investigation using GPS technology. Tour. Manage. 52(2), 133–150 (2016) 5. Hardy, A.L., Hyslop, S., Booth, K., et al.: Tracking tourists’ travel with smartphone-based GPS technology: a methodological discussion. Inform. Technol. Tour. 17(3), 1–20 (2017) 6. Sarker, M.H., Haque, S., Rahman, M., et al.: Integrated use of remote sensing, GIS and GPS technology for monitoring the environmental problems of Shyamnagar. J. Environ. Sci. Nat. Resour. 12(1–2), 11–20 (2021) 7. Cao, Y., Lin, Y., Wu, C., et al.: The roller compacting trajectory of roller based on GPS technology. Harbin Gongye Daxue Xuebao/J. Harbin Inst. Technol. 51(1), 65–70 (2019) 8. Spilker, H.S., et al.: Understanding the role of technology in care: the implementation of GPS-technology in dementia treatment. Ageing Int. 44(3), 283–299 (2019) 9. Cochrane, M.M., Brown, D.J., Moen, R.A.: GPS technology for semi-aquatic turtle research. Diversity 11(34), 1–16 (2019) 10. Chen, G.: Application of GPS technology in space geological survey. Arab. J. Geosci. 12(23), 1–5 (2019). https://doi.org/10.1007/s12517-019-4842-x 11. Milojkovic, B., Jovanovic, J.: Method of topographic inventarization and GPS technology in geospatial modeling. Glasnik Srpskog Geogr. Drustva 98(2), 59–82 (2018) 12. Mahind, R.N., Chautre, V.G.: Android based public transportation system using GPS technology. IARJSET 4(4), 61–63 (2017)
Design and Implementation of Enterprise Public Data Management Platform Based on Artificial Intelligence Zhongzheng Zhao(B) and Xiaochuan Wang University of Electronic Science and Technology of China, Chengdu 611731, Sichuan, China
Abstract. At present, the construction of the information management systems of domestic enterprises is becoming more and more complete, and data applications are developing from the stage of raw data collection and use to the stage of inter-system data sharing and refined data application. In the process of building data applications, it is necessary to form an enterprise data application sharing platform through a data support platform, providing a unified data standard system, data analysis themes, and data access interfaces for the data applications of the upper business systems. Therefore, this paper proposes a plan for building an enterprise information integration platform. The purpose is to integrate the business processes of the information systems, establish an important foundation for building a data support platform, realize centralized data storage and integration, reduce data redundancy, and enhance the value of data utilization. This paper first studies the status quo of enterprise public data management, analyzes the common problems in the construction of enterprise data platforms, and carries out business research and requirements analysis; it then designs and implements the supporting software of an enterprise public data management platform using a query customization tool, a data collection tool, and a data dictionary comparison tool. Finally, this paper conducts a data query and entry function test and an interaction test between system functions on the software. The results show that the enterprise public data management platform designed in this paper can perform data query and data entry normally, and the system functions interact successfully. Keywords: Data management · Data query · Data entry · Functional interaction
1 Introduction After entering the information age from the industrial age, we deeply feel that we are in the era of the knowledge explosion. From small input methods to large search engines, the effective use of knowledge can be seen everywhere. As various smart products are released one after another, it is believed that human beings are not far from a mature smart era, and all of this is due to data mining and knowledge accumulation [1]. A huge user group has created a huge amount of user data. The management and effective use of data
will directly affect how much wealth a company can dig out of this data mountain. Up to now, the vast majority of corporate public data in China has been managed and accessed through self-built application platforms, and there is a large amount of redundant data between enterprises and between departments [2]. For individual small companies this is still tolerable, but in large companies the massive work of data collection, sorting, query, and storage becomes a serious burden. In the current "big data" era, the requirements on enterprise data management models are getting higher and higher, and processing data faster, more conveniently, and more efficiently has become the primary task of enterprise data management. To achieve this goal, a complete and practical data management platform is indispensable, and it must be able to integrate the enterprise's own data, call public data, and provide other such functions [3]. The Chinese scholar Wu Shan believes that big data technology lays a foundation for strengthening the scientific decision-making ability and service level of public management, adapting to the individual needs of the people, and optimizing social risk management; it is necessary to build a big data mindset and establish a new big-data-centered model of collaborative management, with the government taking the lead and ordinary people as the main body [4]. Han Zhiyin believes that the coverage of network technology makes data analysis technology widely used; this technology benefits enterprise data management and to a certain extent promotes the continuous improvement of data management efficiency and work quality [5]. Wang Yan believes that today's enterprises face huge challenges: the complex and rich business management environment affects the judgment of decision makers and affects the enterprise's big data management and knowledge management; at the same time, the emergence and rapid development of the Internet have greatly advanced big data, which has become more and more complete, allowing companies to complete various tasks and face various challenges with ease [6]. With the continuous development of computers, the effective management of electronic data has become a global problem. In database systems, operations such as data calling and storage are more efficient and faster, laying the foundation for efficient data management. The best practice for the structure of a data platform is to optimize that structure and achieve cost savings and efficiency gains through rational platform construction. The amount of data increases year by year, and the interrelationship, application, and retrieval of the various types of data face severe tests; a powerful data management platform can effectively manage and call data [7]. Nowadays, the vigorous development of the big data industry has already influenced the design and development of data management platforms in many countries. There is still a certain gap between the research and development progress of domestic public data management platforms and the international cutting edge, but overall the lag is not large, and the major domestic platform design groups in particular have achieved good results in the past two years.
The overall structure of the platform is reasonable and complete, which can effectively integrate data and become the basic support platform for urban applications.
2 Design and Implementation of Enterprise Public Data Management Platform Based on Artificial Intelligence

2.1 Common Problems in the Construction of Enterprise Data Platforms

One of the goals of data platform construction is to achieve data integration. Data integration should be based on the investigation, integration, and optimization of existing business processes. At the beginning of the establishment of the data platform, business processes were not connected between business lines or between departments. Most of the application system data resides in the data platform, and most of the system functions are also implemented on it, so why is the data platform not widely used? The reason is that the demand for the data platform should come from management, but that demand is neither very clear nor urgent [8]. Many companies do not know what problem the data platform needs to solve, do not know its specific goals, and do not understand who its real users are. Providing authoritative public data is an important function of the data platform, and at the same time a "convention mechanism" is established for these public data. The "convention mechanism" refers to the agreed attributes of an entity, that is, the fields, types, lengths, and value ranges of its data table, for key entity data such as organizations, personnel, and wells. If these data cannot be maintained and managed effectively in time, the basic value of the data platform will not be realized [9]. The construction of the information system must comply with the relevant standards and follow the relevant procedures; otherwise, there will be conflicts in data standards. Business data is not the data of a single department but part of the entire business process of the branch. Only unified planning and unified design can guarantee the uniqueness of data sources.

2.2 The Overall Data Architecture of the Enterprise Public Data Management Platform

The purpose of establishing the public database is to establish a "convention mechanism" for public data. The public database is part of the integrated platform; its data sources are provided by the various professional systems under the integrated platform. The basic data layer of the data platform is provided by the public database and is composed of data extracted from other external business systems; the data in the public database can be managed as part of the master data of the data platform. The first layer, the business data layer, assembles a comprehensive set of business data for each source system, provides standardized and shared data according to the system logic of each source system, and presents entities in the form of business data, which solves the problem of crossing between multiple systems [10]. The second layer, the basic indicator layer, extracts indicator data from the business data layer according to the needs of future indicator analysis, refines the business data of the source systems, and forms the indicator data required by applications, completing the basis of the indicator logic calculation. The third layer, the business analysis layer, in accordance with the needs of specific application presentations, extracts indicator data and other presentation requirements from the underlying model to this layer,
refines the business data and basic indicator layer data and processes them comprehensively according to the needs of the application presentation, so as to form the data needed for presentation. The fourth layer, the business theme layer, simply processes the data of the business analysis layer according to the needs of specific applications and displays it through a presentation platform such as reports or charts [2]. When processing the data of the business analysis layer, the data needed for presentation should be formed according to the requirements of the specific application and then presented on the platform to fulfill the application requirements of the specific topic.

2.3 Business Research and Demand Analysis Specifications

In the process of business research and business analysis, it should be made clear which deliverables are produced, such as the business requirements specification, the business requirements analysis report, and the system design report. Whether these technical documents are complete and accurately described plays a very important role in project management and in later system development, maintenance, and upgrades. Therefore, at the beginning of software system research and development, it is necessary to clarify which document specifications apply. The software requirements development process describes the methods, steps, and work products of requirements development; it guides the requirements development work, and through its specification finally produces high-quality software requirements. In the software development planning process, the plan defining project activities is established and maintained, and project quality goals and performance goals are set, so that project resources can be arranged more reasonably, project costs controlled, and software projects completed on time [11]. In the design of the public data platform, one of the functions is unified authority management for each professional system under the integrated platform, with the purpose of realizing single sign-on and unified authentication. First, the user logs in on the authentication page of the authentication center; if authentication passes, a token is obtained, the application system credential is assembled from the token, and with this credential the user enters the authentication center home page, where the list of systems the user is authorized to access is displayed. When the user opens a subsystem, the subsystem first verifies whether a login credential exists; if so, it enters the subsystem authorization page directly. Otherwise it verifies whether a token exists; if not, the user is redirected to the login verification page of the authentication center, and if a token exists, the application system credential is obtained from it: on success the user logs in to the subsystem, and on failure the user is redirected to the login verification page of the authentication center.
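The single sign-on decision logic described above can be condensed into a short sketch. The following Python fragment is only an illustration of the token/credential checks in the order the paper describes them; the function names, session fields, and page names are hypothetical assumptions, not the platform's actual API.

```python
# Minimal sketch of the single sign-on flow described above.
# All names (verify step, session fields, page identifiers) are illustrative.

def exchange_token_for_credential(token: str, subsystem: str):
    """Hypothetical call to the authentication center; returns None on failure."""
    # In a real deployment this would be a request to the authentication center.
    return {"subsystem": subsystem, "token": token} if token else None

def enter_subsystem(session: dict, subsystem: str) -> str:
    """Decide where a user lands when opening a subsystem under unified authentication."""
    if session.get("credential"):                 # already holds an application credential
        return f"{subsystem}/authorized"          # enter the subsystem page directly
    token = session.get("token")
    if token is None:                             # no token: back to the login page
        return "auth-center/login"
    credential = exchange_token_for_credential(token, subsystem)
    if credential is None:                        # token invalid or expired
        return "auth-center/login"
    session["credential"] = credential            # store the credential and enter
    return f"{subsystem}/authorized"

print(enter_subsystem({"token": "abc123"}, "hr-system"))
```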
3 Design and Implementation of Supporting Software for the Public Data Management Platform

3.1 Query Customization Tool

How to provide convenient, flexible, and efficient data query services for users of public data management platforms has become an important issue for public data management
platforms. To solve this problem, a new framework is proposed, namely metadata-driven query customization. The customization tool is closely tied to the data dictionary and provides customized reports for the editing and dictionary tools. The query customization tool mainly realizes the customization, modification, deletion, query, and download of related tables in the public library. Database administrators use the customization tool to join related tables in the database, integrate the relevant information they want into views, and display these views as report graphics and other forms. The customization module selects the source data tables according to the organized data center dictionary and then selects the required source fields; in this process, one or more conditions are constructed to filter the data, so as to realize single-table or multi-table customization. First, the data administrator enters the customization interface and selects one of the operations: customize a report, modify a report, or delete a report (to delete a report, the checkbox in front of the report is ticked). The customization interface is designed as a wizard, which reduces the difficulty of customization and improves the efficiency of building a report. The first step of the customization process is to select the tables and query conditions to be customized, the second step is to select the field information, and the third step is to define the header information. When these three steps are completed, the metadata describing the user's needs is stored in the database. The user then enters the query interface, selects the report to be queried, and the query result is displayed directly.

3.2 Data Collection Tool

The data collection tool is aimed at completing data collection tasks, taking the public library as the target library. Data collection uses the view collection form to make collection targeted and flexible; data in the public library can be added, deleted, modified, queried, and reviewed to ensure collection quality. The collection views are built by the view customization tool, the professional navigation tree in the tool is maintained by the dictionary maintenance tool, and the views are organized by professional structure. The navigation module completes the view navigation task and helps the user find the collection view quickly and conveniently. The acquisition module is the core module of the data collection tool; it completes the task of entering data into the system and covers the data operations performed in the interface. When saving, the tool completes one round of collection according to the operations performed, and a data import function is provided for users who collect data with templates. The data positioning function helps users find the data they need to operate on among a large amount of data. Quality constraints mainly enforce the data range, the data width, and whether a field may be empty. The data audit module completes the last audit before data enters the system, which is carried out by the corresponding auditors.
The authority control module provides authority control for the entire data collection tool and opens the corresponding authority for different users; the authority here depends on the authority management tool. The collection method adopted by the data collection tool is view-customized collection: the corresponding view is customized for collection
according to the user's needs. This approach solves the problem of collection redundancy, so that users do not need to care about how much data the actual tables in the database contain; they only need to be clear about the data they themselves have entered. The formula used in the data collection process is:

d\Gamma = \frac{1}{2}\,\frac{d\left(\ln Z/Z_0\right)}{dx}\,dx    (1)
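To make the metadata-driven query customization of Sect. 3.1 more concrete, the sketch below assembles a SQL statement from a stored report definition, mirroring the three wizard steps (tables and conditions, fields, headers). The table, field, and report names are illustrative assumptions only, not identifiers from the actual platform.

```python
# A minimal sketch of metadata-driven query customization: the wizard steps
# store a report definition, and the query interface assembles SQL from it.

report_metadata = {
    "name": "well_basic_info",                                   # illustrative report name
    "tables": ["public.well", "public.organization"],            # step 1: source tables
    "conditions": ["well.org_id = organization.org_id",          # step 1: filter conditions
                   "well.status = 'active'"],
    "fields": ["well.well_id", "well.well_name",                 # step 2: fields
               "organization.org_name"],
    "headers": ["Well ID", "Well Name", "Organization"],         # step 3: header information
}

def build_query(meta: dict) -> str:
    """Assemble a SELECT statement from a stored report definition."""
    sql = f"SELECT {', '.join(meta['fields'])} FROM {', '.join(meta['tables'])}"
    if meta["conditions"]:
        sql += " WHERE " + " AND ".join(meta["conditions"])
    return sql

print(build_query(report_metadata))
```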
3.3 Data Dictionary Comparison Tool

As an auxiliary detection tool, this tool does not require the user to input information; its input sources are the logical data dictionary and the physical data dictionary. The tool compares, item by item, the table structure of each entity table in the logical data dictionary and the physical data dictionary, including attributes such as the English code, the character type, and whether the field may be empty, and outputs the differences between the logical and physical data dictionaries to the user in the form of a table. In the case of single-table comparison, the user directly sees on the web page the information of the table in the logical data dictionary and the information of the physical table with the same name. After clicking, the user sees the field information of the table in the logical dictionary displayed in the data dictionary display area of the page, and the field information of the physical table with the same name in the physical table display area. For the field information shown in the two display areas, a white row background means that the field information in the logical dictionary table is consistent with the field information of the physical table; a watermelon-red row background means that the field information in the logical dictionary table is inconsistent with the physical table field information, or that the field does not exist in the other display area. The page also integrates an export function for the comparison results of the current table. In a menu whose root directory is organized by profession with the profession-related tables underneath, the user can see the comparison of a single table only by clicking a leaf node. Because much table information is duplicated when the data dictionary is organized by profession, the data is redundant and inconvenient for users to use and operate. The navigation menu therefore contains two different trees: in the data dictionary tables, the data structures of the subject classification method and the professional classification method are different, so the construction methods also differ. When the data dictionary is organized by subject classification, each data dictionary table is unique and duplication is not allowed. The display process of the logical dictionary is the same as that of the physical dictionary, the comparison rules are the same, and the page functions differ only slightly. The formula used in the data dictionary comparison process is:

\Gamma(w) = \int \frac{1}{2}\, e^{-2j\beta x}\, \frac{d}{dx}\ln\!\left(\frac{Z}{Z_0}\right) dx    (2)
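The field-by-field comparison described above can be illustrated with a small sketch that diffs two dictionaries of field definitions and marks each field as consistent or not, in the spirit of the white and highlighted rows on the page. The field codes, types, and the tuple layout are illustrative assumptions.

```python
# A minimal sketch of the data dictionary comparison: each dictionary maps a
# field's English code to (character type, nullable); mismatching or missing
# fields are reported, mirroring the highlighted rows described above.

logical = {"WELL_ID": ("VARCHAR(32)", False), "WELL_NAME": ("VARCHAR(64)", True)}
physical = {"WELL_ID": ("VARCHAR(32)", False), "WELL_DEPTH": ("NUMBER(10,2)", True)}

def compare_dictionaries(logical: dict, physical: dict):
    rows = []
    for code in sorted(set(logical) | set(physical)):
        l, p = logical.get(code), physical.get(code)
        status = "consistent" if l == p else "inconsistent or missing"
        rows.append((code, l, p, status))
    return rows

for row in compare_dictionaries(logical, physical):
    print(row)
```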
4 System Function Test

4.1 System Data Query and Input Function Test

According to Table 1 and Fig. 1, the system was queried 20 times and data was entered 20 times. During the data query process, 14 queries returned correct results, 2 queries returned wrong results, and 4 queries returned no results. During the data entry process, 15 entries were correct, 3 entries were erroneous, and 2 entries failed. After the experiment, the reasons for the data query errors and data entry errors were analyzed; it was found that the query and entry errors were caused by human factors on the part of the experimenters and had nothing to do with the system, while the failed queries and entries were caused by network instability. Therefore, excluding human factors and network fluctuations, there is no problem with the data query function and data entry function of the system.

Table 1. System data query and input function test

Data query and entry situation    Number of times
Data query is correct             14
Data query error                  2
Data cannot be queried            4
Data entry is correct             15
Data entry error                  3
Data cannot be entered            2
Fig. 1. System data query and input function test
4.2 System Function Interaction Test
Table 2. System function interaction test

Interactive state        Percentage
Interaction is correct   93%
Interaction error        2%
Unable to interact       5%
According to Table 2 and Fig. 2, in the functional interaction test of the system, successful interactions between system functions accounted for 93%, interaction errors accounted for 2%, and failed interactions accounted for 5%. After the experiment, the reasons for the interaction errors and failures were analyzed; the interaction errors were caused by interface matching mistakes made by the system designers, and the failures were caused by network lag. Therefore, after eliminating improvable factors such as interface matching mistakes and network instability, the system can realize successful interaction among its functions.
Fig. 2. System function interaction test
5 Conclusions The construction of a public data management platform is an indispensable step in the process of enterprise information construction. Before platform construction, we must first clarify the scope of public data coverage. Second, we must design the enterprise data platform architecture. Public data construction is an indispensable part of the architecture. According to the actual situation of enterprise data construction, this paper first analyzes the problems existing in the data construction process, and proposes an interface
scheme between the unified construction system and the self-built system. This article applies the theory of public data management to practice. In the course of practice, we realized that to manage public data well, we must start from the reality of enterprise data construction and develop a variety of auxiliary tools. This article mainly proposes a method for establishing a public data platform for a specific enterprise. Due to the limited research time and the limited scope of knowledge acquired, there are still many shortcomings, which will need to be addressed in future work.
References
1. Han, T.: The practice of Internet + big data management platform in coal smart enterprises. Coal Technol. 38(1), 178–180 (2019)
2. Liu, Y.: Design of unstructured data management platform for large enterprises. China Sci. Technol. Inf. 602(07), 68–69 (2019)
3. Dang, F., Mei, L., Gao, F., et al.: Research on data governance technology of electric power enterprises based on life cycle management. Electr. Power Big Data 22(3), 66–70 (2019)
4. Wu, S.: Challenges of public management in the era of big data and exploration of innovative models. Operator 33(17), 44 (2019)
5. Han, Z.: The application of data analysis in enterprise data management. Acc. Study 130(04), 181–182 (2016)
6. Wang, Y.: Analysis of the impact of big data on enterprise management decision-making. Shangxun 160(06), 84–85 (2019)
7. Liu, J.: Thoughts on the practice and application of enterprise standard information public service platform. Public Stand. 330(19), 246–247 (2020)
8. Hua, Y., Wang, L.: Research and practice on data asset management methods of tobacco enterprises. J. China Tobacco Sci. 26(05), 123–131 (2020)
9. Li, X.: Analysis and research on master data management of traditional material trade enterprises. Railw. Purchas. Logist. 15(162(03)), 60–62 (2020)
10. Li, Y.: Analysis of the innovation path of enterprise management mode under the background of big data. Mod. Mark. (Bus. Edn.) 336(12), 82–83 (2020)
11. Li, D., Feng, S., Liu, G., et al.: Application research of equipment intelligent management platform for petroleum and petrochemical enterprises. China Equip. Eng. 443(07), 87–89 (2020)
Smart Micro-grid Double Layer Optimization Scheduling of Storage Units with Smog Factors Xiaojie Zhou1(B) , Zhenhan Zhou2 , Rui Yang1 , and Yang Xuan1 1 National Nuclear Power Planning and Design Institute Co., Ltd.,
Haidian District, Beijing 100089, China 2 Department of Electrical Engineering, Hebei University of Science and Technology,
Shijiazhuang, Hebei, China
Abstract. The smart micro-grid is an important part of the power system and the basis of the energy Internet concept, and its optimal operation scheduling problem is one of its core technologies. Since smog weather has seriously affected social production and life in recent years, environmental benefits have become an important consideration for grid operation. How to properly manage the operation of the smart micro-grid under the influence of smog and maximize the economic, technical, and environmental efficiency of the smart micro-grid has become an important research topic. This paper discusses the influence of the energy storage configuration on the scheduling operation of the smart micro-grid, with the smog effect as another variable factor. Two scheduling schemes, with and without the smog factor, are considered, and the model and an improved solution method for smart micro-grid double-layer optimization scheduling are given. The results of the different scheduling schemes are analyzed through actual examples so as to maximize the economic, technical, and environmental efficiency of the smart micro-grid. Keywords: Smart micro-grid · Optimized scheduling · Smog factor · Master-slave double-layer optimization · Improved particle swarm algorithm
1 Introduction Smart micro-grid refers to a small distribution system consisting of distributed power sources, energy storage devices, energy conversion devices, loads, and monitoring and protection devices [1–3]. In traditional studies, the stochastic scheduling model of wind power is obtained from a large amount of prediction data [4, 5]. Sunlight is highly dispersed, so photovoltaic output is treated as a fuzzy variable [6, 7]. In the smart micro-grid, the total supplied power and the total load demand change dynamically and are not balanced at every moment [8, 9]. Therefore, an energy storage system must be included in the smart micro-grid, and the energy storage method should be selected according to system stability [10].
A large number of energy storage devices take part in the operation of the smart micro-grid, but few studies treat the smart micro-grid and the energy storage as two independent decision layers. In addition to the operation of the smart micro-grid, the maximum number of adjustments of the battery should also be considered, so that the scheduling of the battery is also optimal [11, 12].
2 Smart Micro-grid Optimization Scheduling Without Storage Unit

2.1 Solution Process Without Considering Haze Influencing Factors

When the smog factor is not considered, the objective is to minimize the average purchase unit price of the load under the power balance constraint, with the maximum and minimum power limits as constraints. The total power supply fee for the whole day is:

\rho = \sum_{i=1}^{96}\left[0.52\left(P_{wi}^{d}+P_{wi}^{s}\right)+0.75\left(P_{vi}^{d}+P_{vi}^{s}\right)+C_{bi}P_{bi}-C_{si}\left(P_{wi}^{s}+P_{vi}^{s}\right)\right]\cdot\Delta t    (1)

In the formula, P_{wi}^{d} and P_{wi}^{s} are, respectively, the power with which the wind turbine supplies the load in the i-th period and the power it sells to the external grid, in kW; P_{vi}^{d} and P_{vi}^{s} are, respectively, the power with which the photovoltaic source supplies the load in the i-th period and the power it sells to the external grid, in kW. The load average purchase unit price is:

C_{av} = \rho \Big/ \sum_{i=1}^{96} P_{Di}\cdot\Delta t    (2)
2.2 Solution Process Considering Haze Influencing Factors

When the impact of the smog factor is considered, the "government financial subsidy rate γ_DER" is introduced to reduce the power generation cost of the various distributed power sources and improve renewable energy utilization:

\rho = \sum_{i=1}^{96}\left[(1-\gamma_{DER})\left(0.52\left(P_{wi}^{d}+P_{wi}^{s}\right)+0.75\left(P_{vi}^{d}+P_{vi}^{s}\right)\right)+C_{bi}P_{bi}-C_{si}\left(P_{wi}^{s}+P_{vi}^{s}\right)\right]\cdot\Delta t    (3)
Load average purchase unit price is the same as formula (2).
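A direct numerical reading of formulas (1)-(3) is given by the small sketch below: the all-day supply fee and the average purchase unit price over 96 periods with Δt = 0.25 h. The input series are placeholder constants for illustration, not the paper's case data.

```python
# A small numerical sketch of formulas (1)-(3): total all-day power supply fee
# and average purchase unit price over 96 periods (delta_t = 0.25 h).

def total_fee(Pw_d, Pw_s, Pv_d, Pv_s, Pb, Cb, Cs, gamma_der=0.0, dt=0.25):
    """Formula (1) when gamma_der = 0, formula (3) otherwise."""
    rho = 0.0
    for i in range(len(Pb)):
        gen_cost = 0.52 * (Pw_d[i] + Pw_s[i]) + 0.75 * (Pv_d[i] + Pv_s[i])
        rho += ((1.0 - gamma_der) * gen_cost
                + Cb[i] * Pb[i]                      # purchase cost
                - Cs[i] * (Pw_s[i] + Pv_s[i])) * dt  # revenue from sold energy
    return rho

def average_purchase_price(rho, P_load, dt=0.25):
    """Formula (2): rho divided by the total load energy."""
    return rho / (sum(P_load) * dt)

# Example with constant placeholder values over 96 periods.
n = 96
rho = total_fee([80]*n, [10]*n, [30]*n, [5]*n, [20]*n, [0.53]*n, [0.42]*n, gamma_der=0.1)
print(rho, average_purchase_price(rho, [120]*n))
```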
3 Smart Micro-grid Double Layer Optimization Scheduling with Storage Unit

A battery is used as the energy storage of the smart micro-grid, and the power exchanged between the smart micro-grid and the external grid is limited. The problem is again divided into two cases: considering and not considering the smog factor. A master-slave double-layer optimization model is established: the upper layer takes the lowest battery power supply cost as its objective, subject to the state-of-charge constraint, the maximum power constraint per unit time, and the charge and discharge constraints; the model simultaneously uses the constraint conditions of the lower-layer function [13, 14].
3.1 Solution Process Without Considering Haze Influencing Factors

Upper Optimization Model. The target function is:

\min C_{SOC} = \sum_{i=1}^{96}\left(C_{pb}P_{dis,i}Y_i - C_{op}P_{cha,i}X_i\right)\cdot\Delta t    (4)
In the formula, C_SOC is the battery energy storage cost; C_op is the price at which the smart micro-grid charges the energy storage system; C_pb is the price at which the energy storage system supplies power to the smart micro-grid, with C_pb = C_op + 0.2, in ¥/kWh; P_cha,i and P_dis,i are the charge and discharge power of the battery in the i-th period, respectively, in kW. The battery state of charge (SOC) S_i is:
S_i = S_0 + \sum_{t=1}^{T}\left(C_{pb}P_{dis,t}Y_t - C_{op}P_{cha,t}X_t\right)\cdot\Delta t \,\big/\, E_b    (5)
In the formula, S 0 is the initial battery charge state of the battery. 1) Battery charge state SOC constraint: 0.3 ≤ Si ≤ 0.95
(6)
2) Charge and discharge status constraint: Xi · Yi = 0
(7)
3) Energy status cycle start end constraint: ST = S0 = 0.4
(8)
4) Battery charge and discharge power limit:

0 \le P_{cha,i} \le 0.2E_bX_i,\quad 0 \le P_{dis,i} \le 0.2E_bY_i    (9)

In the formula, E_b is the battery capacity.

5) Charge and discharge count constraint:

\sum_{i=1}^{96}\left|X_{i+1}-X_i\right| \le N_1,\quad \sum_{i=1}^{96}\left|Y_{i+1}-Y_i\right| \le N_2    (10)
In the formula, N_1 and N_2 are the allowed numbers of charging and discharging switches of the battery, respectively.

Lower Model. The target function is:

\min C_{av} = \frac{\sum_{i=1}^{96}\left[0.52\left(P_{wi}^{d}+P_{wi}^{s}\right)+0.75\left(P_{vi}^{d}+P_{vi}^{s}\right)+C_{bi}P_{bi}-C_{si}\left(P_{wi}^{s}+P_{vi}^{s}\right)+P_{dis,i}C_{pb}Y_i-P_{cha,i}C_{op}X_i\right]\cdot\Delta t}{\sum_{i=1}^{96}P_{Di}\cdot\Delta t}    (11)
In the formula, C_av is the average purchase unit price of the load, in ¥/kWh; P_Di is the load power in the i-th period, in kW; X_i and Y_i are the charging state and discharging state of the battery, with X_i ∈ {0, 1} and Y_i ∈ {0, 1}; Δt is the unit time interval.
1) Power balance constraint:

P_{Di} + P_{cha,i} - P_{Li} = P_{wi} + P_{vi} + P_{dis,i},\quad P_{wi} = P_{wi}^{d} + P_{wi}^{s},\quad 0 \le P_{wi} \le P_{wi}^{c}    (12)

P_{Li} = P_{bi} + P_{wi}^{s} + P_{vi}^{s},\quad P_{vi} = P_{vi}^{d} + P_{vi}^{s},\quad 0 \le P_{vi} \le P_{vi}^{c}    (13)
In the formula, P_{wi}^{c} is the maximum wind turbine output in the i-th period, P_{vi}^{c} is the maximum photovoltaic output in the i-th period, and P_{Li} is the power exchanged between the smart micro-grid and the external network in the i-th period.

2) Exchange power constraint:

0 \le P_{Li} \le 150\ \mathrm{kW}
(14)
3) The constraints of the upper-layer model are also included.

The Establishment and Solving of the Double Layer Optimization Model. In this paper, the improved particle swarm optimization algorithm is used to solve the double-layer optimization model for the optimal scheduling scheme [15, 16]. On the basis of the conventional particle swarm algorithm, a penalty function is introduced, which transforms the constrained nonlinear programming problem into an unconstrained one. The power supply composition of the load in each time period is solved and plotted, and then the total all-day power supply fee and the average purchase unit price are calculated. The flow chart of the particle swarm algorithm is shown in Fig. 1. The all-day total power supply fee is:

\rho = \sum_{i=1}^{96}\left[0.52\left(P_{wi}^{d}+P_{wi}^{s}\right)+0.75\left(P_{vi}^{d}+P_{vi}^{s}\right)+C_{bi}P_{bi}-C_{si}\left(P_{wi}^{s}+P_{vi}^{s}\right)+P_{dis,i}C_{pb}Y_i-P_{cha,i}C_{op}X_i\right]\cdot\Delta t    (15)

The average purchase unit price is the same as formula (2).
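As a concrete reading of the battery-side constraints (6)-(10), the sketch below checks a candidate charge/discharge schedule for feasibility. It is only an illustration: the SOC update uses the conventional energy-balance form rather than (5) verbatim, and the thresholds follow the values stated in the text (SOC in [0.3, 0.95], start and end SOC 0.4, power limited to 0.2·E_b).

```python
# Feasibility check of the battery constraints (6)-(10) for lists of length 96.
# The SOC update is the standard energy-balance simplification (an assumption).

def battery_feasible(P_cha, P_dis, X, Y, Eb, S0=0.4, N1=8, N2=8, dt=0.25):
    S = S0
    for i in range(96):
        if X[i] * Y[i] != 0:                                # (7) no simultaneous charge/discharge
            return False
        if not (0 <= P_cha[i] <= 0.2 * Eb * X[i]):          # (9) charge power limit
            return False
        if not (0 <= P_dis[i] <= 0.2 * Eb * Y[i]):          # (9) discharge power limit
            return False
        S += (P_cha[i] * X[i] - P_dis[i] * Y[i]) * dt / Eb  # simplified SOC update
        if not (0.3 <= S <= 0.95):                          # (6) SOC range
            return False
    switches_cha = sum(abs(X[i + 1] - X[i]) for i in range(95))
    switches_dis = sum(abs(Y[i + 1] - Y[i]) for i in range(95))
    if switches_cha > N1 or switches_dis > N2:              # (10) switching-count limits
        return False
    return abs(S - S0) < 1e-6                               # (8) cycle start equals end

# Trivial usage: an all-idle schedule is feasible.
print(battery_feasible([0]*96, [0]*96, [0]*96, [0]*96, Eb=300))
```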
3.2 Solution Process Considering Haze Influencing Factors

After taking the smog factor into account, the "government financial subsidy rate γ_DER" is introduced into the lower-layer objective function of the double-layer model. The lower-layer model becomes:

\min C_{av} = \frac{\sum_{i=1}^{96}\left[(1-\gamma_{DER})\left(0.52\left(P_{wi}^{d}+P_{wi}^{s}\right)+0.75\left(P_{vi}^{d}+P_{vi}^{s}\right)\right)+C_{bi}P_{bi}-C_{si}\left(P_{wi}^{s}+P_{vi}^{s}\right)+P_{dis,i}C_{pb}Y_i-P_{cha,i}C_{op}X_i\right]\cdot\Delta t}{\sum_{i=1}^{96}P_{Di}\cdot\Delta t}    (16)

The upper-layer model is the same as formula (4).
The improved particle swarm algorithm with the penalty function is again used to solve for the optimal scheduling scheme, obtain the power supply composition of the load in each time period, plot the charts, and then calculate the total power supply fee and the average purchase unit price. The all-day total power supply fee is:

\rho = \sum_{i=1}^{96}\left(0.52P_{wi} + 0.75P_{vi} + C_{bi}P_{bi} + C_{op}P_{cha,i}X_i\right)\cdot\Delta t    (17)
The average purchase unit price is the same as formula (2). The flow of the improved particle swarm algorithm shown in Fig. 1 is, in outline:

1) Begin: randomly initialize the particle positions, velocities, pbest, gbest, and the iteration counter t = 0.
2) Evaluate the fitness value and constraint violation of each particle, and find the initial individual best position pbest and global best position gbest.
3) Initialize an empty external archive and deposit the Pareto-optimal solutions of the initial swarm into it.
4) Update the particle positions and velocities, setting a dynamic inertia weight.
5) Update the individual bests pbest using the penalty-function-based constraint handling method.
6) Find the Pareto-optimal solutions of the new swarm and merge them into the external archive: temporarily store the archived particles, clear the archive, find the Pareto-optimal solutions of the temporary set, and store them back into the archive.
7) If the number of Pareto-optimal solutions in the archive exceeds its capacity NC, calculate the crowding distance and truncate the archive.
8) Select gbest from the external archive and apply a small-probability mutation to the particles.
9) If the termination condition t = t_max is not met, set t = t + 1 and return to step 4); otherwise output the Pareto-optimal solutions and end.
Fig. 1. Flow chart of the improved particle swarm algorithm
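The penalty-function idea behind the flow chart can be condensed into a short single-objective sketch: constraint violations are added to the objective so the constrained problem is handled as an unconstrained one, with a dynamic inertia weight. This is only a simplified illustration (no Pareto archive, crowding distance, or mutation), and the objective and constraint in the usage example are toy placeholders.

```python
# Compact penalty-function particle swarm sketch with a dynamic inertia weight.
import random

def penalized(x, objective, constraints, penalty=1e4):
    # Sum of squared violations of g(x) <= 0 constraints, scaled by the penalty factor.
    violation = sum(max(0.0, g(x)) ** 2 for g in constraints)
    return objective(x) + penalty * violation

def pso(objective, constraints, dim, bounds, particles=30, iters=200):
    lo, hi = bounds
    pos = [[random.uniform(lo, hi) for _ in range(dim)] for _ in range(particles)]
    vel = [[0.0] * dim for _ in range(particles)]
    pbest = [p[:] for p in pos]
    pbest_val = [penalized(p, objective, constraints) for p in pos]
    g = pbest_val.index(min(pbest_val))
    gbest, gbest_val = pbest[g][:], pbest_val[g]
    for t in range(iters):
        w = 0.9 - 0.5 * t / iters                          # dynamic inertia weight
        for i in range(particles):
            for d in range(dim):
                vel[i][d] = (w * vel[i][d]
                             + 2.0 * random.random() * (pbest[i][d] - pos[i][d])
                             + 2.0 * random.random() * (gbest[d] - pos[i][d]))
                pos[i][d] = min(hi, max(lo, pos[i][d] + vel[i][d]))
            val = penalized(pos[i], objective, constraints)
            if val < pbest_val[i]:
                pbest[i], pbest_val[i] = pos[i][:], val
                if val < gbest_val:
                    gbest, gbest_val = pos[i][:], val
    return gbest, gbest_val

# Toy usage: minimize the sum of squares subject to x0 + x1 >= 1.
best, val = pso(lambda x: sum(v * v for v in x),
                [lambda x: 1.0 - (x[0] + x[1])], dim=2, bounds=(-5.0, 5.0))
print(best, val)
```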
4 Example Analysis

Example description. Original data: the district load forecast and the wind turbine and photovoltaic output values at 96 points per day. Calculation requirements: the calculation horizon is 24 h with a time interval of 15 min. The installed wind turbine capacity is 250 kW with a generation cost of 0.52 ¥/kWh; the installed photovoltaic capacity is 150 kW with a generation cost of 0.75 ¥/kWh. It is assumed that battery losses are neglected, the rated battery capacity is 300 kWh, the battery SOC operating range is [0.3, 0.95], the initial SOC is 0.4, the cost from charging to discharging is 0.2 ¥/kWh, and the total number of charges and discharges per day is 8. The electricity selling and purchasing prices are shown in Table 1.
Table 1. Sale electricity price

Time                                 0:00–7:00   7:00–10:00   10:00–15:00   15:00–18:00   18:00–21:00   21:00–0:00
Selling electricity price (¥/kWh)    0.22        0.42         0.65          0.42          0.65          0.42
Purchase electricity price (¥/kWh)   0.25        0.53         0.82          0.53          0.82          0.53
4.1 Smart Micro-grid Calculation Result Without Storage Unit

Solution Result Without Considering the Haze Influencing Factor. The power supply composition in each time period is shown as bar and line statistics in Fig. 2:
Fig. 2. Power supply constitution chart of each time period
Solution Result Considering the Haze Influencing Factor. The power supply composition in each time period is shown as bar and line statistics in Fig. 3:
Fig. 3. Power supply constitution chart of each time period
Result Analysis. The wind power, photovoltaic, and purchased electricity volumes in the two cases, without and with the smog factor, were statistically compared and then analyzed.
Fig. 4. Wind Power, Purchase Power, PV supply constitution chart of each time period
As can be seen from Fig. 4, after considering the influence of the smog factor, the renewable energy utilization rate increases significantly. Except for periods 0–28, the purchase price is the highest in the remaining periods, so wind power, photovoltaic, and purchased electricity should be selected in turn; the actual power supply in each period is therefore one of three combinations: wind power alone, wind power plus photovoltaic, or wind power plus external purchase. There are negative values in periods 28–55, indicating that the electricity selling price in these periods is higher than the wind power generation cost, so surplus wind power is sold to the grid. After the haze factor is taken into account, the utilization of renewable energy is significantly improved, mainly in periods 0–55 and 61–72, where the photovoltaic output is clearly higher, and the wind turbine output in periods 29–40 also increases.
4.2 Smart Micro-grid Calculation Result with Storage Unit

Calculation Results and Analysis Under Full Utilization of Renewable Energy. Through calculation, the power supply composition of the smart micro-grid with the storage unit in each time period under full utilization of renewable energy is as shown in Fig. 5.
Fig. 5. Power supply constitution chart of each time period
Three programs are considered. Program one: "no storage battery, unconstrained exchange, full utilization". Program two: "no storage battery, unconstrained exchange, abandonment of renewable energy allowed". Program three: "battery, constrained exchange, full utilization". Combined with the aforementioned calculation results, the wind power, photovoltaic, and purchased electricity in each time period under the three programs are statistically compared and then analyzed.
Fig. 6. Statistical map of wind power and PV under three programs
Fig. 7. Statistical map of purchase power under three programs
1) Power supply composition analysis of each time period: As Figs. 5, 6, and 7 show, comparing the cases in which renewable energy is fully utilized, after the battery is taken into account, battery supply is considered preferentially because its power supply cost is low; however, due to the limit on the number of battery charges and discharges, the battery-powered state appears only in certain periods. 2) Analysis of the total all-day power supply fee: Comparing the above calculation results, since the battery is considered and its cost from charging to discharging is low, the total power supply cost should decrease. However, the need to purchase electricity for charging during 0:00–7:00 raises the power supply cost in that period. In general, the total power supply fee is reduced thanks to the energy storage. 3) Load average purchase unit price analysis: Driven by the total all-day power supply cost, the average purchase unit price is also reduced by the energy storage. 4) Effect of battery participation in regulation: With the battery participating in regulation and the exchange between the smart micro-grid and the grid restricted, the wind power output in periods 1–40 increases significantly and the photovoltaic output increases in periods 20–55, because renewable energy is fully utilized and, taking price into account, renewable generation is used whenever possible. Although the battery participates in regulation, its capacity is limited; when wind power would otherwise be curtailed, electricity could be purchased from the external network instead, but the exchange power is limited, so within the constraint the relatively low-cost wind power is prioritized, followed by photovoltaic generation, and the purchased power is therefore reduced.
Calculation Results and Analysis Under the Condition of Abandoning Renewable Energy. The power supply composition in each time period is as follows.
Fig. 8. Power supply constitution chart of each time period
Three programs are considered. Program one: "no storage battery, unconstrained exchange, abandonment of renewable energy allowed". Program two: "battery, constrained exchange, full utilization". Program three: "battery, constrained exchange, abandonment of renewable energy allowed". Combined with the aforementioned calculation results, the wind power, photovoltaic, and purchased electricity in each time period under the three programs are statistically compared and then analyzed.
Fig. 9. Statistical map of wind power and PV under three programs
1) Power supply composition analysis of each time period: It can be seen from Figs. 8, 9, and 10 that after the battery is taken into account, battery supply is given priority due to its lower power supply cost; however, due to the limit on the number of battery charges and discharges, the battery-powered state appears only in certain periods. After abandonment of wind and solar power is allowed, the electricity purchase price is highest in the periods 10:00–15:00 and 18:00–21:00, so battery discharge is given priority there, followed by renewable energy supply; and since the costs of wind and photovoltaic generation during 0:00–7:00 are higher than the purchase price, considering that
Fig. 10. Statistical map of purchase power under three programs
the battery has a limit on the number of charge and discharge times, it is preferable to purchase electricity from the external grid in that period. 2) Analysis of the total all-day power supply fee: Because the battery is considered and its cost from charging to discharging is very low, when abandoning wind and solar is allowed, the power supply structure in each period can be chosen according to the relationship between the actual purchase price and the renewable generation cost, and when the selling price is higher than the generation cost the micro-grid can sell electricity, so the total all-day power supply cost is reduced. However, because the exchange power between the micro-grid and the grid is restricted, the purchase and sale of electricity are restricted, and compared with the unconstrained situation the power supply cost is increased. 3) Load average purchase unit price analysis: When the abandonment of renewable energy is allowed, the purchased electricity certainly increases during the periods when the purchase price is low; however, because of the restriction on the exchange power and the need to sell electricity to the grid when the selling price is higher than the generation cost, the purchase and sale of electricity are restricted compared with the previous case. 4) Renewable energy utilization and the impact of battery participation in regulation: Because a low-cost battery regulation method is added, the utilization rate of renewable energy is not high, and it is lowest during 0:00–7:00; however, because the cost of wind power generation is lower than that of photovoltaic generation, the utilization rate of wind power is higher than that of photovoltaic power. After the haze factor is taken into account, the utilization rate of renewable generation increases significantly, mainly in the periods 7:00–10:00, 15:00–18:00, and 21:00–24:00.
5 Conclusions When the smart micro-grid is not equipped with energy storage, and there is no restriction on the power exchange between the smart micro-grid and the grid, and the abandonment of wind and solar is allowed. By drawing a statistical diagram of the power supply composition in the two cases for analysis, it can be seen that after taking into account
the influence of haze factors, the utilization rate of renewable energy has been improved and the environmental benefits enhanced. When the smart micro-grid is equipped with a battery as energy storage and the power exchange between the smart micro-grid and the grid is restricted, with whether renewable energy is fully utilized taken as a variable, the changes in the dispatch plan before and after considering the haze factor are discussed. The results show that when the battery participates in regulation, the power supply cost is reduced; after the influence of haze is taken into account, the operating cost of the smart micro-grid is further reduced and the utilization rate of renewable energy increases. Acknowledgements. Thanks to Rui Yang and Yang Xuan for their careful guidance during the writing of this article. Thanks to Zhenhan Zhou for their technical opinions on this article. Thanks to my husband for his love and support.
References
1. Xu, Y., Wang, C.: Optimal dispatch of micro-grid with electric vehicles considering the benefits of state of charge. J. North China Electr. Power Univ. (Nat. Sci. Ed.), 1–7, 28 May 2021. http://kns.cnki.net/kcms/detail/13.1212.TM.20201225.1348.002.html
2. Liu, G., Bai, X., Diao, T.: Optimal dispatch of biogas-wind-light integrated energy micro-grid considering gas-power network architecture. Power Syst. Clean Energy 36(12), 49–58 (2020)
3. Huang, H., Cai, L., Xiao, T., Ye, Y., Gao, X.: Research on multi-objective optimal dispatching of grid-connected micro-grid based on NSGA-II improved GSO algorithm. J. Sichuan Univ. Light Chem. Technol. (Nat. Sci. Ed.) 33(06), 24–31 (2020)
4. Li, Q., Huang, L., Qiu, Y., Sun, C., Fu, W., Chen, W.: Two-stage robust dispatch optimization of AC/DC hybrid micro-grid with EVs. J. Southwest Jiaotong Univ., 1–8, 28 May 2021. http://kns.cnki.net/kcms/detail/51.1277.U.20201215.1733.008.html
5. Wu, C., Sui, Q., Lin, X., Wang, Z., Li, Z.: Scheduling of energy management based on battery logistics in pelagic islanded micro-grid clusters. Int. J. Electr. Power Energy Syst. 127 (2021)
6. Liang, X.: Research on optimal dispatching method of active distribution network based on micro-grid interaction. Nanjing University of Posts and Telecommunications (2020)
7. Wu, J.: Optimal dispatch of micro-grid economy considering the uncertainty of wind power photovoltaic output. Nanjing University of Posts and Telecommunications (2020)
8. Zhang, Y.: Research on micro-grid optimal dispatching method considering intermittent energy uncertainty. Nanjing University of Posts and Telecommunications (2020)
9. Tang, W.: Research on cooperative optimal dispatching of multi-micro-grids oriented to distribution network. Nanjing University of Posts and Telecommunications (2020)
10. Wang, J.: Research on distributed economic dispatching method of micro-grid under cyber attack. Nanjing University of Posts and Telecommunications (2020)
11. Zhang, J.: Micro-grid economic emission dispatch considering demand side response index. J. Northeast Dianli Univ. 40(06), 1–10 (2020)
12. Li, Y., Ge, Y., Xu, Z., Tang, C., Zhu, C.: Research on micro-grid optimal dispatching considering electric vehicle charging queuing. J. Anhui Univ. Eng. 35(06), 26–32 (2020)
13. Wei, B.: Research on optimal dispatching of energy storage system in micro-grid. Nanchang University (2020)
14. Chen, J.J., Qi, B.X., Rong, Z.K., Peng, K., Zhao, Y.L., Zhang, X.H.: Multi-energy coordinated micro-grid scheduling with integrated demand response for flexibility improvement. Energy 217 (2021)
15. Hein, K., Xu, Y., Gary, W., Gupta, A.K.: Robustly coordinated operational scheduling of a grid-connected seaport micro-grid under uncertainties. IET Gener. Transm. Distrib. 15(2) (2020)
16. Cheng, Q., Yan, Y., Liu, S., Yang, C., Chaoui, H., Alzayed, M.: Particle filter-based electricity load prediction for grid-connected micro-grid day-ahead scheduling. Energies 13(24), 6489 (2020)
Research on Indoor Location Algorithm Based on Cluster Analysis Fenglin Li(B) , Haichuan Wang, Jie Huang, Hanmiao Shui, and Junming Yu Wuhan Technology and Business University, Wuhan 430000, China
Abstract. Positioning systems based on received signal strength indication (RSSI) are easily affected by the environment. This paper proposes an RSSI signal processing optimization strategy based on a Gaussian mixture filter (GMF) built on clustering-algorithm analysis. Unknown nodes are accurately located indoors by a four-sided weighted centroid positioning algorithm with optimized received signal strength and distance correction, and Bluetooth 4.0 beacon nodes are used for field experiments. The experimental results show that the algorithm can effectively improve the ranging accuracy and the positioning accuracy of the system, which is 34.6% higher than that of the traditional weighted centroid algorithm, and the average positioning error is less than 0.5 m, which can meet the requirements of indoor positioning accuracy. Keywords: Received signal strength (RSSI) · Cluster analysis · Weighted centroid algorithm · Distance correction
1 Introduction

Indoor positioning refers to the use of wireless communication, base station positioning, inertial navigation positioning, and other technologies to achieve position determination in an indoor environment, so as to realize position monitoring of indoor personnel and objects. People's daily life and work mostly take place indoors, and with the rapid development of the mobile Internet and intelligent devices, indoor positioning technology has become the bottleneck of O2O, smart home, indoor robot, and other applications; it therefore has an urgent demand and broad application prospects [1]. At present, mainstream indoor positioning technology can be divided into two categories: non-ranging and ranging-based positioning algorithms. The former mainly estimates position through the connectivity between nodes and multi-hop path information, with high hardware requirements, and mainly includes the centroid algorithm, the DV-Hop algorithm, the approximate point-in-triangulation test algorithm (APIT), and so on. Ranging-based algorithms measure the distance, azimuth, and other information of adjacent sensor nodes and use trilateration, triangulation, maximum likelihood estimation, and similar methods to establish a mathematical model for estimating the node position, so as to obtain the actual
location information of the unknown node. Non-ranging location algorithms remain at the stage of theoretical research and are mostly evaluated in simulation environments, so many uncertain factors have to be assumed, and these assumptions are often not satisfied in practice; ranging-based algorithms are therefore usually used in practical applications. In practice, due to the influence of multipath attenuation, environmental noise, reflection and diffraction of the transmitted signal, and antenna gain, the traditional RSSI ranging method has notable defects, and the algorithm's tolerance to ranging error is limited. If the subsequent positioning algorithm is built on such ranging, the final positioning result will inevitably contain large errors. Moreover, most existing localization algorithms are tested in MATLAB, NS2, and other simulation software and rarely in real environments, which cannot fully reflect their true performance, especially for RSSI-based schemes in complex indoor environments.
2 RSSI Optimization Strategy of Gaussian Mixture Filter Based on Clustering Algorithm

2.1 Wireless Signal Transmission Model

The principle of RSSI ranging is that, as the distance increases, the strength of the wireless signal attenuates. According to the shadowing model widely used for wireless signal transmission, the attenuation has a logarithmic characteristic:

RSSI = RSSI_0 - 10n\lg\frac{d}{d_0} + \varepsilon    (1)

Among them, d_0 is the reference distance (generally 1 m); RSSI_0 is the received signal strength at distance d_0; d is the actual distance; RSSI is the received signal strength at distance d; n is the wireless signal attenuation factor, which is closely related to the environment; ε is a Gaussian random variable with zero mean [2]. In actual wireless signal transmission, the signal is easily affected by environmental factors such as multipath, diffraction, obstacles, and changes in temperature and humidity. In order to make the RSSI ranging model reflect the propagation characteristics of the indoor experimental environment as truly as possible, this paper uses the least squares method to carry out quadratic regression fitting of the RSSI values measured at different distances and obtains the logarithmic path loss model that best fits the environment, so as to improve the ranging accuracy in the experimental environment.

2.2 RSSI Signal Processing Optimization Strategy

When RSSI is used for ranging, due to interference in the indoor environment, the multipath effect, NLOS, and other influences, RSSI values at the same distance from the
Fig. 1. Distribution of RSSI sampling values
same beacon often fluctuate greatly, as shown in Fig. 1, and these outliers interfere with the positioning accuracy. In this paper, an RSSI signal processing optimization strategy based on a Gaussian mixture filter built on clustering-algorithm analysis is designed to filter out the noise caused by the multipath effect and non-line-of-sight propagation and to eliminate the errors that signal scattering, multipath, and other factors introduce into the experimental results, so as to improve the ranging accuracy and the accuracy of the positioning information [3]. The Gaussian mixture filter optimization strategy samples the RSSI values of the same Bluetooth beacon node at the same distance and clusters the sampled data with the expectation-maximization (EM) algorithm. That is, a Gaussian mixture model (GMM) is used to analyze the sampled data, decomposing it into several Gaussian probability density components; then, according to the Akaike information criterion (AIC), the clustering model best suited to the distribution of the RSSI samples is selected to optimize the sample values, and the mean of the optimized sample values is calculated. According to practical test experience, the Gaussian mixture filter model includes at most three Gaussian probability density components, that is, the filter model of the sample distribution covers three cases: one component, two components, and three components. In actual experiments, a sample distribution with more than three components is a small-probability event, and the effect of using at most three components is satisfactory.
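The filtering strategy just described can be sketched compactly with scikit-learn: fit Gaussian mixtures with one to three components to the RSSI samples of one beacon at one distance, pick the model with the lowest AIC, and average over the dominant component. This is only an illustration under those assumptions; the sample values are invented for the example.

```python
# Gaussian-mixture filtering of RSSI samples with AIC-based model selection.
import numpy as np
from sklearn.mixture import GaussianMixture

def gmf_rssi(samples, max_components=3):
    x = np.asarray(samples, dtype=float).reshape(-1, 1)
    models = [GaussianMixture(n_components=k, random_state=0).fit(x)
              for k in range(1, max_components + 1)]
    best = min(models, key=lambda m: m.aic(x))        # AIC-based model selection
    main = int(np.argmax(best.weights_))              # dominant Gaussian component
    labels = best.predict(x)
    kept = x[labels == main]                          # discard outlier components
    return float(kept.mean())                         # optimized RSSI estimate

rssi = [-62, -61, -63, -62, -75, -61, -60, -74, -62, -63]
print(gmf_rssi(rssi))
```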
3 Four Edge Weighted Centroid Localization Algorithm Based on Optimized Weight and Distance Correction

Based on the wireless signal propagation model, the RSSI value can be used to calculate the distance between a Bluetooth beacon node and the mobile terminal. When the mobile terminal obtains the distance values of at least three Bluetooth beacon nodes, the centroid location algorithm can be used to estimate the position and obtain the location information of the mobile terminal [4]. This paper analyzes the traditional centroid algorithm and the weighted centroid algorithm, optimizes the weight factor, and on this basis corrects the distances used by the positioning algorithm, which improves the adaptability and accuracy of the positioning algorithm in practical applications.

3.1 Traditional Centroid Location Algorithm

The centroid algorithm is a simple range-free algorithm based on connectivity, which locates nodes according to the received signal strength and connectivity. Assuming that the coordinates of the three beacon nodes are (x_A, y_A), (x_B, y_B) and (x_C, y_C), the unknown node coordinates given by the traditional centroid algorithm are:

x = (x_A + x_B + x_C)/3,  y = (y_A + y_B + y_C)/3   (2)

This centroid algorithm needs only simple equipment, is easy to implement, and is little affected by the environment. However, it ignores the differing influence of the beacon nodes on the unknown mobile terminal; that is, when the RSSI values received from the beacon nodes differ, the weight of each beacon node's coordinates in determining the unknown node should also differ.

3.2 Weighted Centroid Location Algorithm

The weighted centroid algorithm uses the RSSI value to calculate the weight of each beacon node's contribution to the unknown node. Because the RSSI value is easily affected by environmental interference and irregular attenuation of the electromagnetic signal, converting it into a distance inevitably introduces errors, so the perceived intersection of the beacon nodes is not a point but a region, and the unknown node lies within that region. As shown in Fig. 2, there are three known beacon nodes O_1(x_1, y_1), O_2(x_2, y_2) and O_3(x_3, y_3), and the distances between the unknown node and the three beacon nodes are r_1, r_2 and r_3. According to the mathematical model relating the distances and coordinates from the unknown node to the beacon nodes, the pairwise intersections of the circles are found, giving the intersection points A(x_A, y_A), B(x_B, y_B) and C(x_C, y_C); the unknown node lies in the triangle ABC. The weighted centroid algorithm introduces a weight into each position estimate to prevent information flooding (that is, to balance the influence of each beacon node's information on the centroid estimate). The weight is related to distance, and the distance factor reflects the influence of a beacon node on the unknown node (the farther a beacon node is from the unknown node, the smaller its share in the position estimate); each vertex is determined by two distances.
Fig. 2. Trilateral localization algorithm
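As a small illustration of the weighted centroid idea just described, the sketch below assumes the three intersection points A, B, C of Fig. 2 have already been computed; the weight 1/(d_i + d_j) used for each vertex is one common choice, not necessarily the authors' exact weight factor.

def weighted_centroid(points, distance_pairs):
    """points: [(xA, yA), (xB, yB), (xC, yC)] triangle vertices;
    distance_pairs: the two beacon distances that determine each vertex."""
    weights = [1.0 / (di + dj) for di, dj in distance_pairs]
    wsum = sum(weights)
    x = sum(w * px for w, (px, _) in zip(weights, points)) / wsum
    y = sum(w * py for w, (_, py) in zip(weights, points)) / wsum
    return x, y

# Example: weighted_centroid([A, B, C], [(r1, r2), (r2, r3), (r1, r3)])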
3.3 Indoor Positioning Algorithm Based on Distance Correction

Owing to the interference of multipath attenuation, obstacles and other noise, the signal strength often fluctuates greatly, so the distance value converted from the RSSI value carries a large error. As a result, the distances between the unknown node and the beacon nodes obtained by RSSI ranging can deviate far from the actual distances, so the three circles may fail to intersect pairwise and the weighted centroid localization algorithm becomes invalid. To solve this problem, this paper proposes a weighted centroid localization algorithm based on distance correction, which can still locate the node accurately when the circles do not intersect and improves the fault tolerance, adaptability and accuracy of the localization algorithm in indoor environments [5]. The principle of the RSSI distance correction used in this paper is as follows: the radii of the two circles are increased with the distance factor as the weight so that the circles intersect and form an overlapping area; this method keeps the ratio of the enlarged radii equal to the ratio of the original radii, that is, the weight of the distance factor on the unknown node remains unchanged. The weighted centroid trilateral localization algorithm is then used to obtain the unknown node coordinates. When two circles are separated from each other, that is, there is no intersection between them, as shown in Fig. 3, the radius-increasing scheme is given by Eq. (3):

r_1' = r_1 + r_1(d − r_1 − r_2)/(r_1 + r_2),
r_2' = r_2 + r_2(d − r_1 − r_2)/(r_1 + r_2)   (3)

where r_1 and r_2 are the radii of O_1 and O_2, and d is the distance between the centers of the two circles. The traditional weighted centroid trilateral positioning algorithm uses three beacon nodes, and its positioning results still contain large errors. To further improve the positioning accuracy of unknown nodes, based on theoretical analysis and practical application, this paper proposes a four-edge ranging positioning algorithm, that is, a fourth beacon node is added to the trilateral positioning algorithm.
Fig. 3. Two circles separated
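A compact sketch of the radius-correction rule of Eq. (3): when two circles with radii r1, r2 and centre distance d do not intersect (d > r1 + r2), both radii are enlarged in proportion to their original sizes, which preserves the radius ratio while closing the gap.

def correct_radii(r1, r2, d):
    if d <= r1 + r2:
        return r1, r2                      # circles already intersect; no correction needed
    gap = d - r1 - r2                      # shortfall to be distributed between the two radii
    r1_new = r1 + r1 * gap / (r1 + r2)
    r2_new = r2 + r2 * gap / (r1 + r2)
    return r1_new, r2_new                  # r1_new / r2_new == r1 / r2 is preserved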
4 Positioning Experiment and Result Analysis

The system consists of four Bluetooth beacon nodes and a mobile intelligent terminal, a Raspberry Pi. The core controller of the Bluetooth beacon node is TI's CC2540 Bluetooth chip, which uses Bluetooth 4.0 technology and has the characteristics of low power consumption, low delay and long transmission distance, so it can meet the requirements of high-precision indoor positioning to the greatest extent. The Raspberry Pi mobile terminal is an ARM-based microcomputer board that supports Linux. Through its Bluetooth module, the Raspberry Pi receives the location-related UUID, RSSI value and power value (the RSSI value at 1 m from the terminal) sent by the Bluetooth beacon nodes, and the unknown node coordinates can then be located with the weighted centroid positioning algorithm. The experimental principle of positioning based on the Bluetooth beacon nodes and the Raspberry Pi is shown in Fig. 4.
Fig. 4. Schematic diagram of positioning experiment based on Bluetooth beacon nodes and Raspberry Pi
4.1 Establish Wireless Signal Transmission Model

In this experiment, a 6 m × 8 m conference room is selected as the experimental site. The conference room is a real working environment with tables and chairs, and the objects in it produce multipath effects on the wireless signal. First, the ranging experiment was carried out: the Raspberry Pi was fixed in the room, and Bluetooth beacon nodes were placed at 0.5 m, 1 m, …, 8 m from the Raspberry Pi, giving a total of 16 test points. At each test point the Raspberry Pi sampled the RSSI values; the Bluetooth beacon node broadcast every 0.1 s, each test point was sampled for 20 s, and 200 RSSI samples were obtained. Gaussian filtering and mean filtering were then used to process the collected data, RSSI values with large fluctuations were removed, and more accurate RSSI values were finally obtained. After the 16 test points were measured, the wireless signal transmission model suited to the experimental environment was obtained.

4.2 RSSI Signal Processing Optimization Experiment and Analysis

The test program in this paper is based on the BlueZ protocol stack under Linux. It uses the Raspberry Pi intelligent terminal, the Bluetooth beacon nodes and a host program developed in C as the development platform, and a 3 m × 7 m open corridor is selected as the experimental site. The AB side is a wall (whose inner metal structure easily causes multipath effects), and the CD side is an open field of view. Four Bluetooth beacon nodes are placed at the coordinates (0,0), (7,0), (7,3) and (0,3) of the experimental area, RSSI is sampled and filtered by the Raspberry Pi mobile terminal, and the optimized RSSI value is obtained. The corresponding RSSI distance value is then calculated from the logarithmic distance loss model fitted in Sect. 4.1.
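A minimal sketch of fitting the log-distance model of Eq. (1) to the filtered measurements by least squares (d0 = 1 m), assuming numpy; the 16 distance/RSSI pairs are placeholders for the measured test points, and a simple first-order fit is shown rather than the paper's exact quadratic regression.

import numpy as np

def fit_path_loss(distances_m, rssi_values, d0=1.0):
    x = 10.0 * np.log10(np.asarray(distances_m, dtype=float) / d0)   # regressor 10*lg(d/d0)
    n, rssi0 = np.polyfit(x, np.asarray(rssi_values, dtype=float), 1)  # slope = n, intercept = RSSI_0
    return n, rssi0

def rssi_to_distance(rssi, n, rssi0, d0=1.0):
    return d0 * 10 ** ((rssi - rssi0) / (10.0 * n))                   # invert Eq. (1), ignoring noise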
5 Conclusion

In view of the multipath effects that the complex indoor environment imposes on wireless signals, this paper proposes a distance-corrected indoor positioning algorithm optimized by cluster analysis and uses a Gaussian-mixture-model optimization strategy based on cluster analysis to improve the ranging accuracy. On the basis of the traditional positioning algorithm, and considering the failure of the trilateral positioning algorithm caused by RSSI ranging errors, a distance correction scheme is used to improve the relative accuracy of positioning, that is, to reduce the positioning error. The experimental results show that the proposed algorithm reduces the positioning error in the indoor environment.

Acknowledgements. Intelligent Mobile Call Platform, 2017 Hubei Province College Students Innovation and Entrepreneurship Training Program, Project No. 201713242022Y.
References 1. Lan, W., Gu, C., Luo, W., Sang, F.: Research on WiFi indoor location algorithm based on quadtree. Naval Electron. Eng. 41(05), 104–108 (2021)
2. He, J., Guo, K., Zhang, M.: Indoor localization algorithm based on improved IMM motion model. Sens. Microsyst. 40(05), 150–153 (2021) 3. Luo, Y., Hou, L.: Indoor location algorithm for multi-directional packet WiFi. Sens. Microsyst. 40(04), 139–141 + 145 (2021) 4. Liu, T., Du, X., Huang, K., Li, J.: Overview of high precision indoor positioning algorithm and technology. Electron. Test (05), 73–75 (2021) 5. Wan, X.: Research on indoor location algorithm based on feature transfer. Mod. Inf. Technol. 5(02), 44–48 (2021)
Construction of Tourism Management Information System Based on Django Ping Yang(B) Department of Jewelry and Tourism Management, Yunnan Land and Resources Vocational College, Kunming, China
Abstract. With the improvement of material living standards, domestic and outbound tourism are growing rapidly. In order to improve the quality of tourism services and grasp business information in a timely, accurate and fast way, it is necessary to build a tourism management information system. A tourism management information system stores and processes information related to the tourism business and is a kind of business decision support system. With the development of information technology, the upgrading of e-commerce and the emergence of third-party secure payment providers such as Alipay and PayPal, tourism is increasingly in need of informatization and online services. This will speed up the construction of domestic tourism management information systems, and such systems will be widely used in the tourism industry. Keywords: Django · Distributed database · Dijkstra algorithm
1 Technical Route and Research Plan

1.1 MVC Design Pattern

MVC (model–view–controller) is a widely used design pattern in software engineering. It divides a web application into three levels, namely the model level, the view level and the control level, so as to separate the input processing, interface display and control flow of the application. This division of responsibilities separates the view layer from the model layer and achieves loose coupling within the web application.

1.2 Django Framework

The Django framework is an open-source web framework written in Python. It encourages rapid development of web applications and can be used to build dynamic websites and manage website data interfaces. It follows the MVC design pattern; in fact, it adopts the MVT pattern, namely model, view and template. The main goal of the Django framework is to develop
database-driven websites easily and quickly. It pays great attention to the reusability of models and templates and follows the "don't repeat yourself" (DRY) principle. The Django framework mainly includes: an object-relational mapper (ORM) that connects the data model with a specific database and provides a simple, easy-to-use database API; a URL dispatcher based on regular expressions; a view system for handling HTTP requests; an inheritable template system; an independent, lightweight web development server; a form serialization and validation system that checks form validity and can easily generate the corresponding form from a model instance; a cache system with the required granularity; sessions for checking user permissions; and a built-in internationalization system for quickly developing multi-language websites [1].

1.3 MVC Pattern in Django Framework

In the Django framework, when a URL is requested, control is routed to the specified method. After the business logic has been processed, the page is rendered by calling the template system. The Django design team calls this implementation the MVT (model–view–template) framework. django.db.models.Model implements the data model that needs to be created; this data model defines the object properties stored in the database and provides a rich API for accessing data objects. The Django framework also provides a powerful template parsing function and outputs the page response through the page-handling functions. In this way, web application designers can pay more attention to the composition of the view layer.
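The following is a minimal, hedged illustration of one MVT round trip (model, view, URL route); the app name "tour", the Route model and its fields, and the template path are hypothetical placeholders, not taken from the paper's actual system.

# tour/models.py
from django.db import models

class Route(models.Model):
    name = models.CharField(max_length=100)
    price = models.DecimalField(max_digits=8, decimal_places=2)

# tour/views.py
from django.shortcuts import render

def route_list(request):
    routes = Route.objects.all()                           # ORM query instead of raw SQL
    return render(request, "tour/route_list.html", {"routes": routes})

# project urls.py
from django.urls import path
urlpatterns = [path("routes/", route_list)]                # dispatch the URL to the view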
2 Business Analysis

2.1 Requirement Analysis

a) The system can handle normal tourism business, including customer booking, cancellation, payment, complaints and other tourism transactions.
b) The database should be able to accommodate all the travel route information, employee information, tour group information, tourist information, scenic spot information, travel documents, etc. operated by the tourism company, so as to meet the needs of query and update.
c) Each branch handles local business, such as booking or cancelling routes and modifying travel routes; the head office is responsible for issuing new tourist routes, managing employees, and checking the integrity and consistency of global data [2].
d) The system can provide reference schemes for the company to plan new tourism routes and may provide reference schemes for customers to book tickets (a minimal shortest-path sketch is given after this list).
e) The system should improve the work efficiency of the tourism company, ensure the accuracy and standardization of information, reduce the workload of the relevant personnel, manage and evaluate employees scientifically, plan the company's business volume reasonably, and meet the requirements of tourism market standardization.
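Requirement (d) and the "Dijkstra algorithm" keyword suggest shortest-path route planning; the sketch below runs Dijkstra over a hypothetical city graph whose edge weights stand in for travel cost or time.

import heapq

def dijkstra(graph, source):
    """graph: {city: [(neighbour, cost), ...]}; returns the minimal cost to every reachable city."""
    dist = {source: 0}
    heap = [(0, source)]
    while heap:
        d, u = heapq.heappop(heap)
        if d > dist.get(u, float("inf")):
            continue                                  # stale heap entry
        for v, w in graph.get(u, []):
            if d + w < dist.get(v, float("inf")):
                dist[v] = d + w
                heapq.heappush(heap, (d + w, v))
    return dist

# routes = {"Shanghai": [("Nanjing", 300), ("Hangzhou", 180)], "Nanjing": [("Hangzhou", 280)]}
# dijkstra(routes, "Shanghai")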
2.2 Some Business Processes

To guarantee the validity and integrity of order data, the system queries the booking customer information and travel route information according to the valid information of a new order and judges whether the order already exists in the records. If it does, it is a repeated order; otherwise, the business logic judges whether the order has passed approval. If it has not, the order is marked as invalid; otherwise, it is marked as valid. At the same time, the information entered in the order is inserted into the booking customer information table, tour group information table and other data tables in the physical database.

2.3 System Use Case Diagram and Data Flow Diagram

According to the company's business processes and the roles assigned to employees, this paper first analyzes the operation cases related to administrators, employees (sales staff, drivers and tour guides) and customers. The use case diagrams are shown in Fig. 1 and Fig. 2.
Fig. 1. Administrator use case diagram
Fig. 2. Customer use case diagram
In the user use case diagrams, only users with the admin identity are allowed to delete data table information, which can avoid mistaken deletion of data to a certain extent.
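Returning to the order-handling flow described in Sect. 2.2, the following hedged sketch shows how it might be expressed with the ORM; the Customer, Route and Order model names, their fields, and the order_is_approved helper are hypothetical placeholders for the system's real business objects.

def register_order(customer_id, route_id, order_info):
    customer = Customer.objects.get(pk=customer_id)           # look up the booking customer
    route = Route.objects.get(pk=route_id)                    # look up the travel route
    duplicate = Order.objects.filter(customer=customer, route=route,
                                     date=order_info["date"]).exists()
    if duplicate:
        return "duplicate order"                              # repeated order: reject it
    order = Order(customer=customer, route=route, **order_info)
    order.valid = order_is_approved(order)                    # hypothetical approval check
    order.save()                                              # insert into the physical tables
    return "valid" if order.valid else "invalid"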
3 Database Implementation

3.1 Django Framework Development Environment

After installing the Django framework, check whether it is installed successfully and which version is present; this can be done by importing the django module and printing the current version information. The Django framework provides the django-admin.py script to manage web applications, and its startproject command quickly creates a web application. For a project named webtour the command is: django-admin.py startproject webtour [3]. A Django website can use the startapp subcommand of manage.py to create multiple applications; three script files are generated automatically: __init__.py, models.py and views.py. The models.py script defines the data model saved in the database. The data model can be defined as a set of related objects (classes, attributes and the relationships between them). By modifying models.py we can not only create the data model but also add many data types that cannot be defined directly as MySQL database fields, which reduces the workload of database programming to a certain extent. The Django model executes Python statements instead of SQL statements: Django uses models.py to run the SQL statements in the background and describes the results with Python data types. Even some high-level concepts that SQL cannot express directly, such as one-to-many and many-to-many mapping relationships, can be handled [4].

3.2 Distributed Requirements Analysis

The analysis is based on the company's data tables and business analysis tables. The frequency table gives the frequency of each application operation on each branch site; in fact, orders in different regions do not affect one another's operation. The applications A (reservation), B (cancellation) and C (modification) are considered. The information tables directly accessed by these applications include the customer information table, travel route information table and tour group information table; the tables that can be modified include the customer information table and the tour group information table. The frequency table is shown in Table 1. The basic partition table of the tour group entity takes the branch company of the group as the division criterion, and the predicate selectivity gives the percentage of tour group tuples with each possible partition attribute value.

The major of tourism management focuses on the orientation of management. If you work as a waiter or tour guide after graduation, the position is too low; at most, in order to train us, the teacher will let us go through this hard internship period during the junior-year internship, the purpose being to let you understand the basic details of the profession, that is, to start from the foundation, and then you can be convinced by
Table 1. Frequency table (operation frequency per branch area)

Operation | Shanghai | Nanjing | Hangzhou
Reserve   | 20000    | 120000  | 95000
Cancel    | 1980     | 200     | 240
Modify    | 3300     | 360     | 400
virtue. But it’s not about positioning you right here. Most of them are excellent students of this major. After graduation, they are engaged in hotel management. Of course, the more advanced ones are private tour route customizers. (only professional enjoyment and customized planning that you can’t imagine. Of course, it’s so easy to open a special travel agency.) Now people’s living standards are gradually improving, and there will be more and more demand for warm-hearted service industries such as relaxation holidays and pension services. Tourism management is a hot, profitable and fun major. Of course, it still depends on everyone’s mentality. At most, you can’t carry it and want to give up. With such a promising future, how can it be so easy? Tourism is a kind of fun, and management is an art. I hope you can face up to this major, but it’s still improving step by step, but you can’t deny how much will change in ten years. Tourism management major needs: solid tourism literacy as the foundation, excellent humanities and social science theory as the background, excellent humanistic feelings as the guidance. The most important thing is that you love this major. My major is tourism management. At first, when I was a freshman, I didn’t choose tourism management because of the score limit of college entrance examination. At that time, tourism management was the best major in our university. Later, when I was a sophomore, I finally entered the dream of tourism management, The original intention of learning tourism management is very simple. Before college, he seldom went out of the house. The “tourism” brought by this major seems to meet this dream. The simple dream of a boy in his early 20s is to “travel all over China and around the world”; Later, the facts tell us that specialty and industry are two things in a strict sense, but it also hinders my love for this major. Especially after four years of professional study and the experience of entering a leading company in the industry after graduation, I have come into contact with more different hotel groups, travel agencies, DMC, tourism bureaus, OTAs, airlines, start-ups and other rich business forms, Still insist that this industry has a bright future; Many students of this major will feel that this major is very watery and its positioning is very awkward. It is not a technical direction with strong theoretical nature. It is similar to it / chemistry. There is a professional threshold for their jobs, and they are not really dry goods for learning marketing and operation.
4 Conclusion

Through the construction of the whole system, the template-based portal and management pages, digital storage of information reports, instant release of tourism information, scientific planning of tourism routes, and real-time statistics on employees' business performance are realized. At present, the Django framework is hardly used domestically to build tourism management information systems, but its advantages are obvious: the system is easy to maintain, adapts easily to changing requirements, and can release tourism information quickly, while also providing an efficient and secure communication mechanism.
References
1. Principles and Paradigms of Distributed Systems (translated by Xin Chunsheng and Chen Zongbin), pp. 287–334. Tsinghua University Press, Beijing (2008)
2. Engelbrecht, A.P.: Computational Intelligence: An Introduction, 2nd edn. (translated by Tan Ying et al.), pp. 274–312. Tsinghua University Press, Beijing (2010)
3. Dorigo, M., Gambardella, L.M.: Ant colony system: a cooperative learning approach to the traveling salesman problem. IEEE Trans. Evol. Comput. 1(1), 53–66 (1997)
4. Elmasri, R., Navathe, S.B.: Fundamentals of Database Systems, 4th edn. (translated by Zhang Ling, Yang Jiankang, Wang Yufei), pp. 356–376. People's Posts and Telecommunications Press, Beijing (2009)
Analysis and Research of Artificial Intelligence Technology in Polymer Flooding Scheme YinPing Huo(B) Daqing Oilfield Limited Company, No. 7 Oil Production Company, Heilongjiang 163517 Daqing, China [email protected]
Abstract. In recent years, the research of polymer flooding technology at home and abroad has made great progress. The research and application of polymer flooding mechanism, geological conditions and polymer flooding scheme are introduced in detail. This paper focuses on the discussion of the geological conditions and mechanism of polymer flooding, and then puts forward the polymer technology scheme suitable for China’s oil displacement. Keywords: Polymer · Oil displacement · Condition · Scheme · Artificial intelligence
1 Introduction

At present, there is no consensus on the mechanism of polymer flooding, but it is generally believed that the mechanism is relatively simple compared with other chemical flooding methods: the polymer increases the viscosity of the injected water and reduces the water-phase permeability of the reservoir, thereby improving the water–oil mobility ratio, adjusting the injection profile, expanding the swept volume and improving crude oil recovery [1]. The polymer injected into the reservoir plays two important roles: one is to increase the viscosity of the water phase, and the other is to decrease the permeability of the reservoir owing to polymer retention. As a result of these two effects, the mobility of the polymer solution in the reservoir is obviously reduced. Therefore, after polymer is injected into the reservoir, two basic mechanisms act: one is to control the water-phase mobility in the water-flooded interval, improve the water–oil mobility ratio, and improve the actual oil displacement efficiency of that interval; the other is to reduce the total fluid mobility in the highly permeable water-flooded interval, narrow the difference in waterline advance speed between high- and low-permeability intervals, adjust the water absorption profile, and improve the actual sweep efficiency. The basic mechanism of polymer flooding is therefore to improve oil displacement efficiency and expand the swept volume, and it works in two main ways. The first is flow diversion: the polymer entering the high-permeability layer increases the seepage resistance of the water phase, resulting in a pressure difference
from the high permeability layer to the low permeability layer, which makes the injection fluid flow around and into the medium and low permeability layers, expanding the swept volume of the injection water drive. Second, profile control. Because the polymer improves the water oil mobility ratio and controls the seepage of the injection fluid in the high permeability layer, the injection fluid advances forward at a more uniform speed in the high and low permeability layers, improves the water absorption profile in the heterogeneous layer, and achieves the effect of enhancing the oil recovery.
2 Feature Extraction and Classification

The gray-level co-occurrence matrix is a common method of transforming an image into matrix data according to a specific rule; it is also known as the gray-level spatial dependence matrix. The moment of inertia and the energy are calculated as follows [2].

Moment of inertia (reflecting image clarity and texture depth):

f_1 = Σ_{i=0}^{L−1} Σ_{j=0}^{L−1} (i − j)² P(i, j, d, θ)   (1)

Energy (i.e. the angular second moment, reflecting the uniformity of the image gray distribution and the texture thickness) (Fig. 1):

f_2 = Σ_{i=0}^{L−1} Σ_{j=0}^{L−1} [P(i, j, d, θ)]²   (2)
Fig. 1. Pixel pair of gray level co-occurrence matrix
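A small numpy illustration of Eqs. (1)–(2): the co-occurrence matrix P is built here for horizontally adjacent pixel pairs (displacement d = 1, angle 0°), which is one common choice; the input image is assumed to be an integer array with gray levels in [0, levels).

import numpy as np

def glcm(image, levels):
    """Normalized co-occurrence counts for horizontally adjacent pixel pairs."""
    P = np.zeros((levels, levels))
    for i, j in zip(image[:, :-1].ravel(), image[:, 1:].ravel()):
        P[i, j] += 1
    return P / P.sum()

def texture_features(P):
    i, j = np.indices(P.shape)
    f1 = np.sum((i - j) ** 2 * P)      # moment of inertia, Eq. (1)
    f2 = np.sum(P ** 2)                # energy (angular second moment), Eq. (2)
    return f1, f2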
Polymer flooding improves the driving pressure difference in the rock, so that the injection fluid can overcome the capillary resistance produced by small pores and enter
into small pores for oil displacement. Its function is mainly displayed in three aspects. First, adsorption: because a large amount of polymer is adsorbed on the pore walls, the flow capacity of the water phase is reduced while the oil phase is affected much less, so under the same oil saturation the relative permeability of the oil phase is higher than in water flooding. Second, viscoelasticity: owing to the viscoelasticity of the polymer, the drag exerted by the water on the residual oil is enhanced, so the residual oil flows again and is carried out by the polymer solution. Third, increasing the driving pressure difference: the polymer increases the driving pressure difference inside the rock, so that the injected fluid can overcome the large capillary resistance caused by the small pores and enter them for oil displacement.
3 Suitable Conditions for Polymer Flooding

3.1 Screening of Polymers

In polymer flooding, the complexity of the formation rock and fluids affects the flooding effect. For oilfield application, the polymer must be selected comprehensively in terms of both oil displacement effect and economy and must match the reservoir properties. Therefore, a polymer applied in an oilfield should meet the following conditions: water solubility; non-Newtonian behaviour with an obvious viscosity-increasing effect; good chemical stability; shear stability; resistance to adsorption; good transmissibility in porous media; and a wide supply at a low price, so that it can be used widely in the oilfield at low cost. Few polymers meet all of these requirements at the same time, so in applications the appropriate polymer should be selected according to the reservoir conditions [3].

3.2 Reservoir Geological Characteristics Suitable for Polymer Flooding

Not all reservoirs are suitable for polymer flooding, and even among suitable reservoirs the production increase can differ greatly. According to many years of research in the Daqing Oilfield, the geological characteristics of reservoirs suitable for polymer flooding cover four aspects. The first is reservoir temperature: at high temperature the stability of polyacrylamide is destroyed, and the polymer is easily degraded and further hydrolyzed, which greatly reduces its effect; practice in the Daqing Oilfield has proved that a temperature range of 45–70 °C is suitable. The second is water quality: high salinity of the water lowers the viscosity of the polymer solution and the residual resistance coefficient, and thus reduces the recovery of polymer flooding; numerical simulation shows that the most favourable salinity of formation water is 1600–30000 mg/L, with surface water salinity lower than 1200 mg/L. The third is the viscosity of the crude oil: the basic principle of polymer flooding is to reduce the water–oil mobility ratio, and a crude oil viscosity that is too large or too small is not conducive to enhanced oil recovery [4]; research shows that a crude oil viscosity in the range of 0–100 is more suitable for polymer flooding. The fourth is reservoir heterogeneity: polymer flooding is suitable for heterogeneous sandstone reservoirs developed by water flooding, and the permeability variation coefficient of the reservoir should be neither too large nor too small, generally best between 0.6 and 0.8.
4 Technical Scheme of Polymer Flooding

4.1 Polymer Flooding is Mainly Used in Water Drive Reservoirs

At this stage the oilfield has entered the high water cut period: the contradictions between layers are prominent, production is difficult, and production costs rise year by year. In addition, polymer flooding technology is more complex than water flooding, dynamic monitoring is difficult, and the investment and production costs are high, so the polymer flooding development plan must be scientific and comprehensive on the premise of fully understanding the reservoir conditions. Oilfield development is a long-term, continuous process, and its effect is closely related to reservoir conditions, well pattern types and production methods. Therefore, when preparing a polymer flooding scheme, the geological characteristics of the reservoir and the development status of the water flooding well network should be studied. By studying the geological characteristics of the reservoir in the development block, the development status, connectivity, heterogeneity, physical properties and fluid properties of the reservoir are identified; by analyzing the development history of the block, we can further understand its development status, which helps in analyzing the water flooding behaviour and the distribution of the remaining oil.

4.2 Reservoir Description is the Key to Analyzing the Reservoir and Its Development Status

First, describe the development of the oil layers, understand their sedimentary environment, compile the core analysis data of representative cored wells, and analyze the heterogeneity and types of the oil layers. Second, divide the sedimentary units reasonably and calculate the average physical parameters of the reservoir by describing the distribution of the sand bodies and analyzing the distribution of reservoir thickness and permeability. Third, analyze the water-flooded condition of the oil layers and clarify the plane and vertical water-flooded state and characteristics of the block. Fourth, by analyzing the production status of the oil and water wells in the development zone, recognize the production capacity and injection capacity of each well from a macro perspective and provide guidance for formulating the plan. The preparation of a polymer flooding technical programme is a targeted research process based on a full understanding of the reservoir conditions, an analysis of the current production situation and a comprehensive consideration of various factors. Its contents include: the general situation of reservoir geology and development, i.e. the geological overview and a brief history of reservoir exploitation; reservoir description, including the development status of the reservoir, the division of sedimentary units, the distribution of remaining oil and the water-flooded status; the formation combination and well pattern deployment for polymer flooding; the production and injection status of the oil and water wells before polymer injection; and the optimization of the polymer injection parameters, including the optimization of relative molecular weight, dosage, solution
concentration and injection speed of polymer; Determination of injection and production mode: including the choice of layered injection and step-by-step perforation, injection mode of polymer slug and injection mode of protective slug before and after polymer solution slug; Determination of implementation plan of polymer flooding; prediction of mining index; prediction of polymer economic benefits; implementation requirements of the scheme.
5 Analysis of AI ExNet Based on Convolutional Neural Network

After the wavelet transform, the wavelet coefficient map is processed as the input layer or sampling layer. In the structure of a convolutional neural network, the convolution layers extract image features; each convolution layer contains multiple feature maps, and the weights and biases of the neurons within one feature map are shared, while different feature maps have different weights and biases, that is, different convolution kernels, so different features are extracted with different kernels. Maximum pooling takes the maximum value of the feature points in a neighbourhood, which retains the strongest pixel response of the original image, that is, its most significant part, while average pooling takes the average of the pixel values in the region. The purpose of pooling is to obtain the edge shape of the object and reduce the number of parameters the network has to learn, that is, to abstract the features further at the macro level; after pooling, the main statistical features are retained, which have a lower dimension than the full set of extracted features and also help prevent over-fitting. After the parameters of the network structure are set, the network is trained and tested. In the training process, the initial learning rate is set to 0.0001, the number of images of each class in the image library is about 20–30, the mini-batch size used for each training iteration is 10 (this subset is used to evaluate the gradient of the loss function and update the weights), the maximum number of training epochs is set to 10, and the stochastic gradient descent with momentum (SGDM) optimizer is used. During training the accuracy rises and finally reaches 93.9%, while the mini-batch loss continues to decline.
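A hedged PyTorch sketch of the training set-up described above (SGD with momentum, learning rate 1e-4, mini-batch 10, 10 epochs); the network layout, input size, number of classes and the random stand-in data are illustrative assumptions, not the authors' exact architecture or dataset.

import torch
import torch.nn as nn
from torch.utils.data import DataLoader, TensorDataset

num_classes = 4                                    # hypothetical number of image classes
images = torch.randn(120, 3, 64, 64)               # stand-in for the wavelet-coefficient images
labels = torch.randint(0, num_classes, (120,))
loader = DataLoader(TensorDataset(images, labels), batch_size=10, shuffle=True)

model = nn.Sequential(
    nn.Conv2d(3, 16, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
    nn.Conv2d(16, 32, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
    nn.Flatten(), nn.Linear(32 * 16 * 16, num_classes))   # 64x64 input halved twice by pooling
criterion = nn.CrossEntropyLoss()
optimizer = torch.optim.SGD(model.parameters(), lr=1e-4, momentum=0.9)  # the "SGDM" optimizer

for epoch in range(10):                            # maximum number of training cycles
    for x, y in loader:
        optimizer.zero_grad()
        loss = criterion(model(x), y)
        loss.backward()
        optimizer.step()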
6 Conclusion Polymer flooding has been widely used to enhance oil recovery, which has made an important contribution to the increase of oil production in China. This paper is a supplementary study of polymer flooding technology. The method of optimizing polymer enhanced oil recovery is becoming more and more important in oil exploitation. In the future production process, reasonable methods should be used to achieve the maximum benefit of production.
References 1. Song, W.: Application of artificial seismic wave in road and Bridge Engineering. Heilongjiang Sci. Technol. Inf. (36) (2016)
2. Cao, Z., Xue, S., Wang, X., et al.: Selection of seismic wave and damping ratio in seismic analysis of spatial structures. Spat. Struct. 14(3), 3–8 (2008) 3. Zhou, D.: Dynamic response analysis of energy dissipation braced frame structures under seismic waves. Suzhou University of Science and Technology, Suzhou (2019) 4. Yang, P., Li, Y.M., Lai, M.: Selection control index of input seismic wave by structural time history analysis method. Chin. J. Civ. Eng. 33(6), 33–37 (2000)
Application Research of 3D Ink Jet Printing Technology for Special Ceramics Based on Alumina Ceramics Guozhi Lin(B) Department of Design and Art, Quanzhou Vocational Institute of Arts and Crafts, Quanzhou 362500, Fujian, China
Abstract. Ceramic inkjet printing technology is a new technology which applies computer aided manufacturing (CAM) to ceramic forming. Under the control of computer, three-dimensional ceramic body is produced by multi-layer printing and superposition. It has a good application prospect in the manufacture of complex monomer ceramics, ordered composition composites, solid oxide fuel cell and so on. This paper summarizes the application of ceramic ink jet printing technology in alumina ceramics. Keywords: 3D printing · Ink jet printing · Alumina ceramic
1 Introduction

With the rapid development of computer technology, techniques that use computer-aided direct processing to manufacture products of various complex shapes have made great progress. In the early 1990s, H. Marcus and others proposed the new idea of solid free-form fabrication (SFF). In this method, the complex three-dimensional components designed in CAD are sliced and segmented directly by computer software, and an executable pixel-unit file is generated; the ceramic powder to be formed is then rapidly shaped into the corresponding physical pixel units by an external device driven by the computer, and the required three-dimensional components are formed directly by superposing these units [1]. Ceramic inkjet printing technology may be applied to the manufacture of solid oxide batteries, multilayer microcircuits, ordered ceramic composites, and small, highly complex monolithic ceramic components.
2 Three Dimensional Ink Jet Ceramic Printing Technology

2.1 Concept of 3D Printing Technology

In the past, a 3D printer was often called a "rapid prototyping machine". The 3D model is recognized by 3D software on the computer and converted to STL (triangular mesh format); the conversion is carried
out, and then the placement orientation and cutting path are determined with the slicing software, which performs the layer-cutting work and constructs the related supporting structures. Finally, the solid filament of moulding material is heated to a semi-molten state by the nozzle and extruded, and the part is built up from bottom to top, one layer at a time, to form the final entity. The working principle of a 3D printer is basically the same as that of a traditional printer: it consists of control components, mechanical components, a print head, consumables and media, and the printing principle is the same. Before printing, a complete three-dimensional model is designed on the computer and then printed out. The working steps are as follows: create the object with CAD software (or use a ready-made model, such as an animal model, a figure or a miniature building), copy it to the 3D printer through an SD card or USB flash drive, and after the print settings are made the printer can print it out.

2.2 Ink Jet Printing Technology

Inkjet printing technology developed from three-dimensional printing technology combined with the principle of the inkjet printer used for text output. In three-dimensional printing technology, the printed medium is the binder, while the ceramic powder itself is placed on the printing table in a loose state; obviously, this method can only prepare porous ceramics. Inkjet printing uses a printer to deposit ceramic ink drop by drop in a space controlled by the computer and thus completes the ceramic forming. Brunel University has long led the research on this technology. In 1995, Blazdell et al. first produced 10–110 layers of ZrO2 green bodies on different substrates with a continuous inkjet printer; because the ink did not volatilize well, the upper layers loaded the lower layers and the ideal shape was not obtained. In 1996, Teng et al. carried out a comprehensive optimization study of ink deposition and viscosity, and in 1997 Teng et al. printed clear English words with a continuous inkjet printer. Because a continuous inkjet printer requires the ink to have sufficient conductivity, enough conductive salt must be added; the presence of the conductive salt worsens the suspension and dispersion of the ceramic powder in the ink and thus greatly reduces the ceramic powder content of the ink [2]. In 2000, the world's first industrial ceramic decorative inkjet printer came out. An inkjet printer generally consists of a nozzle system, a nozzle cleaning system, an ink supply system, a conveying system, a cleaning system, an operation and control system, etc.
The nozzle system is composed of the nozzles, the nozzle matrix and the nozzle plate; the ink supply system includes the ink cartridge, filter, ink pump and pipelines; the operation and control system includes the gray-level controller, display, router, chip interface and other driving electronics together with the control software. These systems are the key core technology of the inkjet printer. The principle of the inkjet printer is shown in Fig. 1.
Fig. 1. Principle of inkjet printer
3 Alumina Ceramics

Ceramics are materials with a long history: when people discovered and used fire, ceramic materials came into being, and since then the manufacturing technology of ceramics has progressed with the development of society. In the twentieth century, in order to meet the rapid development of science and technology, a variety of new materials were produced, including new ceramic materials known as special ceramics. Compared with traditional ceramics, special ceramics differ greatly in raw materials, production technology and many other aspects, completely breaking through the concept and category of traditional ceramics. Traditional ceramics are fired with clay as the main raw material, while special ceramics are made from synthetic ultra-fine, high-purity inorganic compound powders and refined by advanced moulding methods, modern firing technology and precision processing. They are knowledge- and technology-intensive products with high performance and high added value, widely used in emerging and cutting-edge technologies, and have attracted the attention of developed countries [3]. Alumina ceramic is one of the special ceramics and can be divided into high-purity and ordinary types according to the alumina content. High-purity alumina ceramics are ceramic materials with an Al2O3 content of more than 99.9%; their sintering temperature is as high as 1650–1990 °C and their transmission wavelength is 1–6 µm. They are generally used instead of platinum crucibles for melting glass, can be used as sodium lamp tubes because of their light transmittance and resistance to alkali metal corrosion, and can serve as integrated circuit substrates and high-frequency insulation materials in the electronics industry. Ordinary alumina ceramics are classified into 99 porcelain, 95 porcelain, 90
porcelain and 85 porcelain according to the Al2O3 content. Sometimes materials with 80% or 75% Al2O3 content are also classified as ordinary alumina ceramics. The production process is shown in Fig. 2.
Fig. 2. Flow chart of alumina ceramic production process
4 Three Dimensional Ink Jet Printing Technology of Alumina Ceramics

4.1 Alumina Ceramic Ink Jet Printing

Like inkjet printing of traditional ceramics, alumina ceramic inkjet printing is based on the principle of the inkjet printer: material droplets are sprayed from the nozzle and solidified layer by layer along a given path. The method is as follows. First, according to the 3D CAD model obtained by 3D scanning (or design) of the part, the model is divided into a series of units according to a certain rule; usually it is divided along the z-axis into two-dimensional thin layers of a certain thickness, and the injection commands are generated under program control. After layer-by-layer spray curing and stacking, the three-dimensional solid component is obtained. Inkjet printing technology has the following advantages: the forming speed is six times faster than other processes, the equipment is easy to operate and suitable for an office environment, and coloured and multiphase solid structures can be formed. Its disadvantage is that specialized research and development is needed for the jetting fluid (ink) [4]. Because of the poor plasticity of ceramic powder, the forming performance depends mainly on the adhesive. According to the way the ceramic powder is bonded, the forming methods can be divided into the following categories:

(1) Curing. The ceramic powder can be solidified with a peroxide initiator such as ammonium persulfate and a catalyst such as tetramethylethylenediamine; adhesives that react with water, such as gypsum polymer and water glass, can also be used.
(2) Colloidal moulding. With colloidal silica as the main component, the ceramic powder is bonded and formed. This forming method needs acid catalysis, for example adding a small amount of citric acid to the ceramic powder to initiate the curing reaction of the adhesive.

4.2 Ceramic Ink

The quality of ceramic inkjet printing depends on the performance of the ceramic ink, which must match the printer for the best output. Because ceramic powder has a high density and nano ceramic powder easily forms aggregates, ceramic ink is generally composed of ceramic powder, dispersant, binder, solvent and other auxiliary materials. The particle size of the ceramic powder should be less than 1 µm, the particle size distribution should be narrow, and there should be no strong agglomeration between the particles. The dispersant helps the ceramic powder distribute evenly in the solvent and ensures that the particles do not agglomerate before spraying; ink with poor dispersibility blocks the printer nozzle because the ceramic particles are unevenly dispersed in the droplets, so the reasonable selection and dosage of the dispersant are very important. Dispersants are mainly water-soluble and oil-soluble polymers, benzoic acid and its derivatives, and polyacrylic acid and its copolymers. The binder guarantees, after the solvent volatilizes, that the ceramic body has enough bonding strength for transfer operations. The solvent is the carrier that transports the ceramic particles from the printer to the substrate while controlling the drying time; it should be volatile enough to ensure rapid drying and allow multilayer deposition, and at the same time have low viscosity and be compatible with the other components. The ceramic ink used for continuous jet printing needs a small amount of conductive salt so that the ink is conductive enough for the formed droplets to be charged and deflected by the electric field, allowing them to be printed to the position designated by the computer.
5 Conclusion

Ceramic inkjet printing technology is a new ceramic forming method, the product of combining modern computer technology with the preparation technology of nano ceramic powder suspensions. It offers easy automation, high speed and low cost (especially for small-batch products) that the traditional ceramic preparation process cannot match. Ceramic inkjet printing technology is still in its infancy and many technical problems remain to be solved; it is believed that it will receive more and more attention and become another research hotspot in the field of materials.
References 1. Marcus, H.L., Beaman, J.J., Barlow, J.W., Bournell, D.L.: Am. Ceram. Soc. Bull. 69(6), 1030– 1031 (1990) 2. Zhou, Z., Wang, S., Wu, M.: Research progress of modern forming technology of ceramics. China Ceram. 43(12), 3–7 (2007)
3. Huo, M.: Ceramic nozzle, the core driving force of ink jet innovation. Taocheng Daily, April 2014 4. Li, M.: Discussion on ceramic forming technology. Acta Silicate Sinica 29(5), 469–470 (2001)
Research and Practice of Multiphase Flow Logging Optimization and Imaging Algorithm Dawei Wang(B) Daqing Oilfield Limited Company, No. 7 Oil Production Company, Daqing 163517, Heilongjiang, China [email protected]
Abstract. At present, the multi-phase flow in oil wells is still measured by the traditional single point measurement method of local spatial average, which can not obtain the spatial distribution information of fluid medium. Therefore, it is very difficult to identify the flow pattern and calculate the phase content and phase velocity when determining the downhole flow profile. Imaging logging technology can real-time detect the fluid flow in the well, obtain the two-dimensional or three-dimensional distribution information of multiphase fluid, give the phase distribution profile through processing, realize the flow pattern identification and determine the phase content and phase velocity. Therefore, this technology is becoming a research hotspot in geophysical logging field at home and abroad. Based on the difference of conductivity and dielectric properties of oil, gas and water in oil well, and according to the characteristics of electromagnetic field in fluid in oil well, an optimized imaging method of multiphase flow logging is proposed. Keywords: Multiphase flow · Imaging logging · Electromagnetic wave
1 Introduction

With the development of oil and gas fields, most onshore water-flooding oilfields in China have entered the medium-to-high water cut production stage. During this period the distribution of oil, gas and water in the reservoirs and wells changes greatly. On the one hand, water-flooding production has brought the oil wells into the high water cut stage; on the other hand, the flowing pressure in the well generally decreases and the crude oil degasses after wells are converted from flowing to pumping, and gas-injection stimulation measures, which require large investments of capital and technology, also introduce gas into the well, which inevitably leads to three-phase oil–gas–water flow in the well. In order to master the dynamic changes of the reservoirs and wells, the oilfield dynamic monitoring system requires timely knowledge of the production status (the phase-separated flow rates) of each producing layer and an accurate determination of the oil well production profile (the variation of the phase flow rates along the well depth). In
order to reasonably adjust the development plan of oil and gas field, make the oil well in the best or normal production state, and finally achieve the purpose of enhancing oil recovery [1]. Due to the complex flow characteristics of multiphase flow system in oil well, it is very difficult to analyze and solve the problem of oil well production profile only from the flow mechanism. In order to meet the needs of reservoir dynamic monitoring, it is necessary to deeply study the multiphase flow system of oil wells, establish the system model and realize the production profile prediction. The measurement of multiphase flow parameters has become an indispensable and difficult part in the whole system. It is of great significance to study the advanced measurement technology of multiphase flow parameters for optimizing the production performance of oil wells and correctly guiding the development of oil fields.
2 The Principle of Electromagnetic Wave Imaging Logging

The fluid flowing in a well is generally a heterogeneous mixture of oil, gas and water. Electromagnetic wave imaging logging excites certain electromagnetic fields in the well and uses the differences in the electrical properties of oil, gas and water over the flow cross-section to measure and display the flow image. The essence of imaging measurement is to use a physically realizable system to perform the Radon transform and inverse Radon transform of the distribution of some characteristic of the measured object field. The Radon transform corresponds to projection measurements in different directions through the object field and reflects how the distribution of the field characteristic affects the projection data; the inverse Radon transform determines the distribution of that property of the material field from the projection data. Electromagnetic wave imaging measurement of multiphase flow projects the electromagnetic field through the fluid in the well and can accurately reflect the electrical parameters and their distribution over the flow cross-section. The probe consists of N equally spaced electrodes arranged in a ring and is placed in the measured fluid at a given depth in the well. An excitation signal of balanced voltage and fixed frequency is supplied to one of the electrodes, forming an electromagnetic field of a certain structure in the well fluid; the other N − 1 electrodes then act as measuring electrodes and receive the response signals produced by the fluid medium in different directions. By projecting and measuring in different directions across the flow section, N(N − 1)/2 independent measurement data are obtained. By processing and inverting these data, the electrical parameters of the medium in each part of the measured flow section are obtained; the computer then converts them into image pixels according to a specific algorithm, and the flow-section image can be reconstructed and displayed. Electromagnetic wave imaging logging needs a solid physical basis. Obviously, it involves two basic problems: one is the propagation characteristics of electromagnetic waves in the oil well, and the other is the electromagnetic properties of the fluid medium in the well [2].
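A small numerical illustration of the Radon / inverse-Radon idea described above, assuming scikit-image is available; a synthetic conductivity-like phantom stands in for the real flow cross-section, which in the logging tool is probed electromagnetically rather than by ideal line projections.

import numpy as np
from skimage.transform import radon, iradon

n = 128
y, x = np.mgrid[-1:1:n*1j, -1:1:n*1j]
phantom = ((x + 0.3)**2 + y**2 < 0.04).astype(float)   # an "oil bubble" in a water background
theta = np.linspace(0.0, 180.0, 60, endpoint=False)
sinogram = radon(phantom, theta=theta)                  # projections in 60 directions (Radon transform)
reconstruction = iradon(sinogram, theta=theta)          # image recovered by inverse Radon transform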
2.1 Electromagnetic Wave Propagation Characteristics in Oil Wells
Besides the frequency of the electromagnetic wave itself, the propagation characteristics of electromagnetic waves in the well fluid are mainly affected by the casing and the fluid medium. The casing is a round steel pipe, which confines and guides the propagation of the electromagnetic wave. The guided-wave system formed by the casing is equivalent to a circular waveguide, which can carry transverse electric (TE) and transverse magnetic (TM) waves. The electrical properties of the fluid medium in the well determine the propagation constant of the guided wave and affect the field distribution and propagation characteristics in the axial and transverse directions of the well. A simple field analysis is given below; because this is a theoretical analysis, the units of the physical quantities in the formulas are omitted. The distribution of the fluid medium in an oil well is generally non-uniform, but the medium can be regarded as isotropic. The electromagnetic field is a vector field, and for an inhomogeneous isotropic medium the following equations can be derived from Maxwell's equations:

∇ × (μ⁻¹ ∇ × E) − ω²εE = jωJ − ∇ × (μ⁻¹ M)   (1)

∇ × (ε⁻¹ ∇ × H) − ω²μH = jωM − ∇ × (ε⁻¹ J)   (2)
where μ is the permeability of the medium, ε is the permittivity (dielectric constant) of the medium, ω is the angular frequency of the electromagnetic wave, J is the electric current density, M is the fictitious magnetic current density, E is the electric field intensity, and H is the magnetic field intensity.
2.2 Electrical Properties of the Fluid Medium in an Oil Well
The electrical properties of the mixed fluid in an oil well are related not only to the electrical parameters of the oil, gas and water phases, but also to the flow velocity, the phase fractions and their distribution, and to the electromagnetic field used for measurement. In theory, the electrical characteristics of a mixed fluid are those of an electromagnetic field in an inhomogeneous medium. In the nineteenth century Maxwell not only established a complete electromagnetic field theory but also studied the dielectric properties of inhomogeneously structured matter. Mixed fluids, as a kind of non-uniformly distributed system, are widely encountered in daily life and industrial production. Since Maxwell, many researchers have carried out theoretical and experimental studies, especially experimental measurements with modern techniques; they have not only deepened the understanding of the electrical characteristics of mixed fluids but also proposed theoretical models, which has promoted the development and application of electromagnetic measurement methods [3]. Generally speaking, the electrical property of a mixed fluid is the physical property it exhibits as a medium in an electromagnetic field. The electrical properties of the medium are characterized by the constitutive relations that accompany Maxwell's equations:

D = εE + ξH,  B = μH + ξE   (3)
where ε, μ and ξ are all tensors in general; such a medium is called a bianisotropic medium.
3 Physical and Mathematical Models of Multiphase Flow Electromagnetic Wave Imaging Logging
As shown in Fig. 1, the multiphase flow electromagnetic wave imaging logging probe is a composite electrode array with three layers of electrodes; each layer has 16 electrodes arranged at equal angles around the circumference. The middle layer is the main electrode layer, and the upper and lower layers are shielding electrode layers; during measurement, the main electrodes and the shielding electrodes are fed with electromagnetic wave signals of the same phase and amplitude. In each measurement, one electrode of the middle layer is selected as the emitting electrode, the electrodes directly above and below it act as shielding electrodes, and the electrodes to its left and right act as focusing electrodes; the remaining electrodes are then selected in turn as measuring electrodes, while the four adjacent shielding and focusing electrodes are grounded. Each measuring period therefore yields 16 × (16 − 5)/2 = 88 measurement combinations [4].
Fig. 1. Schematic diagram of electrode array of imaging logging probe
The diameter of an oil well pipe is usually about 100 mm. It can be shown that when the electromagnetic wave frequency is below 3 MHz, the electromagnetic field in a measuring region about 100 mm in diameter is a near field, i.e., a time-varying electromagnetic field under the quasi-static condition

E(r) = −∇ϕ   (4)

where E(r) is the electric field intensity, r is the position vector, ϕ is the potential, and ∇ is the gradient operator. The potential satisfies

∇ · (σ*(r)∇ϕ) = 0   (5)
where σ* = σ + iωε is the equivalent complex conductivity.
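To make the role of the equivalent complex conductivity concrete, the sketch below evaluates σ* = σ + iωε for representative oil, gas and water phases at an assumed excitation frequency; the material constants are rough illustrative values, not measurements from this study.

```python
import math

EPS0 = 8.854e-12  # vacuum permittivity, F/m

def complex_conductivity(sigma_s_per_m: float, eps_r: float, f_hz: float) -> complex:
    """Equivalent complex conductivity sigma* = sigma + i*omega*eps (cf. Eq. 5)."""
    omega = 2 * math.pi * f_hz
    return complex(sigma_s_per_m, omega * eps_r * EPS0)

if __name__ == "__main__":
    f = 1e6  # 1 MHz, below the 3 MHz quasi-static limit mentioned in the text
    # Rough, illustrative phase properties: (conductivity in S/m, relative permittivity)
    phases = {"water": (1.0, 80.0), "oil": (1e-8, 2.2), "gas": (1e-12, 1.0)}
    for name, (sigma, eps_r) in phases.items():
        print(name, complex_conductivity(sigma, eps_r, f))
```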
4 Finite Element Calculation Method of the Electromagnetic Field Potential Distribution
Because the solution problems in the ZX and XY planes are similar, this paper takes the XY-plane problem as an example to illustrate the calculation method. The partial differential equation is transformed into a variational problem; the equivalent variational formulation of the XY-plane boundary value problem is

D:  J(ϕ) = (1/2) ∬_D σ* |∇ϕ|² dx dy → min
Γ1: ϕ = 1
Γ3: ϕ = 0   (6)

The division of the solution domain into triangular elements is shown in Fig. 2. The XY-plane circular domain (Fig. 2(a)) is divided counterclockwise from the centre of the circle outwards, element by element; the rectangular region of the ZX plane (Fig. 2(b)) is divided from the lower left corner of the rectangle, element by element, from left to right and from bottom to top. The division accuracy can be set arbitrarily. The number at each triangle node is the node number, and the number inside each triangular element is the element number.

Fig. 2. Division of triangular elements in the XY-plane circular domain (a) and the rectangular region of the ZX plane (b)

According to the division into elements, the quadratic functional can be expressed as the sum of the energy integrals over all elements, that is

J(ϕ) = Σ_{e=1}^{θ} Je[ϕ]   (7)
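The sketch below illustrates Eq. (7) for linear three-node triangular elements: the energy of each element is a quadratic form in its nodal potentials, and the global functional is assembled as a sum over elements. The toy mesh, conductivity values and element routine are illustrative assumptions, not the solver used in the paper.

```python
import numpy as np

def element_stiffness(xy: np.ndarray, sigma: complex) -> np.ndarray:
    """Stiffness matrix of a 3-node linear triangle for div(sigma grad(phi)) = 0.

    xy: (3, 2) array of node coordinates; sigma: equivalent complex conductivity.
    """
    x, y = xy[:, 0], xy[:, 1]
    area = 0.5 * abs((x[1] - x[0]) * (y[2] - y[0]) - (x[2] - x[0]) * (y[1] - y[0]))
    b = np.array([y[1] - y[2], y[2] - y[0], y[0] - y[1]])
    c = np.array([x[2] - x[1], x[0] - x[2], x[1] - x[0]])
    return sigma * (np.outer(b, b) + np.outer(c, c)) / (4.0 * area)

def total_energy(nodes, elements, sigmas, phi):
    """J(phi) = sum_e Je[phi], with Je = 1/2 * phi_e^T Ke phi_e (cf. Eqs. 6 and 7)."""
    J = 0.0 + 0.0j
    for elem, sigma in zip(elements, sigmas):
        Ke = element_stiffness(nodes[elem], sigma)
        pe = phi[elem]
        J += 0.5 * pe @ Ke @ pe
    return J

if __name__ == "__main__":
    # A toy two-element mesh of the unit square, purely for illustration.
    nodes = np.array([[0.0, 0.0], [1.0, 0.0], [1.0, 1.0], [0.0, 1.0]])
    elements = [np.array([0, 1, 2]), np.array([0, 2, 3])]
    sigmas = [1.0 + 0.1j, 1.0 + 0.1j]
    phi = np.array([1.0, 1.0, 0.0, 0.0])  # boundary values: phi = 1 on one side, 0 on the other
    print("J(phi) =", total_energy(nodes, elements, sigmas, phi))
```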
5 Conclusion
The dielectric constants of oil, gas and water differ by factors of tens, and their conductivities differ by more than ten orders of magnitude. The electrical properties of the mixed fluid in a well are related not only to the electrical parameters of each phase, but also to the flow velocity, the content and distribution of each phase, and the electromagnetic field used for measurement. Multiphase flow electromagnetic wave imaging logging distinguishes the fluids in the well on the basis of the characteristics of the electromagnetic field in the well and the electrical differences between oil, gas and water, and thus rests on a solid physical foundation. The working mode and frequency of the electromagnetic wave must be chosen reasonably in order to measure and display the flow cross-section of the fluid in the well correctly.
References
1. Wu, X.: Study on multiphase pipe flow electromagnetic wave imaging logging method. Acta Geophysica Sinica 42(4) (1999)
2. Gu, C.: Application of multiphase flow detection technology in petroleum industry. J. Univ. Pet. (Nat. Sci. Ed.) 23(6), 110–111 (1999)
3. Zhao, L.: Study on electromagnetic wave flow imaging logging method. Beijing University of Petroleum (2002)
4. Wu, X., Jing, Y., Wu, S.: Multiphase pipe flow electromagnetic imaging logging method. Acta Geophysica Sinica 42(4), 557–563 (1999)
Research on the Statistical Method of Massive Data by Analyzing the Mathematical Model of Feature Combination of Point Data Yueyao Wu(B) Xi’an Jiaotong University, Xi’an 721000, China
Abstract. With the rapid development of science and technology in China, data mining technology has also improved greatly. The most basic methods in data mining are statistical methods, and with the improvement of statistical techniques many new data mining technologies have emerged. Therefore, in order to provide valuable suggestions and practical research experience for data mining researchers and to promote the progress of data mining technology, it is necessary to conduct in-depth research on the application of statistical techniques in data mining. This paper introduces the significance and current situation of data research, and focuses on several typical statistical methods and techniques involved in data mining and their practical applications. Keywords: Data mining · Practical research
1 Introduction
In the Internet environment, massive data comes in many formats, including text, audio, video and numerical data. Traditional massive-data statistics analyze data of a single type or from a single source, which cannot effectively handle the complexity of multiple data structures and multiple data sources; the statistical results obtained are also very limited and cannot effectively solve specific problems. Massive-data statistics analyzes the various kinds of data generated every day in the current Internet environment, including the classification, integration, calculation and analysis of massive data, and provides support for decision-making [1]. Compared with traditional statistical models, the point cloud data feature combination mathematical model can improve modeling accuracy, and its time efficiency and memory consumption are also far better than those of traditional models. Therefore, this paper proposes a statistical method for massive data based on the point cloud data feature combination mathematical model.
2 Research Methods and Statistical Indicators
In this paper, the point cloud data feature combination mathematical model is studied and applied to airborne LiDAR.
© The Author(s), under exclusive license to Springer Nature Singapore Pte Ltd. 2022 B. J. Jansen et al. (Eds.): International Conference on Cognitive based Information Processing and Applications (CIPA 2021), LNDECT 84, pp. 755–759, 2022. https://doi.org/10.1007/978-981-16-5857-0_96
In order to improve statistical efficiency, the point cloud data feature combination mathematical model is also applied to massive-data statistics.
2.1 Building a Cube
Firstly, according to the different sources and types of the massive data and its uncertainty characteristics, the joint distribution function and random distribution function of the point cloud data feature combination mathematical model are selected to obtain a multidimensional data set of the massive data; the number of points in the cloud corresponds to the number of points in the cloud model. In this paper, the integrated massive data is divided into six parts: behavior log, user dimension, time dimension, behavior type dimension, behavior result dimension and action object dimension. The relationships between point clouds are used to improve the efficiency of the whole algorithm [2].
2.2 Establishment of Statistical Indicators
Massive data in the Internet environment is selected as the processing target. The data set contains users' own information, user behavior logs and related data from various sources. Statistical indicators and their calculation formulas are set according to the different data sets. Data residence time P1: the interval between the moments at which a user transmits data to another user in the Internet environment, counted as the effective residence time of the first data transmission behavior; this indicator effectively reflects the user's adhesion to the data [3]. Page behavior count P2: page behavior mainly covers data retrieval, display and browsing, and its actual scope is far larger than browsing alone; this indicator judges the adaptability of the data through users' browsing behavior. Session number P3: the number of sessions during massive-data statistics. User visits P4: the number of distinct users during massive-data statistics. The calculation formulas of the statistical indicators are as follows. Average data residence time:

P1 = Ssum / Scount   (1)
For the special case in which the user has no follow-up behavior during the statistical process, the average page stay time is used in place of the total length of the user's stay:

Ptotal = P2 × tavg   (2)
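As a small worked example of indicators (1) and (2), the sketch below computes the average residence time and the substituted total stay time from a toy event log; the log format and field names are assumptions made purely for illustration.

```python
from dataclasses import dataclass

@dataclass
class TransferEvent:
    user: str
    start_ts: float  # seconds
    end_ts: float

def average_residence_time(events) -> float:
    """P1 = S_sum / S_count over the recorded data-transfer events (Eq. 1)."""
    durations = [e.end_ts - e.start_ts for e in events]
    return sum(durations) / len(durations) if durations else 0.0

def substituted_total_stay(page_behavior_count: int, avg_page_stay: float) -> float:
    """P_total = P2 * t_avg, used when a user has no follow-up behavior (Eq. 2)."""
    return page_behavior_count * avg_page_stay

if __name__ == "__main__":
    log = [TransferEvent("u1", 0.0, 30.0), TransferEvent("u2", 5.0, 20.0)]
    print("P1 =", average_residence_time(log), "s")
    print("P_total =", substituted_total_stay(12, 8.5), "s")
```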
The data sets of massive data to be counted may contain large outliers, which can lead to the risk of privacy information leakage or introduce errors at the outlier points. Therefore, based on the feature combination mathematical model of point cloud data, the scattered point cloud data are fused according to their characteristics to obtain the centre points of the different data sets, completing the acquisition of the data centre points.
3 Statistics and Analysis of Massive Data
Transaction data from a website, including transaction records, transaction amounts and other data items, is selected as the object of the massive-data statistics. Outliers and approximately similar groups are fused with the help of the point cloud data feature combination mathematical model. Firstly, the original groups are divided; for massive data, the best grouping and fusion effect is achieved by quickly aggregating similar groups. The data set is then protected by differential privacy, which further strengthens the privacy of the massive-data statistics while keeping the statistical results usable. Before the statistics are computed, the period of the application data statistics must be defined, and the basic data cycle of the overall data composition determined. After the indicators are determined, the overall data volume of the application indicator detail list is estimated, and the service subject of the application indicator statistical group is defined. A grouping environment factor is then assigned, taking into account the environmental factors of the indicator group, the database server and the capacity of the network switch. Next, the outliers in the data set are fused to reduce the leakage of private information. Finally, the overall statistical results of the application indicators are written to the corresponding database for storage. As can be seen from the statistical results in Fig. 1, the time consumed by the massive-data statistics in the experimental group is far less than that in the control group. The comparison therefore shows that the massive-data statistical method based on the point cloud data feature combination mathematical model better addresses the operating efficiency of massive-data statistics; the proposed method can effectively compute and analyze massive data while ensuring both the privacy and security of the data and the operating efficiency.

Fig. 1. Comparison of experimental results
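Differential privacy is mentioned above as the protection step applied before the statistics are released. The sketch below shows one common way this is done in practice, adding Laplace noise calibrated to the sensitivity of a clipped sum query; the parameter values and the query are illustrative assumptions, not the exact mechanism used in this paper.

```python
import random

def laplace_noise(scale: float) -> float:
    """Laplace(0, scale) noise as the difference of two exponential variates."""
    return random.expovariate(1.0 / scale) - random.expovariate(1.0 / scale)

def private_sum(values, epsilon: float, upper_bound: float) -> float:
    """Release sum(values) under epsilon-differential privacy.

    Each value is clipped to [0, upper_bound], so the sensitivity of the sum
    is upper_bound and the Laplace scale is upper_bound / epsilon.
    """
    clipped = [min(max(v, 0.0), upper_bound) for v in values]
    return sum(clipped) + laplace_noise(upper_bound / epsilon)

if __name__ == "__main__":
    amounts = [120.0, 75.5, 30.0, 410.0]  # toy transaction amounts
    print(private_sum(amounts, epsilon=1.0, upper_bound=500.0))
```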
4 Research on the Application of Data Mining Methods
The main purpose of data mining research is to explore how to use it more flexibly in various practical fields, which is also the main concern of data users and researchers. Consider the application of data mining in the retail trade. Association analysis is the earliest research method used in retail; its main principle is to mine and analyze the data using the association rules between products in the sales transaction database. Association rules are rule-form knowledge mined by different algorithms; using association rules for data mining requires sufficient dimensionality, a large quantity of data and conditional independence [4]. The "shopping basket analysis" marketing method in association analysis is a typical application of association rules. Consider also the practical application of data mining in the finance, insurance and communication industries. In business applications, the data mining process is mainly divided into three steps: first, data collection; second, using data mining techniques to extract valuable knowledge; and finally, using the extracted knowledge to assist
the corresponding data users in making decisions. In recent years, data mining technology has been widely used in the insurance, finance and communication industries. In banks and other financial departments, it is mainly used for the classified management of customer relationships and the credit rating of credit card customers; in the insurance industry, customer classification evaluation and risk prediction are mainly realized through neural network models; in the communication industry, decision trees are mainly used to analyze customer characteristics and consumption behavior, and the trends obtained from accurate analysis of user behavior can guide operators to make effective decisions and reduce operating costs.
(1) Data division. In a distributed environment, data storage needs to span multiple storage units, and how to divide the data is the key problem affecting scalability, load balancing and system performance. In order to provide low-latency responses and overcome performance bottlenecks, the system must distribute user requests reasonably as they arrive. Existing massive-data management systems mainly adopt hash mapping and sequential splitting. In Internet applications, data is usually organized as key/value pairs to accommodate the diversity of data and the flexibility of processing. Hash mapping hashes the key of each data record and maps the record to the corresponding storage unit according to the hash value; the performance of this partitioning method depends on the quality of the hash algorithm (a minimal sketch of hash-based partitioning is given after this list). Sequential splitting is a progressive way of partitioning data: after the size of a data table reaches a threshold, the table is split, and the split parts are assigned to different nodes that continue to provide service; new data is then automatically inserted into the appropriate table according to its key value.
(2) For massive data processing, large tables need to be indexed, and creating indexes can greatly improve system performance. First, a unique index guarantees the uniqueness of each row in a database table. Second, indexes greatly speed up data retrieval, which is the main reason for creating them. Third, they speed up joins between tables, especially when enforcing referential integrity. Fourth, when grouping and sorting clauses are used in data retrieval, indexes can significantly reduce the time spent on grouping and sorting. Fifth, with indexes the query optimizer can choose optimized access paths during query processing, improving system performance.
(3) Establish a cache mechanism. When the amount of data increases, general processing tools need to consider caching. A cache sits between the application and the physical data source, and its purpose is to reduce the frequency with which the application accesses the physical data source, thereby improving application performance. The data in the cache is a copy of the data in the physical data source; the application reads and writes the cache at runtime, and the cache and the physical data source are synchronized at specific times or on specific events. The size of the cache is also critical to the success of data processing. The cache medium is usually memory, so reads and writes are very fast; however, if the amount of cached data is very large, the hard disk may also be used as a cache medium. A cache implementation must consider not only the storage medium but also concurrent access to the cache and the life cycle of cached data (a sketch of a simple eviction policy is given below).
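The sketch below, referenced in item (1), shows key/value partitioning by hashing; the number of storage nodes and the hash choice are assumptions for illustration, not the scheme of any particular system discussed here.

```python
import hashlib

def hash_partition(key: str, n_nodes: int) -> int:
    """Map a record key to a storage node by hashing (hash mapping)."""
    digest = hashlib.sha1(key.encode("utf-8")).hexdigest()
    return int(digest, 16) % n_nodes

if __name__ == "__main__":
    nodes = 4
    for key in ["user:1001", "user:1002", "order:77", "order:78"]:
        print(key, "->", hash_partition(key, nodes))
```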
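For item (3), the sketch below implements a least-recently-used eviction policy, one plausible way to bound the memory used by the cache layer described above; the capacity and interface are illustrative assumptions.

```python
from collections import OrderedDict

class LRUCache:
    """A bounded cache sitting between the application and the physical data source."""

    def __init__(self, capacity: int):
        self.capacity = capacity
        self._items = OrderedDict()

    def get(self, key):
        if key not in self._items:
            return None  # caller falls back to the physical data source
        self._items.move_to_end(key)  # mark as most recently used
        return self._items[key]

    def put(self, key, value):
        self._items[key] = value
        self._items.move_to_end(key)
        if len(self._items) > self.capacity:
            self._items.popitem(last=False)  # evict the least recently used

if __name__ == "__main__":
    cache = LRUCache(capacity=2)
    cache.put("a", 1); cache.put("b", 2); cache.get("a"); cache.put("c", 3)
    print(list(cache._items))  # 'b' has been evicted
```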
5 Conclusion
In order to further improve the usability and efficiency of the statistical results for massive data, this paper proposes a statistical method for massive data based on the point cloud data feature combination mathematical model, which addresses the low computational efficiency and privacy leakage of traditional massive-data statistics. Comparative experiments show that the model can better meet data security requirements, ensures that the statistical results remain usable, and has high application value.
References 1. Li, J.: Analysis of statistical development direction under the background of big data. Chin. Foreign Entrep. (5), 110 (2020) 2. Niu, L.: Application of statistical analysis technology in data mining. Guangxi J. Guangxi Normal Univ. (Philos. Soc. Sci.) (2002) 3. Dong, C.: Research on several typical data mining methods and their application. Shandong University, Shandong (2010) 4. Zhou, X., Bai, Y., Sun, Y., Sun, E.C.: Network traffic analysis system based on data reduction and attribute oriented induction. J. Chin. Acad. Electron. Sci. (2009)
In Graphic Design - Design and Thinking from Plane to Screen JieLan Zhou(B) Nanchang Institute of Technology, Nanchang 330044, Jiangxi, China [email protected]
Abstract. In the era of digital new media, e-books appear in people’s life with a new look, and the form of reading books is also changing from a single paper media to a diversified one. E-books are totally different from traditional publications based on paper. Relying on high-tech, e-books not only show the progress of technology, but also establish a new communication environment to a great extent, change readers’ reading habits and cultivate new media acceptance forms. Therefore, the changes it brings are comprehensive and multi-level. This paper analyzes the design changes in the process of the continuous development from printed books to e-books, as well as the transformation of designers’ thinking, which is of great importance. At the same time, people’s new demand for reading in the information society urges designers to find new innovation points. Keywords: New media · Book binding design · E-book
1 Introduction
A good book should not only move readers through its content but also pay attention to its form; a well-designed book forms a complete artistic whole from form to content, and a pleasing book offers readers a pleasant reading atmosphere. When it comes to the design of e-books, we should first understand the development from printed books to e-books. Printing and plate making originated from papermaking and movable type printing in China, while the press invented by Gutenberg in the 15th century marks the real beginning of modern analog printing. From typesetting to photographic plate making, and with the emergence of personal computers, scanners, color printers and other equipment, traditional analog printing has entered the digital era. Computer-aided prepress design, EPS output, digital proofing, computer-to-plate (CTP) plate making and other technologies are advancing by leaps and bounds. Today, the further development of computer and network technology has brought great changes and impact to the printing industry: digital printing is advancing rapidly, and e-reading, paperless publishing and e-books are in the ascendant [1].
© The Author(s), under exclusive license to Springer Nature Singapore Pte Ltd. 2022 B. J. Jansen et al. (Eds.): International Conference on Cognitive based Information Processing and Applications (CIPA 2021), LNDECT 84, pp. 760–766, 2022. https://doi.org/10.1007/978-981-16-5857-0_97
2 New Media and E-book Design
2.1 New Media and Its Characteristics
With the development of science and technology, media forms are also developing. With the rapid development of the Internet, the new media era has arrived and is moving from the margins to the mainstream. New media can be defined as "interactive digital composite media". The networked new media era has turned the world into a global village, and the era of shared network resources has changed the way information is communicated and exchanged and the way human beings think. In the screen-reading era, reading products represented by desktop computers, tablets, e-book readers and mobile phones have in recent years gradually changed people's reading habits and the way they acquire knowledge. The Internet and the mobile phone have become important reading channels; learning and living with them is a trend that will keep growing. People read books, newspapers and periodicals online, and the Internet can present the knowledge one needs comprehensively, improving one's knowledge reserve and supporting real case study.
2.2 Paperless Reading and E-books
The new media on the Internet is a composite medium: it dissolves the boundaries between the traditional media (television, radio, newspapers, magazines, books and communication) and integrates the traditional media forms into the network platform. The most common and familiar paperless reading methods are presented in the form of office software, such as the Microsoft Office series and Adobe PDF readers. The web browser is the most interactive and innovative paperless reading method; web pages can easily be browsed on computers and all kinds of mobile devices, and almost anything can be reached through the network. Like web pages, e-magazines are also an innovative paperless reading method. The design style of an e-magazine looks like an electronic version of a print magazine, but in fact the e-magazine greatly enhances interactive reading, with music, hyperlinks, buttons and dynamic page effects; it is therefore better understood as a graft of web pages onto print magazines. Today we need to pay attention to the new media forms developing in the digital media era, for new media change as media develop. E-books are new media relative to printed books, and they are bringing about new reading styles in the Internet age that may well replace traditional ways of reading. An e-book is a book that can be read on a personal computer, notebook computer, tablet computer, smartphone, or any reader that can store large amounts of digital reading data, and it is an alternative to the traditional paper book. We constantly hear new terms such as network animation, network video, mobile animation and mobile games, and of course one can read books and newspapers on computers and mobile phones. In the narrow sense, e-books are books that can be distributed over wired and wireless networks. They include e-readers in the form of software, portable handheld electronic devices designed for reading books, and liquid crystal display technology
known as electronic paper, which satisfies readers' desire for the interactive experience of paper reading and gives a more paper-like feeling; electronic reading products that offer a feeling similar to reading on paper allow people to read comfortably for long periods.
2.3 E-books and Their Characteristics
The most important feature of e-books is multimedia interaction, and the interactive meaning of e-books is very broad. In the field of design, e-book interaction is called interaction design. Interactivity is one of the most prominent advantages of e-books: it makes readers' reading activities more relaxed, flexible and personalized. Interaction design provides a variety of ways to build a bridge between communicators and receivers, making feedback more timely and accurate, and it responds to readers' needs faster than traditional paper media. For example, readers can set up personal bookshelves and collect catalogues of and links to their favourite books in that personal space, or join readers' forums and take part in readers' comments. In short, "interaction" is the process of making the parties act on each other and change each other positively. Music, as an important expressive means of electronic books, has been adopted by many designers and plays an important role in realizing and improving the aesthetic effect of books; arranging different background music can give readers different emotional or cognitive experiences, which is a unique charm of e-books [2]. Because e-books use the RGB colour display system, no matter how beautiful the printing, print cannot achieve the visual impact that e-books give readers. In addition, the combination of audio-visual language and interface gives readers more choice, making e-books more humanized and personalized, further meeting readers' needs, bringing e-books more deeply into people's real lives and making them one of the most frequently used media. One important reason why e-readers attract so many readers is their highly realistic, paper-like visual effect; through electronic ink technology, designers have won over many loyal readers of printed materials, thus further occupying the market.
3 The Changes of Contemporary E-books Compared with Traditional Books
3.1 Change of Book Form
As shown in Fig. 1, the traditional book form has evolved from the early "jiance" (bamboo-slip books) to the "scroll" and the "album", and then to modern designers' concepts of book binding design; in all these cases the book form is presented concretely in real space. The form of the e-book, however, is presented to readers through an electronic display interface in a virtual three-dimensional space. E-books need not merely imitate the familiar traditional book forms. With the innovation of electronic technology and the development of multimedia technology, and with the help of electronic virtual technology, the form of
e-books should get rid of the limitations of materials and production technology, and develop in a more diversified direction, giving readers a new sensory enjoyment in the virtual space. Can the encyclopedia be virtualized as a skyscraper? Or a modern museum? I think all of these are possible, not to mention the profound significance of their existence. In terms of form alone, it is bound to create a number of new ideas [3].
Fig. 1. The development of book form
3.2 Change of Reading Carrier
As a new carrier of knowledge, the e-book differs significantly from traditional printed books. When we consider how the carriers of information such as writing have changed since ancient times, it is clear that they evolved with the gradual improvement of human understanding of nature and of ourselves. From the initial use of natural materials such as tortoise shells, animal bones and bark, to the invention of paper, and on to the adoption of digital media such as computers and networks, the carriers people use to record and read have undergone a subversive change. Although paper technology has seen many inventions and innovations since paper appeared, its form, like the content it presents, remains confined to the two-dimensional space determined by width and height. With the appearance of the computer in the 1980s, the adoption of hypertext technology became another innovation of the text carrier: the carrier was transformed into a picture composed of electronic information displayed on an electronic screen, completely abandoning a specific physical medium and moving reading toward three-dimensional space. At the same time, as shown in Fig. 2, most current e-books have added a time axis, and the effective use of the concept of time further pushes reading toward a four-dimensional space.
Fig. 2. The timeline of e-books
technology shows readers a three-dimensional reading space. Readers can choose the order and way of reading according to their own interests, reorganize information, and change from passive reception to active participation. This way of reading is not only to break the tradition, but also to determine a series of new reading habits, which will develop towards humanization, diversification and personalization [4].
4 E-book and Its Design Features 4.1 E-book and Traditional Book Binding Design Book art is the art of telling people about their vision. About 70% of the information that the human brain gets from the outside world comes from vision. Traditional book binding design focuses on visual communication design. Although e-book design is multimedia design, the biggest similarity between e-book and traditional books is visual reading, but today’s way of reading has become screen reading. Therefore, the traditional visual communication design concepts and methods are still applicable to the design of e-books, but unconsciously, the original graphic design has become the current “screen design”. Ebook design can draw inspiration and reference from the traditional book binding design. In fact, the current e-book design mainly digitizes the existing traditional literature, that is, makes the traditional paper books into e-books. Therefore, many e-book designs are similar to the original paper books in form and style to adapt to the reading habits of readers, Here, many traditional methods and characteristics of book binding and design of paper books are still effective, but they are declining, because even the digitization of traditional literature, it is necessary to redesign the binding and design of paper books. The concept of book binding develops with the development of the times. In order to design excellent e-book works, in addition to the visual effects required in book design, we should also consider the multimedia features of e-book design.
4.2 Change of Design Thinking from Graphic Design to “Screen Design” McLuhan once pointed out: “mechanical media (especially printed matter with linear structure) is the extension of individual organs of human body, which makes people specialize in one subject and pay more attention to vision. In this kind of media, individuals use the method of analysis and cutting to understand the world; The electronic media is the extension of the central nervous system, which integrates people into a unified body. The Gutenberg era of focusing only on vision, mechanization and specialization is gone forever. People who only focus on logical thinking and linear thinking can no longer work. People in the electronic age should be people who integrate perception, who think as a whole, and who grasp the world as a whole.” Today, the way of reading paperless books begins to enter the field of vision of the general public. The emergence of paperless e-books is a challenge to designers, and the concept of traditional book binding will also have a new change. The design of e-book needs to endow sound, image, animation and enhance the participation and interaction of reading. It also needs to show a new way and new visual experience in line with the e-reader, so as to meet the physiological and emotional needs of readers in the digital era. This change of design mode also puts forward new ideas and requirements, how to express the new forms and characteristics of e-book expression, in the layout design, color design, text design, let readers feel the different reading experience of e-book. As an important means of expression of e-books, music can be switched on and off at any time and the design of background music selection is also an extension of e-book interactive experience design. The page design of e-book is multimedia design, which is different from traditional printed books. E-book can have graphics, dynamic graphics, images, 3D animation, virtual reality and other new forms of digital art, which increases the richness and interest of e-book pictures.
5 Conclusion Although e-book shows unique advantages in many aspects, its design is still the inheritance of traditional book design to a large extent. At the same time, e-book is also a change of traditional books. E-books have many of the same characteristics as traditional paper books, and the book binding design methods and characteristics of many traditional paper books can still be used for reference in the design of e-books. In the e-book form design process, there is no need to completely follow the traditional design ideas. E-books provide readers with a brand new reading experience by doing a good job in the interactive experience of informatization. Therefore, the “binding design” of e-books needs a new concept and method of visual art design “E-book has developed into an important carrier of knowledge and information dissemination, the main form of new publications.” The future e-book will be any form that we can’t imagine or can imagine. Acknowledgements. Jiangxi Provincial Education Science 14th Five-Year Plan 2021 Annual Project: Visual Communication Design under “New Arts” (Project No.: 21YB237).
References 1. Li, D.: Digital Reading: Information and Skills You Must Know, p. 10. National Library Press, Beijing (2010) 2. Wang, H.: Pay attention to the development of e-books in China, no. 4, pp. 10–11. China Electronics and Network Publishing (2003) 3. Wang, S.: Book design from the past to the present. Artist (5), 63–67 (2008) 4. Wan, F., Mou, Y.: Design of Electronic Magazine. Yunnan University Press, Yunnan (2008)
Design of Urban Rail Transit Service Network Platform Based on Genetic Algorithm Caifeng Yu(B) Shandong Transport Vocational College, Weifang 261206, China
Abstract. In view of the problems existing in the current research and development of online course platform, such as the lack of humanization and personalization, the lack of high relevance of course related modules, and the lack of integration of related theories, this paper takes the urban rail transit service English as an example, studies the design of online course platform based on genetic algorithm, and deeply discusses the integration framework of knowledge management and online course. The prototype system well reflects the knowledge management of system level and individual level. Keywords: Genetic algorithm · Service English · Online course platform
1 Introduction
With the development of China's economy and the globalization of the world economy, the ever-wider field of international business has placed new demands on the quantity and quality of English talent for international urban rail transit service. However, traditional classroom teaching emphasizes theory, is not very operable, and is not conducive to cultivating students' practical ability. How to design a high-quality English network course for urban rail transit passenger service is therefore the concern of most English teaching staff. With the emergence of online courses in urban rail transit service English, students are no longer limited to the knowledge in books but can consult the latest business trends at home and abroad with the help of the Internet. Network course construction is an important, indeed core, part of network education, and the construction of online courses worldwide has made remarkable achievements. MIT's OpenCourseWare (OCW) project, launched in 2001, planned to put almost all of MIT's courses online within ten years for free study by people all over the world. China's Ministry of Education built 1500 national quality courses in five years (2003–2007); by using modern educational information technology, the contents of these quality courses are put online and opened for free, realizing the sharing of high-quality teaching resources. Since 2007, the construction of national excellent courses has continued, with about 3000 courses selected for key reform and construction, comprehensively raising the level of curriculum construction and teaching quality in China's colleges and universities.
© The Author(s), under exclusive license to Springer Nature Singapore Pte Ltd. 2022 B. J. Jansen et al. (Eds.): International Conference on Cognitive based Information Processing and Applications (CIPA 2021), LNDECT 84, pp. 767–771, 2022. https://doi.org/10.1007/978-981-16-5857-0_98
Based on this background, it is of great theoretical significance and practical value to continue the research and development of online courses in the school's existing, well-provisioned network environment [1].
2 Genetic Algorithm
2.1 Principle of Genetic Algorithm
As an important branch of evolutionary computation (EC), the genetic algorithm (GA) has received more and more attention in recent years and has been widely used in engineering. Unlike traditional search algorithms, a genetic algorithm starts the search from a group of randomly generated initial solutions, called a population. The algorithm is mainly realized through crossover, mutation and selection: crossover and mutation operations generate the next generation of chromosomes, known as offspring, and the quality of chromosomes is measured by fitness [2]. The function that measures an individual's fitness is called the fitness function, and its definition generally depends on the specific problem being solved. The steps of the basic genetic algorithm are shown in Fig. 1.
Fig. 1. Flow chart of basic genetic algorithm
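To make the flow in Fig. 1 concrete, the sketch below runs one plausible version of the basic loop (selection, crossover, mutation) on bit-string individuals; the encoding, operator rates and toy fitness function are assumptions made only for illustration.

```python
import random

def evolve(fitness, n_bits=16, pop_size=30, generations=50,
           crossover_rate=0.8, mutation_rate=0.02):
    """Basic genetic algorithm loop: selection, crossover, mutation."""
    pop = [[random.randint(0, 1) for _ in range(n_bits)] for _ in range(pop_size)]
    for _ in range(generations):
        scores = [fitness(ind) for ind in pop]
        # fitness-proportional (roulette wheel) selection
        parents = random.choices(pop, weights=scores, k=pop_size)
        nxt = []
        for i in range(0, pop_size, 2):
            p1, p2 = parents[i], parents[(i + 1) % pop_size]
            if random.random() < crossover_rate:          # one-point crossover
                cut = random.randint(1, n_bits - 1)
                p1, p2 = p1[:cut] + p2[cut:], p2[:cut] + p1[cut:]
            nxt.extend([p1[:], p2[:]])
        for ind in nxt:                                   # bit-flip mutation
            for j in range(n_bits):
                if random.random() < mutation_rate:
                    ind[j] ^= 1
        pop = nxt[:pop_size]
    return max(pop, key=fitness)

if __name__ == "__main__":
    best = evolve(fitness=sum)   # toy objective: maximize the number of 1 bits
    print(best, sum(best))
```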
2.2 Convergence Analysis of Genetic Algorithm
The convergence of the algorithm can be defined as follows: if the population xt of the algorithm at time t satisfies

lim_{t→∞} xt = x0,  x0 ∈ X   (1)

then the algorithm is said to converge to x0. For the convergence of genetic algorithms, Michalewicz proved a convergence theorem based on the contraction principle, and Rudolph proved a convergence theorem based on Markov chains.
3 Design Principles of English Network Course for Urban Rail Transit Service Curriculum is the sum of subject, teaching material and experience. Network course is a course based on network environment under the guidance of advanced education thought, teaching theory and learning theory. Its learning process has the basic characteristics of interaction, sharing, openness, cooperation and autonomy. As special purpose English (ESP), urban rail transit service English contains rich business culture,. The ultimate goal of urban rail transit service English teaching is to cultivate students’ cross-cultural business communication ability, and the online course of urban rail transit service English is developed to better achieve this goal and adapt to the network teaching environment. The following principles should be followed when designing the online course of urban rail transit passenger service English [3]: 3.1 Fully Reflect the Subjectivity of Students in the Learning Process As an integral part of urban rail transit service English teaching, the online course of urban rail transit service English is mainly studied by students rather than teachers who play a leading role in traditional classroom teaching. Therefore, in the process of designing the online course of urban rail transit passenger service English, we must adopt the teaching design mode of “learning oriented”, give full play to the students’ initiative, and the course design should reflect the students’ initiative. Students should have the ability to transfer and externalize the existing knowledge after learning the course. 3.2 Highlight the Practicality of Online Course of Urban Rail Transit Passenger Service English A series of practical courses are involved in the English Teaching of urban rail transit passenger service, such as the preparation of import and export documents, market research and analysis, international settlement, etc. From the dialectical relationship between direct experience and indirect experience, indirect experience comes from direct experience and is verified in direct experience. In the design of network course, it starts with the creation of problem situation, takes the presentation of teaching content in relevant situation as the process, and ends with solving practical problems by indirect experience. It can make learners feel the fun of learning, make knowledge melt into life, and make students feel useful. Only in this way can students change their knowledge from passive to active and realize their own construction. 3.3 Highlight the Cross-Cultural Nature of the Online Course of Urban Rail Passenger Service English The ultimate goal of foreign language education is to cultivate students’ cross-cultural communication ability. In a sense, language education is cultural education. As special purpose English, urban rail transit service English contains rich business culture, including traditional business customs and business etiquette habits of different countries. The ultimate goal of urban rail transit service English teaching is to cultivate students’ cross-cultural communication ability in the business environment.
4 Design of Network Course Platform Based on Genetic Algorithm 4.1 Architecture The network system platform is divided into two parts: foreground website and background management system. The foreground website uses the dynamic web page production technology to obtain the relevant content from the server database in real time and present it as a web page through the browser [4]. The maintenance of the front page content is realized by the background management system. Users can use the browser to operate the background management interface to manage the website content. Background management system mainly has the following modules, as shown in Fig. 2.
Fig. 2. Architecture diagram
The presentation layer provides an application interface for users. On the one hand, it presents and collects user information, on the other hand, it can process user information and interact with business logic layer. The business logic layer is responsible for receiving the request from the browser and passing the request to the data access layer, and sending the request processing result to the browser. The data layer is responsible for storing and managing data, which provides data services for the logic layer, such as user management, course management, course node management, article management, etc. Stored procedures and triggers can be used to ensure the integrity and consistency of data. 4.2 Course Platform (1) To provide learners with a convenient and efficient help system. Through the network navigation technology, it can provide instant help for learners’ learning, so that the learning content can be browsed and reviewed at any time; Through online tutoring, email, “shared whiteboard”, chat room and electronic forum, the cooperative learning among learners and the teaching communication and feedback between teachers and students can be realized. (2) The construction of a huge and rich information base, resource base and design of other teaching resource information. Online resources is a very important issue. The construction of resource database is indispensable for learners to search and inquire. The content of resource database should be rich and detailed. In order to improve
learners’ interest in learning, attract learners’ attention and improve learning effect, the presentation of information should be situational and multi-media, and provide teaching content with pictures, audio and video. Network courses should be open enough to provide learners with a lot of relevant information, deepen students’ understanding of knowledge, strengthen students’ self-learning ability and independent thinking, and broaden students’ horizons. (3) Make full use of multimedia simulation technology to abstract and simulate the network course. While simulating the real world, the computer not only avoids the occurrence of dangerous or destructive things, but also shortens the time needed for long-lasting changes; At the same time, for complex problems, through the use of graphics, images, sound, video, animation and other forms of multimedia expression, the abstract problems are materialized and the theoretical problems are exemplified, so that the problems are simple and intuitive, the learning difficulty is reduced, and the learners have a strong sense of reality, It is convenient for learners to assimilate and adapt to new knowledge on the basis of existing knowledge and experience.
5 Conclusion
The network course platform has become a shortcut for the construction of excellent courses in colleges and universities. Based on the design model of the network course platform for urban rail transit passenger service English, the corresponding course website can be constructed simply, conveniently and quickly. The establishment of such professional network courses will greatly improve the teaching effect of professional courses, enrich the teaching means, and provide a more convenient way of information exchange for teaching and learning.
Acknowledgements. On development of an online vocational course in accordance with typical tasks in the workplace—taking “Practical English for Metro Service” as an example, No. BYGY201912, Foreign Language Education in Higher Education Project 2019 of Shandong Provincial Institute of Education Sciences.
References 1. Xu, C., Chen, G., Yuan, X.: The current situation and development trend of online curriculum development. China Distance Educ. 206(15), 39–42 (2003) 2. Li, M., Kou, J.: Basic Theory and Application of Genetic Algorithm. Science Press, Beijing (2002) 3. Yu, L.: Some suggestions on the design of network curriculum and its theoretical basis. Heihe Educ. (02), 40–41 (2005) 4. He, K.: Design and development of modern educational technology and high quality network courses. China Audio Vis. Educ. 209(06), 5–11 (2004)
Research and Implementation of Parallel Genetic Algorithm on a Ternary Optical Computer Hengzhen Cui, Junlan Pan, Dayou Hou(B) , and Xianchao Wang(B) Fuyang Normal University, Fuyang 236037, Anhui, China [email protected]
Abstract. The genetic algorithm is widely used in combinatorial optimization, machine learning, signal processing and other fields. The traditional genetic algorithm has a long search time and slow convergence speed, but it also has strong inherent parallelism. This paper studies a practical scheme and realization method for a parallel genetic algorithm. First, the traditional genetic algorithm is analyzed to identify its parallelizable parts, and a parallel design is carried out for them. Finally, the correctness of the parallel scheme is verified through experiments. On the premise of ensuring accuracy, and compared with the traditional genetic algorithm, it can effectively improve the efficiency of problem solving and reduce the time spent in the calculation process. Keywords: Ternary optical computer · Parallel computing · Genetic algorithm
1 Introduction
The genetic algorithm (GA) is a bionic algorithm proposed by Professor Holland between the late 1960s and the early 1970s on the basis of the laws of biological evolution. It is a population-oriented random search technique based on Darwin's theory of biological evolution and Mendel's theory of genetic variation, simulating the evolutionary process and mechanism of the biological world. It has natural parallelism, can solve complex unstructured problems, and has strong robustness. However, as the amount of data in the problems people study increases, the simple genetic algorithm is increasingly unable to meet practical needs. Researchers therefore began to study the parallelism of genetic algorithms, hoping to improve the algorithm so that it runs both fast and correctly. The ternary optical computer (TOC) is so named because its processor uses ternary optical signals to represent information and can perform all ternary logical operations [1]. The TOC has a large number of processor bits, its processor can be divided into many independent small parts, and the calculation function of the processor bits can be reconstructed [2]. These advantages make its computing power far greater than that of current electronic processors.
© The Author(s), under exclusive license to Springer Nature Singapore Pte Ltd. 2022 B. J. Jansen et al. (Eds.): International Conference on Cognitive based Information Processing and Applications (CIPA 2021), LNDECT 84, pp. 772–779, 2022. https://doi.org/10.1007/978-981-16-5857-0_99
The TOC can not only replace the work of electronic processors and GPUs, but also offers many new data-parallel computing models, further enhancing the ability to solve and study complex problems through computation. At present, TOC has a preliminary theoretical system and experimental platform [3–8]. Since the TOC concept was announced in 2003, the derating design theory has been proposed [9], and the TOC codec [10], the MSD adder [11–14] and other TOC-based applications have been implemented; TOC now has the conditions needed to implement parallel genetic algorithms. This paper mainly exploits the TOC's reconfigurability, large number of data bits and parallelism. By decomposing the parallel parts of the genetic algorithm's search process, a parallel genetic algorithm running on the TOC is designed, which improves the overall search speed and optimization performance of the genetic algorithm. The characteristics of the TOC and the corresponding addition and multiplication operations are introduced, the parallelism of the genetic algorithm is analyzed, an implementation scheme of the genetic algorithm on the TOC is given, and the acceleration performance of the algorithm is analyzed.
2 Related Work
2.1 Genetic Algorithm
The genetic algorithm is mainly composed of the following parts. (1) Selection uses a selection strategy to choose a specified number of individuals from the parent population for genetic operations. The selection strategy is based on fitness and helps maintain the diversity of the population. For individual i with fitness Fi and population size n, the probability of the individual being selected is

Pi = Fi / Σ_{j=1}^{n} Fj,  i = 1, 2, …, n   (1)

Formula (1) shows that individuals with high fitness have a higher probability of being selected. Based on the selection probabilities, the roulette wheel method is used to realize the selection operation. (2) In the crossover operation, by repeatedly applying the crossover operator during the iterations, the genes of good individuals appear more and more frequently in the population, so that the whole population finally converges to an optimal solution. (3) Mutation randomly changes the value of a certain bit of an individual with a certain probability, that is, it performs an inversion operation.
2.2 The Basis for Realizing Genetic Algorithms on the TOC
Architecture of the TOC. The TOC uses the dark light state, the horizontally polarized light state and the vertically polarized light state to express information, and uses a liquid crystal device (LCD) to control the polarization direction of light, as shown in the figure. The feature of "many data bits" stems from the non-interference of light, and the feature of "bitwise reconfigurability" stems from the construction law of the ternary optical logic processor, that is, the derating design theory.
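Returning to the selection operator of Sect. 2.1, the sketch below gives a minimal version of roulette-wheel selection, computing the probabilities of Eq. (1) and sampling parents accordingly; the fitness values are illustrative.

```python
import random

def roulette_select(population, fitnesses, k):
    """Select k parents with probability proportional to fitness (Eq. 1)."""
    total = sum(fitnesses)
    probs = [f / total for f in fitnesses]
    # random.choices samples with replacement according to the given weights
    return random.choices(population, weights=probs, k=k)

if __name__ == "__main__":
    pop = ["ind0", "ind1", "ind2", "ind3"]
    fit = [1.0, 3.0, 0.5, 2.5]
    print(roulette_select(pop, fit, k=4))
```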
MSD Adder. The MSD number system was proposed by Avizienis in 1961. Its main idea is that any real number A can be represented as an MSD number. The specific expression is

A = Σ_i a_i · 2^i    (2)
Here each digit a_i takes the value 1, 0 or −1. MSD addition uses four logic operations, T, W, T’ and W’, whose truth table is shown in Table 1. The operation consists of three steps: (1) apply T and W to the operands a and b simultaneously, and append a 0 to the end of the T result; (2) apply the T’ and W’ operations to the results of T and W, and append a 0 to the end of the T’ result; (3) apply the T operation bit by bit to the results of T’ and W’ to obtain the final sum.
Table 1. Transformation of T, W, T’ and W’ used by MSD adder

a     b     T     W     T’    W’
1     1     1     0     1     0
1     0     1     −1    0     1
1     −1    0     0     0     0
0     1     1     −1    0     1
0     0     0     0     0     0
0     −1    −1    1     0     −1
−1    1     0     0     0     0
−1    0     −1    1     0     −1
−1    −1    −1    0     −1    0
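To make the three-step procedure concrete, the following minimal Python sketch implements MSD addition directly from the truth table in Table 1; the digit encoding (−1, 0, 1 as Python integers) and all function names are illustrative assumptions made for this sketch, not part of the TOC instruction set.

# Minimal sketch of carry-free MSD addition based on Table 1.
T  = {(1,1):1,  (1,0):1,  (1,-1):0, (0,1):1,  (0,0):0, (0,-1):-1, (-1,1):0, (-1,0):-1, (-1,-1):-1}
W  = {(1,1):0,  (1,0):-1, (1,-1):0, (0,1):-1, (0,0):0, (0,-1):1,  (-1,1):0, (-1,0):1,  (-1,-1):0}
T2 = {(1,1):1,  (1,0):0,  (1,-1):0, (0,1):0,  (0,0):0, (0,-1):0,  (-1,1):0, (-1,0):0,  (-1,-1):-1}  # T'
W2 = {(1,1):0,  (1,0):1,  (1,-1):0, (0,1):1,  (0,0):0, (0,-1):-1, (-1,1):0, (-1,0):-1, (-1,-1):0}   # W'

def msd_add(a, b):
    """Add two MSD numbers given as digit lists, most significant digit first."""
    n = max(len(a), len(b))
    a = [0] * (n - len(a)) + a                      # align operand widths
    b = [0] * (n - len(b)) + b
    t = [T[(x, y)] for x, y in zip(a, b)] + [0]     # step 1: append 0 to the T result
    w = [0] + [W[(x, y)] for x, y in zip(a, b)]     #         pad W for alignment
    t2 = [T2[(x, y)] for x, y in zip(t, w)] + [0]   # step 2: append 0 to the T' result
    w2 = [0] + [W2[(x, y)] for x, y in zip(t, w)]   #         pad W' for alignment
    return [T[(x, y)] for x, y in zip(t2, w2)]      # step 3: final bitwise T

def msd_value(d):
    return sum(digit * 2 ** i for i, digit in enumerate(reversed(d)))

assert msd_value(msd_add([1, 1, 1], [1, 0, 1])) == 7 + 5   # 7 + 5 = 12

Because each step is a fixed bitwise transformation, the optical processor can apply it to all digit positions at once, which is why the whole addition needs only three clock periods regardless of word length.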
3 Design of Parallel Genetic Algorithm Based on TOC

3.1 Design of Parallel Genetic Algorithm

Genetic algorithms require a large number of repeated fitness evaluations, and the fitness calculation is undoubtedly one of the main factors affecting their running time. For a parallel implementation, evaluating the fitness values in parallel is therefore a very promising path. Without changing the basic structure of the genetic algorithm, we keep the single-population master-slave model and give the framework of the parallel genetic algorithm based on TOC, as shown in Fig. 1. In the master-slave parallel genetic algorithm the fitness values are calculated in parallel, while the genetic operations such as selection, crossover and mutation are executed serially on a single maintained population.
Fig. 1. Flow chart of parallel genetic algorithm based on TOC.
3.2 Design of Parallel Genetic Algorithm Based on TOC

Combining the above analysis, the specific implementation steps of the algorithm are as follows (a code sketch of this master-slave structure is given at the end of Sect. 3):
Step 1: Initialization. Randomly generate an initial population of M individuals; each encoded individual corresponds to an initial solution of the optimization problem to be solved.
Step 2: The M individuals are randomly allocated to n subpopulations, and the TOC is reconstructed into n processors to handle them separately.
Step 3: Calculate the fitness values in parallel. The n processors evaluate the fitness of the corresponding n subpopulations in parallel. The fitness function must be constructed by the user according to the problem; generally it can be derived from the objective function of the problem.
Step 4: Gather the subpopulations into a single population, perform genetic operations such as selection, crossover and mutation on it, and produce the offspring population.
Step 5: Check the stopping condition of the algorithm. The stopping condition of a genetic algorithm is usually a maximum number of iterations or a maximum number
of fitness function evaluations. If the stopping condition is met, the execution stops, the optimal individual is decoded to obtain the required optimal solution, and the algorithm ends; otherwise, it returns to Step 2 and repeats the iteration. Each evolutionary cycle produces a new generation, and through evolution the individuals in the population eventually reach or approach the optimal solution. The global optimal solution is sent from the TOC to the CPU, and the final result is output.

3.3 Analysis of the Clock Cycles of the Algorithm

In the fitness calculation stage of this problem, a two-level addition loop is required, so the time complexity on an electronic computer is O(N·N). On the ternary optical computer, the carry-free MSD addition takes only 3 clock periods regardless of word length, and when the number of processor bits is large enough the adder can be reconstructed so that the required time complexity is O(1). Here M denotes the population size and N the number of cities.
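As a rough software illustration of Steps 1–5 and of the master-slave structure of Fig. 1 (not of the TOC hardware itself), the Python sketch below evaluates fitness in parallel with a process pool while selection, crossover and mutation stay serial; the pool merely stands in for the n reconstructed TOC processor groups, and all names and parameters are illustrative.

from multiprocessing import Pool
import random

def evaluate(individual):
    # Stand-in for the problem-specific fitness function of Step 3; on the TOC each
    # reconstructed processor group would evaluate one subpopulation in parallel.
    return sum(individual)

def master_slave_ga(pop_size=40, n_workers=4, generations=50, length=16):
    population = [[random.randint(0, 1) for _ in range(length)] for _ in range(pop_size)]  # Step 1
    with Pool(n_workers) as pool:                                     # Step 2: n parallel workers
        for _ in range(generations):
            fitness = pool.map(evaluate, population)                  # Step 3: parallel evaluation
            total = sum(fitness)
            def select():                                             # Step 4: roulette-wheel selection
                r, acc = random.uniform(0, total), 0.0
                for ind, f in zip(population, fitness):
                    acc += f
                    if acc >= r:
                        return ind
                return population[-1]
            offspring = []
            while len(offspring) < pop_size:
                p1, p2 = select(), select()
                cut = random.randrange(1, length)                     # one-point crossover
                child = p1[:cut] + p2[cut:]
                if random.random() < 0.05:                            # mutation: invert one bit
                    i = random.randrange(length)
                    child[i] = 1 - child[i]
                offspring.append(child)
            population = offspring                                    # Step 5: loop bound as stop rule
    return max(population, key=evaluate)

if __name__ == "__main__":
    print(master_slave_ga())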
4 Experiment and Analysis

4.1 Experiment Environment

Since the TOC application research system is still under construction, this article uses the TOC simulation experiment platform. The hardware environment is a CPU Intel(R) Core(TM) i5-3230M 2.60 GHz with 4 GB of memory, and the software environment is Microsoft Windows 7.

4.2 Test Problem and Description

To verify the TOC-PGA algorithm, this experiment uses the traveling salesman problem (TSP), a well-known NP combinatorial optimization problem with a very wide application background in communication networks, transportation, manufacturing, logistics and many other areas. The TSP can be described as follows: given a set of cities, visit every city, return to the starting point, and make the total distance traveled as short as possible. The distance between two cities can be written as

d(A, B) = sqrt((x_A − x_B)^2 + (y_A − y_B)^2)    (3)

where (x_A, y_A) and (x_B, y_B) are the coordinates of cities A and B, and d(A, B) is the distance between them. The total tour length is expressed as

total_f = Σ_{i=1}^{n−1} L_{i,i+1} + L_{1n}    (4)

where total_f is the total length, L_{i,i+1} is the Euclidean distance between consecutive cities i and i+1 on the tour, and L_{1n} is the distance between the last city and the starting point.
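Formulas (3) and (4) translate directly into code; the short Python sketch below, with illustrative city coordinates, computes the tour length that serves as the basis of the fitness function.

import math

def distance(a, b):
    # Formula (3): Euclidean distance between two cities given as (x, y) pairs
    return math.sqrt((a[0] - b[0]) ** 2 + (a[1] - b[1]) ** 2)

def tour_length(cities, tour):
    # Formula (4): sum of consecutive legs plus the closing leg back to the start
    total = sum(distance(cities[tour[i]], cities[tour[i + 1]]) for i in range(len(tour) - 1))
    return total + distance(cities[tour[-1]], cities[tour[0]])

cities = [(0.0, 0.0), (3.0, 4.0), (6.0, 0.0)]      # illustrative coordinates
print(tour_length(cities, [0, 1, 2]))              # 5 + 5 + 6 = 16.0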
4.3 Experiment Results and Analysis

Table 2 compares the running times when the number of cities in the TSP is 10, 30, 50 and 75. If the average running time of the GA algorithm is Ts and that of the TOC-PGA algorithm is Tp, the speedup is Sp = Ts/Tp. It can be seen from Table 2 that the parallel algorithm on TOC effectively improves both computing performance and acceleration. Table 3 compares the best, mean and worst solutions for 10, 30, 50 and 75 cities. The TOC-PGA algorithm outperforms the GA algorithm, which shows that TOC-PGA has good global search performance (Fig. 2).

Table 2. Comparison of the running time of the two algorithms

Number of cities   GA/s    TOC-PGA/s   Speedup
10                 0.696   0.638       1.09
30                 0.788   0.709       1.11
50                 0.764   0.66        1.15
75                 0.809   0.672       1.20
Table 3. Comparison of optimization results of the TSP problem

Number of cities   Algorithm   Best/km    Mean/km    Worst/km
10                 GA          2.69       2.76       2.84
10                 TOC-PGA     2.69       2.78       2.83
30                 GA          444.21     447.44     461.33
30                 TOC-PGA     444.12     446.27     446.52
50                 GA          720.17     756.89     862.94
50                 TOC-PGA     712.29     765.16     856.13
75                 GA          1322.40    1468.93    1609.11
75                 TOC-PGA     1288.38    1389.20    1585.46
The TOC-PGA algorithm given in this paper is built on the TOC platform, which handles large data volumes and has great potential for parallel computing. Using the high parallelism of TOC, the genetic algorithm is improved. On this basis, TOC-PGA differs from the traditional GA algorithm as follows: (1) In order to make full use of the parallel computing potential of TOC, and based on a parallelism analysis of each stage of the GA algorithm, the TOC-PGA algorithm
proposed in this paper adopts a hardware-parallel mode supported by the many data bits of TOC. (2) Admittedly, the TOC-PGA algorithm increases the amount of computation while reducing the number of clock cycles; however, it keeps the required hardware resources within the range acceptable to TOC while shortening the clock cycles, and at present it is difficult for GA algorithms on electronic computers to implement this parallel scheme. (3) The computing performance of the ternary optical computer is stronger than that of the electronic computer, and it can realize high-speed computation on large amounts of data.

Fig. 2. Optimization results of TSP problem with 30 cities
5 Conclusion

This paper designs a parallel GA algorithm based on TOC and gives its implementation steps. The genetic algorithm is improved, and the experiments confirm the correctness of the method. A way to improve the convergence time and search accuracy of the GA algorithm is analyzed from a new perspective. The work provides an effective method for solving the optimization of high-dimensional complex functions and a new idea for the parallel computation of other bionic algorithms.
Acknowledgements. This research was supported in part by the Project of National Natural Science Foundation of China under Grant 61672006, the Key Project of Natural Science Research in Anhui under Grants KJ2017A340, KJ2019A0533 and KJ2019A0535, the innovation team from Fuyang Normal University under Grants XDHXTD201703 and XDHXTD201709, the Natural Science Research Key Project of Fuyang Normal University under Grant 2019FSKJ06ZD, and the Fuyang Normal University Young Talents Key Project under Grant rcxm202004.
References 1. Jin, Y., Ouyang, S., Song, K., et al.: Management of many data bits in ternary optical computers. Sci. China Sin. Inf. Sci. 43(3), 361–373 (2013)
2. Ouyang, S., Peng, J.J., Jin, Y., et al.: Structure and theory of dual-space storage for ternary optical computer. Sci. Sinica Inf. Sci. 46(6), 743–762 (2016) 3. Li, S., Jin, Y., Liu, Y., et al.: Initial SZG file generation software of the ternary optical computer. J. Shanghai Univ. Nat. Sci. 24(2), 181–191 (2018) 4. Li, S., Jiang, J.B., Wang, Z.H., et al.: Basic theory and key technology of programming platform of optical computer. Optik (Stuttg), 327–336 (2019) 5. Xu, Q., Jin, Y., Sheng, Y.F., et al.: MSD division algorithm and implementation technique for ternary optical computer. Sci. China Ser. F-Inf. Sci. 46(4), 539–550 (2016) 6. Li, S., Jin, Y.: Simple structured data initial SZG files generation software design and implementation. In: International Conference of Wireless Communication Sensory Networks, pp. 383–388 (2016) 7. Gao, H., Jin, Y., Song, K.: Extension of C language in ternary optical computer. J. Shanghai Univ. Nat. Sci. 19(3), 280–285 (2013) 8. Zhang, Q., Jin, Y., Song, K., et al.: MPI programming based on ternary optical in supercomputer. J. Shanghai Univ. Nat. Sci. 20(2), 180–189 (2014) 9. Yan, J.Y., Jin, Y., Zuo, K.Z.: Decrease-radix design principle for carrying/borrowing free multi-valued and application in ternary optical computer. Sci. China Ser. F 51(10), 1415–1426 (2008) 10. Jin, Y., Gu, Y.Y., Zuo, K.Z.: Theory, technology and progress of a ternary optical computer’s decoder. Sci. China Ser. F Inf. Sci. 43(2), 275–286 (2013) 11. Peng, J.J., Shen, R., Jin, Y., et al.: Design and implementation of modified signed-digit adder. IEEE Trans. Comput. 5(63), 1134–1143 (2014) 12. Peng, J.J., Shen, R., Ping, X.S.: Design of a high-efficient MSD adder. J. Supercomput. 72(5), 1770–1784 (2016) 13. Jin, Y., Shen, Y.F., Peng, J.J., et al.: Principles and construction of MSD adder in ternary optical computer. Sci. China Inf. Sci. 53(11), 2159–2168 (2010) 14. Shen, Y.F., Pan, L., Jin, Y., et al.: One-step binary MSD adder for ternary optical computer. Sci. China Sin. Inf. Sci. 42(7), 869–881 (2012)
Mathematical Modeling of CT System Parameters Calibration and Imaging Defang Liu(B) , Jia Zhao, Xianchao Wang, and Xiuming Chen Fuyang Normal University, Fuyang 236037, Anhui, China [email protected]
Abstract. A Radon transform based on data translation and the corresponding inverse transform method are proposed to calibrate the parameters of a CT system with the help of a sample of known structure (called a template) and then to image a sample of unknown structure. The experimental results show that the CT imaging is clear and accurate and can be widely applied in the field of artificial intelligence recognition. Keywords: CT · Imaging · Radon transform
1 Introduction

CT (Computed Tomography) uses the ray-energy absorption characteristics of a sample to perform tomographic imaging of biological tissues and engineering materials without destroying the sample, thereby obtaining structural information inside it [1]. The working principle of CT is to use the Radon transform and its inverse transform to realize projection image reconstruction [2]. From the data obtained by testing the CT system, the iradon inverse transform function provided by MATLAB can produce an image of the medium density distribution; according to the calibration parameter values of the CT system, this image is then translated, rotated and cropped to obtain the original image of the medium absorption rate [3–5]. Besides medical diagnosis, CT technology also plays an important role in biological research, industrial testing and geophysical research [6–8]. A typical two-dimensional CT system is shown in Fig. 1. Parallel incident X-rays are perpendicular to the plane of the detector, and each detector unit is regarded as a receiving point, with the units arranged at equal distances. The relative position of the X-ray transmitter and the detector is fixed, and the entire transmitting-receiving system rotates 180 times counterclockwise around a fixed center of rotation. For each X-ray direction, the ray energy after absorption and attenuation by a fixed two-dimensional medium is measured on a detector with 512 equidistant units, and 180 sets of received information are obtained after processing such as gain. There are often errors in the installation of the CT system, which affect the imaging quality. Therefore, it is necessary to calibrate the parameters of the installed CT system, that is, to calibrate the parameters of the CT system with the aid of a sample with a known structure
(called a template) and then, according to those parameters, image a sample of unknown structure. Ignoring the scattering of light, it is assumed that the initial X-ray energies of all transmitters are equal and that rays passing through air are not attenuated. The rays received by the 512 equidistant detector units do not affect one another and are of equal initial energy [9–12]. MATLAB is used to process the CT scan data, yielding a series of images related to the calibration and imaging of the CT system parameters. Using the Radon and iradon transform methods based on data translation, the image of the unknown medium is obtained [13–17].

Fig. 1. Schematic diagram of CT system
2 CT System Parameter Calibration and Imaging: Model Establishment and Solution

2.1 Preparatory Analysis Based on the Given Calibration Template and Received Information

Place two calibration templates composed of uniform solid media on a square tray. The geometric information of the template is shown in Fig. 2. The value corresponding to each point reflects the absorption intensity of that point, called the “absorption rate”. From the template and its received information, the position of the CT system's rotation center in the square tray, the distance between the detector units, and the 180 X-ray directions used by the CT system are determined.
Fig. 2. Schematic diagram of the template (unit: mm)
Take as reference the case in which the ray emitted by the equidistant transmitter is parallel to the long axis of the ellipse and travels from bottom to top; the emitting direction of the transmitter is then 0°. From the geometric information and received information of the template, the measurements that the transmitter reaches a position of 90° relative to the center of rotation at the 61st rotation and a position of 180° at the 151st rotation are only approximate, so a difference analysis of the statistics is needed to reduce the error.

2.2 Linear Interpolation

According to the known data, over the 180 rotation directions of the CT system the detector measured 180 sets of projections, and the maximum value of each set is computed. Since a local maximum of the sampled data is not necessarily the true local maximum, the local maxima around the 61st and 151st rotations on the abscissa are linearly interpolated to obtain values with smaller errors. To this end, the data describing the relationship between the rotation number and the peak of the received information is first differenced, and linear interpolation is then carried out on the basis of the difference analysis to approximate the required values. Suppose the 90° position relative to the center of rotation is reached during the x-th rotation; then

(62 − x)/(0.0089 − 0) = (x − 61)/(0.0043 − 0)

whose solution is x = 61.3258. Suppose the 180° position relative to the center of rotation is reached during the y-th rotation, that is, when the ray on the right side of Fig. 2 is scanned parallel to the short axis of the ellipse; then

(y − 151)/(0.2301) = (152 − y)/(0.04)

whose solution is y = 151.8519. Conclusion: the 90° position relative to the center of rotation is reached at the 61.3258th rotation, and the 180° position at the 151.8519th rotation.
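The two interpolation results can be checked with a few lines of Python; the peak-difference values 0.0043, 0.0089, 0.2301 and 0.04 are those quoted above, and the helper name is illustrative.

def interpolate(k, k_next, d_prev, d_next):
    # Position of the true peak between rotations k and k+1,
    # obtained from (k_next - p)/d_next = (p - k)/d_prev.
    return (k * d_next + k_next * d_prev) / (d_prev + d_next)

print(interpolate(61, 62, 0.0043, 0.0089))   # ≈ 61.3258 (90° position)
print(interpolate(151, 152, 0.2301, 0.04))   # ≈ 151.8519 (180° position)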
according to Fig. 2. The data shows that the two circles are separated by 45 mm. Suppose the distance between each detector is L, then L=
45 mm ≈ 0.2761 mm 223 − 60
So the distance between the detector units is about 0.2761 mm. According to the solution of the above difference analysis method, when the transmitter rotates 90° relative to the center of the ellipse and reaches a position of 180° relative to the center of the ellipse, it has rotated 90.5261 times, and it can be inferred that the equidistant detector has rotated about 0.9942° once. Suppose the angle of the starting position is n, then n = 90◦ − (61.3258 − 1) ∗ 0.9942◦ = 30.0241◦ Because the entire transmitter-receiver system rotates counterclockwise from the starting position 179 times around a fixed center of rotation, a total of 177.9618° is rotated. 30.0241◦ + 177.9618◦ = 207.9859◦ Therefore, the 180 directions of x-rays used by the CT system are 30.0241°, 31.0183°,… 207.9859° of these 180 directions. 2.4 The Position of the CT System Rotation Center in the Square Tray Assuming that the center of the ellipse is the rotation center of the equidistant detector, when the equidistant detector plate is parallel to the y axis, the information should be received at the 256th emission point, but it is actually received at 235, so the rotation center of the equidistant detector is offset by the length of 21 equidistant detectors in the y-axis direction, that is, 21 * 0.2761 mm = 5.7981 mm. When the equidistant detector plate is parallel to the x axis, the information should have been received at the 256th emission point, but it is actually received at 223, so the rotation center of the equidistant detector is offset to the left by the length of 33 equidistant detectors on the x axis, that is, 33 * 0.2761 mm = 9.1113 mm. Therefore, the coordinates of the CT system rotation center in the Cartesian coordinate system are (−9.1344 mm, 5.8128 mm), which is point A in the figure, as shown in Fig. 3.
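The arithmetic of Sects. 2.3 and 2.4 is easy to reproduce; the following Python lines use the detector indices and counts quoted above (all variable names are illustrative) to obtain the detector spacing, the per-rotation angle, the 180 directions and the rotation-center offsets.

L = 45.0 / (223 - 60)                      # detector spacing in mm, ≈ 0.2761
step = 90.0 / (151.8519 - 61.3258)         # angle per rotation in degrees, ≈ 0.9942
start = 90.0 - (61.3258 - 1) * step        # starting direction, ≈ 30.02°
directions = [start + k * step for k in range(180)]   # the 180 X-ray directions
offset_y = (256 - 235) * L                 # ≈ 5.80 mm
offset_x = -(256 - 223) * L                # ≈ −9.11 mm (offset to the left)
print(L, step, start, directions[-1], offset_x, offset_y)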
Fig. 3. Schematic diagram of the center of rotation, taking the center of the ellipse as the origin and the diameter of the small circle as the positive direction to make a rectangular coordinate system
3 Establishment and Solution of the Model of Parameter Calibration and Imaging of the Medium in the CT System

3.1 Solving with the Radon Transform and Inverse Radon Transform

Based on the establishment and solution of the CT system parameter calibration model, the position of the CT system's rotation center in the square tray has been determined. Analysis of the source code of the MATLAB library functions radon() and iradon() shows that they presuppose that the center of the square tray is the center of rotation. Since the actual rotation center is not at the center of the tray, radon() and iradon() cannot be used directly. If, however, the data obtained in the previous section with respect to the actual rotation center are first translated so that the center of the square tray becomes the rotation center, the library functions radon() and iradon() can then be applied (Fig. 4). The translation is given by

motion = − l · sin(θ0 − θ) / J

where θ0 is the angle between the line joining the rotation center and the center of the square tray and the vertical downward direction, θ is the angle between the X-ray emitted by the rotating transmitter and the vertical upward direction (taking the values 30.0241°, 31.0183°, …, 207.9859°), l is the length of the line joining the rotation center and the center of the square tray, and J is the distance between adjacent detector units. θ0, J and l are obtained in turn from the results of the previous section. If motion is positive the data are shifted to the left, otherwise to the right, and the vacated positions are filled with zeros. According to the known data, the MATLAB drawing tool can display the image of the CT system rotated 180 times counterclockwise, and the display shows that the received information reaches its highest peak at approximately the 148th rotation.
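A minimal sketch of this translation step (written here in Python/NumPy rather than MATLAB, so it is an illustration rather than the authors' code) is given below; it assumes the sinogram is stored as a 180 × 512 array with one row per direction and that theta0_deg, l and J come from the calibration above.

import numpy as np

def translate_sinogram(sinogram, thetas_deg, theta0_deg, l, J):
    """Shift each projection so the tray center becomes the rotation center."""
    out = np.zeros_like(sinogram)
    for k, theta in enumerate(thetas_deg):
        motion = -l * np.sin(np.radians(theta0_deg - theta)) / J
        shift = int(round(motion))
        row = sinogram[k]
        if shift > 0:                      # positive motion: shift left, zero-fill on the right
            out[k, :len(row) - shift] = row[shift:]
        elif shift < 0:                    # negative motion: shift right, zero-fill on the left
            out[k, -shift:] = row[:len(row) + shift]
        else:
            out[k] = row
    return out

After this translation the standard inverse Radon transform (MATLAB's iradon(), or an equivalent filtered back-projection routine) can be applied to the shifted data.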
Fig. 4. Data translation mode
According to the parameter calibration in the previous section, the highest value of received information was reached at 151 times. According to the drawn geometric shape, the ray emitted by the transmitter deviated from the center by approximately 3° when passing through the thickest direction of the sample. From this, the position and geometry of the unknown medium in the square tray can also be determined. 3.2 Verify the Validity of Data Translation Use MATLAB to draw the image of known data A, use Radon inverse transformation to process the data of known data B to obtain the reconstructed data of known data A, and draw the binary image of the reconstructed data, as shown in Fig. 5(a) . In order to compare the results, the binary image of the data reconstructed without translation is drawn at the same time, as shown in Fig. 5(b), and the images of the original data A and B are also given. As shown in Fig. 5, the binary image of the data without translation The binary image is missing the data of the small circle, and the translated data is better to restore the original data. 3.3 Location, Geometry and Absorption Rate of Unknown Medium By translating the data representing the unknown medium, and then using the Radon inverse transformation, the position, geometry and absorption rate of the unknown medium can be obtained, as shown in Fig. 6. The value of the media-related data reflects its absorption strength, that is, the “absorption rate”. The frequency histogram can be used to calculate the distribution of the absorption rate of the medium A data. The medium in Sect. 1 is a single homogeneous material, while the unknown medium in Sect. 2 is composed of two materials with different absorption rates.
Fig. 5. Comparison of translation effects
Fig. 6. Reconstruction of unknown medium data after translation
4 Conclusions and Future Work

This article establishes a logical and scientific mathematical model that can accurately obtain CT images. On the basis of the stated assumptions, the absorption intensity of the medium in the CT scan is calculated and the specific shape of the medium is obtained. A correct and reasonable data processing method, the difference analysis method, is used to reduce the error. The position of the center point is deduced, a Radon model based on the translation-and-reconstruction method is constructed, and a reasonable and accurate medium shape is derived from it. The disadvantage is that this article makes many assumptions that deviate to some extent from the facts,
which affect the experimental results. Even if the center of rotation is not a fixed point, the specific shape of the medium can still be deduced from the model analysis. In addition, the template used in this article is too simple, being only a single square template; if the template is optimized, the data will be optimized as well. This model can be extended to more fields, not only CT imaging technology but also artificial intelligence scan recognition. Acknowledgements. This work was financially supported by the Natural Science Fund of Anhui Province Education Ministry under grant NO. KJ2019A0541; Anhui Provincial Quality Engineering Key Project NO. 2018JYXM0507; Fuyang Normal University Natural Science Research Key Project NO. 2017FSKJ04ZD.
References 1. Zhang, X., et al.: Research on projection reconstruction model based on CT technology. J. Capital Normal Univ. (Natural Science Edition) 6, 32–37 (2019) 2. Wang, X.: CT image Correction and Parallel Reconstruction. Shenzhen University, Shenzhen (2019) 3. Changzheng, C.: Research and application of CT image iterative reconstruction algorithm. China University of Mining and Technology (2019) 4. Nan, M., et al.: Application of X-ray-based computer tomography technology in security inspection. China Secur. Technol. Appl. 5, 59–65 (2019) 5. GB/T 37128–2018: Technical Requirements for X-Ray Computed Tomography Safety Inspection System. China Standard Press, Beijing (2019) 6. Zhen, N.: Research on fast reconstruction algorithm based on region of interest North University of China (2018) 7. Wang, S.: Experimental study of linear scan CT. Chongqing University (2018) 8. Introduction to X-ray CT technology and its security application [EB/OL]. http://www.saf echk.com/showinfo-30-4774-0.html 9. Li, L., Wang, L., Cai, A., et al.: Fast projection decomposition algorithm for X-ray dual-energy CT based on contour fitting. Acta Optica Sinica 8, 309–317 (2016) 10. Weiwen, W., Quan, C., Liu, F.: Relative parallel straight line scan CT filtered back projection image reconstruction. Acta Optics 9, 157–167 (2016) 11. Mao, X.: Two-Dimensional CT Image Reconstruction Algorithm Research. Nanchang Hangkong University, Nanchang (2016) 12. Ma, S.: Research and application of 3D reconstruction algorithm based on CT images. Shandong University (2015) 13. Zhu, X., Huo, J., Yang, D., et al.: The method of single-energy CT to generate dual-energy CT images and the image quality evaluation. Chin. J. Sci. Inst. 35(1), 68–73 (2014) 14. Gong, X., Han, L., Li, H.: Anisotropic Radon transform and its application in multiple suppression. Chin. J. Geophys. 57(09), 2928–2936 (2014) 15. Shi, Y., Wang, W.: Combined suppression of surface multiples based on wave equation prediction and hyperbolic Radon transform. Chin. J. Geophys. 55(09), 3115–3125 (2012) 16. Li, B., Zhang, Y.: Measurement of projection rotation center of X-ray CT system. Opt. Precis. Eng. 19(5), 967–971 (2011) 17. Wei,G.: Application research of discrete XYT model based on gait recognition. Tianjin University (2008)
Design and Implementation of Information Digest Algorithm on a Ternary Optical Computer Junlan Pan1 , Qunli Xie1 , Jun Wang2 , Hengzhen Cui1 , Jie Zhang1(B) , and Xianchao Wang1(B) 1 Fuyang Normal University, Fuyang 236037, Anhui, China 2 Huaibei Normal University, Huaibei 235000, Anhui, China
Abstract. In order to explore the information security field of ternary optical computers, this paper designs a new message digest algorithm based on the two-input ternary logic operations of the ternary optical computer. Analysis shows that, for the same output length, its security is greatly improved compared with message digest algorithms designed on binary logic, giving full play to the advantages of ternary optical computers. Keywords: Ternary optical computer · Message digest algorithm · Information security · Birthday attack
1 Introduction

TOC (ternary optical computer) uses the no-light state and two mutually perpendicular polarized light states to represent three values. It features low power consumption, a very large number of processor bits, and a calculation function that can be reconstructed for each processor bit; these characteristics give TOC a computing potential far greater than that of electronic computers [1]. In addition, the ternary optical computer adopts two-input ternary logic operations, so it has 19,683 ternary logic operations, while current electronic computers have only 16 binary logic operations. This gives TOC strong logical reasoning ability and a large capacity for private key distribution [2–4]. The function of a message digest algorithm is to compress a message sequence of any length into a fixed-length hash value [5]. The algorithm is generally irreversible, that is, the original text cannot be recovered from the hash value. Message digest algorithms are used in a wide range of applications, such as signature verification servers, smart IC cards, digital certificate authentication systems [4], PCL (program control logic) password cards and other fields. However, the widely used message digest algorithms such as the MD (message digest) series and SHA (secure hash algorithm) series are designed on binary logic operations. If the output length of a binary-based message digest is no more than 40 bits, then according to the theory of the birthday attack [6,
7], only about one million random attempts may be enough to find a collision. If instead a new message digest algorithm is designed with TOC's two-input ternary logic, then, as analyzed in this article, for the same 40-digit output hash value about 3 billion random attempts are needed to find a collision with 50% probability. Therefore, this paper proposes a new message digest algorithm based on the ternary optical computer and an ADD_SALT (salting) step, and discusses its strength against birthday attacks in detail [8].
2 Message Digest Algorithm

The message digest algorithm is an important part of modern cryptography. Its essence is a one-way hash function, so it is also called a hash algorithm. It is mainly used to verify the integrity of messages and is widely applied in digital signatures, secret key management, smart IC cards, PCL cards and other fields. The main process of a message digest algorithm can be summarized as: receive a message sequence of variable length and, after a series of expansion, compression and iteration steps, output a fixed-length message digest, also called a digital fingerprint [9]. The world's first directly constructed message digest function is MD4, which builds a 128-bit digital fingerprint through Boolean functions, shift operations and modular operations. After that, algorithms such as MD5, SHA-1, RIPEMD and HAVAL were born. For a long time afterwards, the MD5 and SHA series of algorithms, issued in the United States and considered secure by default, were widely used. In August 2004, the Chinese scientist Wang Xiaoyun announced the results of collision attacks against the four international algorithms MD4, MD5, HAVAL and RIPEMD, and in May of the following year she announced that the SHA-1 series of ciphers had also been broken. Since then, the demand for more secure message digest algorithms has grown [10, 11]. The security of a message digest algorithm is mainly judged by whether it resists preimage attacks, second-preimage attacks and collision attacks. Specifically, let the message digest function be y = f(x): (1) Preimage resistance: for a given digest y, it is infeasible to recover a message x with f(x) = y, that is, x = f^{-1}(y) cannot be computed. (2) Second-preimage resistance: for a given message x1, it is infeasible to find a different message x2 such that f(x1) = f(x2). (3) Collision resistance: without any digest being specified, it is infeasible to find two different messages x1 and x2 such that f(x1) = f(x2). Among these three properties, the second-preimage attack can be regarded as a special case of the collision attack. The collision attack is a general attack method that can target any message digest function, and the birthday attack discussed in this article is one kind of collision attack.
2.1 Birthday Attack

The principle of the birthday attack is based on the birthday paradox. Before introducing the birthday paradox, consider a question: if Xiao Ming walks down the street, what is the probability that he meets someone whose birthday falls on the same day as his? It seems we can calculate it like this: a year has 365 days, so the probability of meeting someone with the same birthday is 1/365, about 0.27%, which seems very small. Now consider another question: in a room, how many people are needed at least so that the probability of two of them sharing a birthday exceeds 50%? In fact, only 23 people are needed. This number looks surprisingly low and does not fit intuition, which is why it is called the birthday paradox. The derivation is as follows. Let P(n) be the probability that, among n people, at least two share a birthday. Ignoring leap years, a year has 365 days, so P(n) satisfies

P(n) = 1 − ∏_{i=1}^{n−1} (1 − i/365)

From the inequality of arithmetic and geometric means,

∏_{i=1}^{n−1} (1 − i/365) ≤ [ (1/(n−1)) Σ_{i=1}^{n−1} (1 − i/365) ]^{n−1} = (1 − n/730)^{n−1}

and from e^{−x} > 1 − x we obtain

(1 − n/730)^{n−1} < e^{−n(n−1)/730},  so  P(n) > 1 − e^{−(n² − n)/730}

When n² − n > 730 ln 2 ≈ 505.997, P(n) exceeds 0.5, so taking n² − n = 506 gives n = 23. The birthday attack uses the principle of the birthday paradox to find colliding hash values, forge messages and thus defeat the message digest algorithm.
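The exact product formula and the exponential bound are easy to check numerically; the short Python sketch below (added for illustration, not taken from the paper) confirms that n = 23 is the first value for which P(n) exceeds 0.5.

import math

def p_collision(n, days=365):
    # Exact probability that at least two of n people share a birthday
    prod = 1.0
    for i in range(1, n):
        prod *= 1 - i / days
    return 1 - prod

def p_bound(n, d):
    # Lower bound 1 - exp(-(n^2 - n) / (2d)) used in the derivation above
    return 1 - math.exp(-(n * n - n) / (2 * d))

print(p_collision(22), p_collision(23))   # ≈ 0.476 and ≈ 0.507, so 23 people suffice
print(p_bound(23, 365))                   # ≈ 0.500, matching n² − n ≈ 506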
2.2 Salting Algorithm

The salting algorithm mainly enhances the dispersion of the message digest algorithm. The salt is a random number, which can come from a random number generator, the system time, the CPU oscillation frequency, and so on. Because the salt is generated randomly by the system, it increases the difficulty of finding a collision: the attacker must not only find a message that collides with the original text but also know the salt. The basic process is shown in Fig. 1.
Fig. 1. Salting algorithm flowchart
When encrypting, the message is hashed together with the salt and the result is stored in the database along with the salt. When verifying, the value to be checked is hashed together with the salt from the database and compared with the stored hash value to reach a conclusion.
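A minimal Python sketch of this store-and-verify flow is given below; it uses the standard hashlib and os modules, sha256 merely stands in for whatever digest function is used, the 128-bit salt length mirrors the TOC-512 design described later, and the function and variable names are illustrative.

import hashlib, os

def store(message: bytes):
    salt = os.urandom(16)                                  # 128-bit random salt
    digest = hashlib.sha256(salt + message).hexdigest()    # hash salt and message together
    return salt, digest                                    # both are kept in the database

def verify(message: bytes, salt: bytes, stored_digest: str) -> bool:
    return hashlib.sha256(salt + message).hexdigest() == stored_digest

salt, digest = store(b"hello")
print(verify(b"hello", salt, digest))    # True
print(verify(b"hell0", salt, digest))    # False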
3 Design of Information Digest Algorithm Based on TOC

Compared with electronic computers, optical computers can get rid of the shackles of metal wires and can use lenses to reflect light and transmit signals spatially, the so-called free-space optical technology. Early optical computer designs, however, were still influenced by electronic computers and simply used flashing light beams instead of electronic pulses; this idea runs into many difficulties in component design, for example manufacturing-precision requirements that prevent everyday application. The design concept of the ternary optical computer was proposed precisely to get rid of the influence of the electronic computer.
The ternary optical computer has the characteristics of high concurrency and a huge number of bits, and its processor can be divided into groups according to the tasks currently being processed.

3.1 Algorithm Description

TOC-512: algorithm name.
M, L: message and message length.
M’, L’: filled message and its length.
mod: modular operation.
s0, s1: intermediate variables during word expansion.
S0, S1, ch, maj, temp1, temp2: intermediate variables during loop iteration.
Z[i]: the i-th extended double word.
>>>k: cyclic right shift by k bits.
>>k: right shift by k bits.
←: left assignment operator.
+: MSD addition of TOC.
a, b, c, d, e, f, g, h: word registers storing the initial values of the iteration.
A, B, C, D, E, F, G: intermediate word registers in the iterative compression process.
K: constants required during the compression iteration.
W, M, N: two-input ternary logic transformations.
For a message with a length of L bits, the TOC-512 hash algorithm goes through filling, iterative compression, salting, and hash value generation, and the resulting hash value is 512 bits long. Table 1 is the truth table required for the two-input ternary logic operations, and the flow chart is shown in Fig. 2.

Table 1. Truth table of the logic transformations used in the information digest algorithm (inputs OP1, OP2 and the outputs of the ternary transformations)
Fig. 2. Algorithm flowchart
3.2 Algorithm Security Analysis

From the analysis of the birthday paradox in Sect. 2.1, when the value space is set to d (in the birthday paradox the value space is 365), the collision probability P(n, d) can be expressed as

P(n, d) = 1 − e^(−n(n−1)/(2d))

Thus, when the collision probability is P = 0.5, we get roughly n ≈ √d. For ordinary binary logic operations, an output hash length of 40 bits is considered unsafe, because the value space is then d = 2^40, so only n ≈ 10^6 values are needed; that is, about one million random values give a better than 50% chance of finding a collision. For two-input ternary logic operations of the same length (that is, 40 ternary digits), the value space is d = 3^40, and n ≈ 3.4 × 10^9, roughly 3.4 billion values, are needed for a 50% chance of finding a collision. For the 512-digit message digest function of this article, about 1.3900845237714473 × 10^122 attempts are required for a 50% chance of collision. Therefore, for the same output length, a message digest function designed on the TOC is more secure than one designed on an ordinary electronic computer.

3.3 The Scheduling Queueing System

This article uses Python on the PyCharm platform to simulate the TOC-512 algorithm.
1) Filling: first append a digit “−1” to the message M, then append K 0s, where K satisfies (L + 1 + K) mod 1024 = 896.
2) Salting: then append a 128-bit salt value to the message M’ obtained from the filling step. The 128-bit salt is generated automatically by the TOC random number generator and cannot be interfered with by human intervention. In this way, the length of the message M’ is an integer multiple of 1024, which is convenient for the subsequent grouping.
3) Grouping: group the filled message M’ into 1024-bit blocks, M’ = B0 B1 … Bn−1, where n = (L + K + 129)/1024.
4) Expansion: first expand each block Bn into 140 words. Specifically, divide Bn directly into 16 double words Z[0]–Z[15] of 64 bits each, and then generate the remaining 124 words according to the following rule:
FOR i FROM 16 TO 139:
    s0 = (Z[i-15]>>>3) (Z[i-15]>>>8) (Z[i-15]>>5)
    s1 = (Z[i-3]>>>23) (Z[i-3]>>>71) (Z[i-3]>>9)
    Z[i] = Z[i-7] + s0 + Z[i-9] + s1
ENDFOR
5) Compression iteration. At the beginning of the iteration, the initial values of the eight word registers a, b, c, d, e, f, g, h are taken from the primes 23, 29, 31, 37, 41, 43, 47 and 53, using the first 64 bits of the hexadecimal representation; these constants provide a set of 64-bit random-looking strings that preliminarily remove statistical regularities from the input data. Figure 3 is a screenshot of the program running:
Fig. 3. Program operation diagram
After the loop ends, the values in the word registers a, b, c, d, e, f, g, h form the 512-bit hash value. The feature of TOC-512 is that every bit of the output hash code is a function of all input bits. The many complex repetitions of the basic function F confuse the result sufficiently, so that even if two randomly selected messages have similar characteristics, it is very unlikely that they produce the same hash code.
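The filling, salting and grouping of steps 1)–3) can be sketched as follows in Python; the message and salt are treated as lists of digits in {−1, 0, 1}, the secrets module stands in for the TOC random number generator, and all names are illustrative assumptions rather than the authors' implementation.

import secrets

def pad_and_group(msg_digits):
    # Step 1: append a "-1" marker, then K zeros so that (L + 1 + K) mod 1024 == 896
    L = len(msg_digits)
    K = (896 - (L + 1)) % 1024
    padded = msg_digits + [-1] + [0] * K
    # Step 2: append a 128-digit random salt
    salt = [secrets.randbelow(3) - 1 for _ in range(128)]
    padded += salt
    assert len(padded) % 1024 == 0
    # Step 3: group into 1024-digit blocks B0, B1, ..., Bn-1
    return [padded[i:i + 1024] for i in range(0, len(padded), 1024)], salt

blocks, salt = pad_and_group([1, 0, 1, 1])
print(len(blocks), len(blocks[0]))    # 1 1024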
4 Conclusions and Future Work

This paper uses the two-input ternary logic operations of the ternary optical computer to design a new message digest algorithm. Analysis against the classic birthday attack of cryptography and simulation experiments show that its security is higher than that of message digest algorithms designed on binary logic operations, which expands the information security field of the ternary optical computer. The next step is to test its actual performance on the ternary optical computer.
Acknowledgements. This research was supported in part by the Project of National Natural Science Foundation of China under Grant 61672006, the Key Project of Natural Science Research in Anhui under Grants KJ2017A340, KJ2019A0533 and KJ2019A0535, and the innovation team from Fuyang Normal University under Grants XDHXTD201703 and XDHXTD201709. We would like to thank the reviewers for their beneficial comments and suggestions, which improved the paper.
References 1. Jin, Y., Wang, Z.H., Liu, Y.J., Ou, Y.S., Shen, Y.F., Peng, J.J.:Three-value optical computer. Nat. J. 41(03), 207–218 (2019) 2. Li, S., Jiang, J.B., Wang, H.Z., Zhang, H.H.: Basic theory and key technology of programming platform of ternary optical compute. Optik 178, 327–2336 (2018) 3. Jin, Y.: Principle and structure of ternary optical computer. Northwestern Polytechnical University (2002) 4. Wang, X.C.: Task Management and Theoretical Research of Ternary Optical Computer Monitoring System. Shanghai University (2011) 5. Zhang, C.Z.: Implementation and application of SM3 algorithm in hardware encryption module. Inf. Commun. 09, 15–16 (2019) 6. Wang, X.Y., Yu, H.B.: SM3 cryptographic hash algorithm. Inf. Secur. Res. 2(11), 983–994 (2016) 7. Zhang, H.G., Han, W.B., Lai, X.J., Lin, D.D., Ma, J.F,. Li, J.H.: Cyberspace security review. Sci. China: Inf. Sci. 46(02), 125–164 (2016) 8. Zheng, D., Zhao, Q.L., Zhang, Y.H.: A review of cryptography. J. Xi’an Univ. Posts Telecommun. 18(06), 1–10 (2013) 9. Du, M.Z.: Overview of the research and development of cryptography. China Sci. Technol. Inf. 32–34Z (2010) 10. Shen, C.X., Zhang, H.G., Feng, D.G., Cao, Z.F., Huang, J.W.: Overview of information security. Sci. China 129–150 (2007) 11. Huang, Y., Hu, W.D., Chen, K.F.: Research on the classification of network attacks and security protection. Comput. Eng. 131–133 (2001)
Design and Implementation of SM3 Algorithm Based on a Ternary Optical Computer Junlan Pan, Henzeng Cui, Defang Liu(B) , and Xianchao Wang(B) Fuyang Normal University, Fuyang 236037, AnHui, China [email protected]
Abstract. The SM3 algorithm is a domestic commercial cryptographic hash algorithm issued by the State Cryptography Administration of China and is widely used in software and hardware security protection. As a new type of computer, the ternary optical computer has a large number of processor bits and high parallelism, and its processor can be reconstructed and grouped for independent use at any time; most importantly, the MSD adder it uses has no carry delay toward the high-order bits. The analysis and simulation experiments in this article show that using the characteristics of the ternary optical computer to optimize the critical path of the SM3 algorithm can raise the throughput to 365.714 Gbit/s, far exceeding the 526.2 Mbit/s of the original scheme, which greatly improves the computational efficiency of SM3. This is of great significance for accelerating the promotion of the national SM3 algorithm and for making domestic ciphers the cryptographic module of our information security systems. Keywords: SM3 algorithm · Ternary optical computer · MSD addition
1 Introduction

In the era of information globalization, commercial ciphers become more and more important as the demand for data security protection grows. However, the current general international cipher algorithms (the block cipher DES (data encryption standard), the public key cipher RSA, the digest algorithm MD5 (message-digest algorithm), etc.) were published by United States agencies and are also the most common commercial algorithms today. The National Secret SM3 algorithm is a domestic commercial cryptographic hash algorithm promulgated by China's State Cryptography Administration in 2010, and it formally became an international standard in November 2017. Since then, many studies have sought to improve the efficiency of the SM3 algorithm. From the operating principle of SM3, the additions in its compression and iteration process are the most critical source of delay in the whole computation [1–4]. To address this, there are methods that optimize the critical path with a CSA (carry save adder) [5], and there are also methods that combine SM3 with an ARM (advanced RISC machine)
processor [6]. However, all of them are limited by the design principles of current electronic computers, so it is difficult to really solve the carry delay problem of addition, and the efficiency of the SM3 algorithm still has room for optimization. The ternary optical computer (TOC) was proposed by Professor Jin of Shanghai University in 2003. It has low power consumption, a large number of processor bits and a reconfigurable computing function for each processor bit, characteristics that give TOC a computing potential much larger than that of electronic computers. Most importantly, it uses an MSD parallel adder, which implements true multi-bit carry-free addition [7–9]. Therefore, the focus of this study is to optimize the critical path of the SM3 algorithm and improve its throughput by combining the TOC's many-data-bit parallel computing mode with the MSD adder.
2 Related Research

2.1 The National Secret SM3 Algorithm

In terms of security, within a suitable range, the longer the generated hash value, the more secure the cipher. Compared with the 128-bit hash value of MD5 and the 160-bit hash value of SHA-1, SM3 generates a 256-bit hash value and is therefore more secure than MD5 and SHA-1. Internally, the SM3 algorithm makes reasonable use of word addition and uses the P permutation to accelerate the avalanche effect and improve security. The SM3 algorithm has a clear structure and mostly uses basic operations, so it can be implemented across platforms, which is also the basis for combining it with TOC [10].

2.2 Ternary Optical Computer

The first prototype of a ternary optical computer, SD16, was built in 2017 at the Optical Computer Research Center of Shanghai University. Unlike the electronic computer, which uses the presence and absence of electrical signals to represent the codes 0 and 1, TOC uses the no-light state and two orthogonally polarized light states to represent information [11]. Compared with electronic computers, the advantages of the ternary optical computer are that it has a large number of processor bits and can be dynamically reconstructed into many small parts to meet different calculation requirements; addition of data with any number of bits can be completed within three instruction cycles; and SZG files can be used, so that programmers can keep their traditional programming techniques [12]. These characteristics allow the computing power of the ternary optical computer to surpass that of today's electronic computers. The ternary optical computer has developed through the decrease-radix design theory, the parallel adder and the dual-space memory structure, so it is now possible to implement the SM3 algorithm on its hardware, and classic algorithms such as the fast Fourier transform, cellular automata and the traveling salesman problem have already verified its computing capability with good results [13].
2.3 Modified Signed-Digit (MSD)

In the 1960s, in order to solve the carry delay problem of electronic computers, redundant number representations appeared. MSD is one such redundant representation, proposed by the computer scientist Avizienis. In this representation, because no carry propagates over more than two positions, the addition operation has no carry-delay problem. However, because the electronic computer is based on binary, it is difficult to realize redundant counting, and the MSD representation cannot be implemented on electronic computer hardware. The ternary optical computer does not have this problem: it can perform redundant counting directly and thus remove the addition delay for its huge number of data bits. For any real number A, its MSD expression is as follows:

A = Σ_i a_i × 2^i
Here i is an integer and a_i takes values in {−1, 0, 1} (with 1̄ often written for −1). Because a_i has three possible values, a number generally has several MSD representations, and the binary representation is a special case of the MSD representation. For example, for the decimal number 7:

(7)₁₀ = (111)₂ = (111)_MSD = (1 0 0 1̄)_MSD = (1 0 1̄ 1)_MSD

The specific steps of MSD addition are as follows. Step 1: align the digits of the two operands a and b, that is, prepend 0s to the operand with fewer digits so that the two numbers have the same length. Step 2: perform the T and W transformations on a and b at the same time, then append a 0 to the end of the T result, recording it as Rt, and pad the W result with a 0 for alignment, recording it as Rw. Step 3: perform the T’ and W’ transformations on Rt and Rw, then append a 0 to the end of the T’ result, recording it as Rt’, and pad the W’ result with a 0, recording it as Rw’. The four logic transformations are T, W, T’ and W’; their truth table is shown in Table 1.
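The redundancy of the MSD representation in the example above can be checked with a few lines of Python (illustrative only); each digit list is most-significant-digit first and −1 stands for the digit written 1̄.

def msd_value(digits):
    # Value of an MSD digit list (most significant digit first)
    return sum(d * 2 ** i for i, d in enumerate(reversed(digits)))

print(msd_value([1, 1, 1]))        # 7, same as binary 111
print(msd_value([1, 0, 0, -1]))    # 7
print(msd_value([1, 0, -1, 1]))    # 7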
3 SM3 Algorithm Analysis

(a) The first step is to fill the data to be hashed. Assuming the input is a message M of length L bits (L < 2^64), the SM3 algorithm pads M to an integer multiple of 512 bits in preparation for the message grouping below. The filling method is to first append the bit “1” to the end of the message, then append K bits “0”, where K is the smallest non-negative integer satisfying L + K + 1 ≡ 448 mod 512, and finally append a 64-bit string (the bit length L) at the end. The filled message is denoted M’.
Table 1. Truth table for MSD addition

OP1   OP2   T     W     T’    W’
1     −1    0     0     0     0
1     0     1     −1    0     1
1     1     1     0     1     0
0     1     1     −1    0     1
0     0     0     0     0     0
0     −1    −1    1     0     −1
−1    −1    −1    0     −1    0
−1    0     −1    1     0     −1
−1    1     0     0     0     0
(b) The second step is message grouping: M’ is divided into 512-bit blocks, M’ = B(0) B(1) … B(n−1), where n = (L + K + 65)/512. (c) The third step is iterative message compression. The compression process consists of two parts, message expansion and state update, and the iteration is as follows:
FOR i = 0 TO (n−1)
    V(i+1) ← CF(V(i), B(i))
ENDFOR
V(0) is the 256-bit initial value IV = 7380166f 4914b2b9 172442d7 da8a0600 a96f30bc 163138aa e38dee4d b0fb0e4e, B(i) is a block of the filled message, the result of the iterative compression is V(n), and CF is the compression function.

3.1 Message Expansion Function in the Compression Function

Expand each message block B(i) from the second step into 132 words W0, W1, …, W67 and W0’, W1’, …, W63’; the pseudo code is:
1) FOR j = 16 TO 67
       Wj ← P1(Wj−16 ⊕ Wj−9 ⊕ (Wj−3