Lecture Notes in Networks and Systems
722
Series Editor Janusz Kacprzyk, Systems Research Institute, Polish Academy of Sciences, Warsaw, Poland
Advisory Editors

Fernando Gomide, Department of Computer Engineering and Automation—DCA, School of Electrical and Computer Engineering—FEEC, University of Campinas—UNICAMP, São Paulo, Brazil
Okyay Kaynak, Department of Electrical and Electronic Engineering, Bogazici University, Istanbul, Türkiye
Derong Liu, Department of Electrical and Computer Engineering, University of Illinois at Chicago, Chicago, USA; Institute of Automation, Chinese Academy of Sciences, Beijing, China
Witold Pedrycz, Department of Electrical and Computer Engineering, University of Alberta, Alberta, Canada; Systems Research Institute, Polish Academy of Sciences, Warsaw, Poland
Marios M. Polycarpou, Department of Electrical and Computer Engineering, KIOS Research Center for Intelligent Systems and Networks, University of Cyprus, Nicosia, Cyprus
Imre J. Rudas, Óbuda University, Budapest, Hungary
Jun Wang, Department of Computer Science, City University of Hong Kong, Kowloon, Hong Kong
The series “Lecture Notes in Networks and Systems” publishes the latest developments in Networks and Systems—quickly, informally and with high quality. Original research reported in proceedings and post-proceedings represents the core of LNNS. Volumes published in LNNS embrace all aspects and subfields of, as well as new challenges in, Networks and Systems. The series contains proceedings and edited volumes in systems and networks, spanning the areas of Cyber-Physical Systems, Autonomous Systems, Sensor Networks, Control Systems, Energy Systems, Automotive Systems, Biological Systems, Vehicular Networking and Connected Vehicles, Aerospace Systems, Automation, Manufacturing, Smart Grids, Nonlinear Systems, Power Systems, Robotics, Social Systems, Economic Systems and other. Of particular value to both the contributors and the readership are the short publication timeframe and the world-wide distribution and exposure which enable both a wide and rapid dissemination of research output. The series covers the theory, applications, and perspectives on the state of the art and future developments relevant to systems and networks, decision making, control, complex processes and related areas, as embedded in the fields of interdisciplinary and applied sciences, engineering, computer science, physics, economics, social, and life sciences, as well as the paradigms and methodologies behind them. Indexed by SCOPUS, INSPEC, WTI Frankfurt eG, zbMATH, SCImago. All books published in the series are submitted for consideration in Web of Science. For proposals from Asia please contact Aninda Bose ([email protected]).
Radek Silhavy · Petr Silhavy Editors
Software Engineering Research in System Science Proceedings of 12th Computer Science On-line Conference 2023, Volume 1
Editors Radek Silhavy Faculty of Applied Informatics Tomas Bata University in Zlin Zlin, Czech Republic
Petr Silhavy Faculty of Applied Informatics Tomas Bata University in Zlin Zlin, Czech Republic
ISSN 2367-3370 ISSN 2367-3389 (electronic) Lecture Notes in Networks and Systems ISBN 978-3-031-35310-9 ISBN 978-3-031-35311-6 (eBook) https://doi.org/10.1007/978-3-031-35311-6 © The Editor(s) (if applicable) and The Author(s), under exclusive license to Springer Nature Switzerland AG 2023 This work is subject to copyright. All rights are solely and exclusively licensed by the Publisher, whether the whole or part of the material is concerned, specifically the rights of translation, reprinting, reuse of illustrations, recitation, broadcasting, reproduction on microfilms or in any other physical way, and transmission or information storage and retrieval, electronic adaptation, computer software, or by similar or dissimilar methodology now known or hereafter developed. The use of general descriptive names, registered names, trademarks, service marks, etc. in this publication does not imply, even in the absence of a specific statement, that such names are exempt from the relevant protective laws and regulations and therefore free for general use. The publisher, the authors, and the editors are safe to assume that the advice and information in this book are believed to be true and accurate at the date of publication. Neither the publisher nor the authors or the editors give a warranty, expressed or implied, with respect to the material contained herein or for any errors or omissions that may have been made. The publisher remains neutral with regard to jurisdictional claims in published maps and institutional affiliations. This Springer imprint is published by the registered company Springer Nature Switzerland AG The registered company address is: Gewerbestrasse 11, 6330 Cham, Switzerland
Preface
We are honored to present the refereed proceedings of the 12th Computer Science Online Conference 2023 (CSOC 2023), composed of three volumes: Software Engineering Perspectives, Artificial Intelligence Trends, and Cybernetics Perspectives in Systems. This Preface is intended to introduce and provide context for the three volumes of the proceedings. CSOC 2023 is a prominent international forum designed to facilitate the exchange of ideas and knowledge on various topics related to computer science. The conference was held online in April 2023, using modern communication technologies to provide researchers worldwide with equal participation opportunities.

The first volume, Software Engineering Research in System Science, encompasses papers that discuss software engineering topics related to software analysis, design, and the application of intelligent algorithms, machine, and statistical learning in software engineering research. These papers provide valuable insights into the latest advances and innovative approaches in software engineering research.

The second volume, Networks and Systems in Cybernetics, presents papers that examine theoretical and practical aspects of cybernetics and control theory in systems or software. These papers provide a deeper understanding of cybernetics and control theory and demonstrate how they can be applied in the design and development of software systems.

The third volume, Artificial Intelligence Application in Networks and Systems, is dedicated to presenting the latest trends in artificial intelligence in the scope of systems, systems engineering, and software engineering domains. The papers in this volume cover various aspects of artificial intelligence, including machine learning, natural language processing, and computer vision.

In summary, the proceedings of CSOC 2023 represent a significant contribution to the field of computer science, and they will be an excellent resource for researchers and practitioners alike. The papers included in these volumes will inspire new ideas, encourage further research, and lead to the development of novel and innovative approaches in computer science. April 2023
Radek Silhavy Petr Silhavy
Organization
Program Committee

Program Committee Chairs

Petr Silhavy, Tomas Bata University in Zlin, Faculty of Applied Informatics
Radek Silhavy, Tomas Bata University in Zlin, Faculty of Applied Informatics
Zdenka Prokopova, Tomas Bata University in Zlin, Faculty of Applied Informatics
Roman Senkerik, Tomas Bata University in Zlin, Faculty of Applied Informatics
Roman Prokop, Tomas Bata University in Zlin, Faculty of Applied Informatics
Viacheslav Zelentsov, Doctor of Engineering Sciences, Chief Researcher of St. Petersburg Institute for Informatics and Automation of the Russian Academy of Sciences (SPIIRAS)
Roman Tsarev, Department of Information Technology, International Academy of Science and Technologies, Moscow, Russia
Stefano Cirillo, Department of Computer Science, University of Salerno, Fisciano (SA), Italy
Program Committee Members

Juraj Dudak, Faculty of Materials Science and Technology in Trnava, Slovak University of Technology, Bratislava, Slovak Republic
Gabriel Gaspar, Research Centre, University of Zilina, Zilina, Slovak Republic
Boguslaw Cyganek, Department of Computer Science, University of Science and Technology, Krakow, Poland
Krzysztof Okarma, Faculty of Electrical Engineering, West Pomeranian University of Technology, Szczecin, Poland
Monika Bakosova, Institute of Information Engineering, Automation and Mathematics, Slovak University of Technology, Bratislava, Slovak Republic
Pavel Vaclavek, Faculty of Electrical Engineering and Communication, Brno University of Technology, Brno, Czech Republic
Miroslaw Ochodek, Faculty of Computing, Poznan University of Technology, Poznan, Poland
Olga Brovkina, Global Change Research Centre, Academy of Science of the Czech Republic, Brno, Czech Republic; and Mendel University of Brno, Czech Republic
Elarbi Badidi, College of Information Technology, United Arab Emirates University, Al Ain, United Arab Emirates
Luis Alberto Morales Rosales, Head of the Master Program in Computer Science, Superior Technological Institute of Misantla, Mexico
Mariana Lobato Baes, Research-Professor, Superior Technological of Libres, Mexico
Abdessattar Chaâri, Laboratory of Sciences and Techniques of Automatic Control & Computer Engineering, University of Sfax, Tunisian Republic
Gopal Sakarkar, Shri. Ramdeobaba College of Engineering and Management, Republic of India
V. V. Krishna Maddinala, GD Rungta College of Engineering & Technology, Republic of India
Anand N Khobragade (Scientist), Maharashtra Remote Sensing Applications Centre, Republic of India
Abdallah Handoura, Computer and Communication Laboratory, Telecom Bretagne, France
Almaz Mobil Mehdiyeva, Department of Electronics and Automation, Azerbaijan State Oil and Industry University, Azerbaijan
Technical Program Committee Members

Ivo Bukovsky, Czech Republic
Maciej Majewski, Poland
Miroslaw Ochodek, Poland
Bronislav Chramcov, Czech Republic
Eric Afful Dazie, Ghana
Michal Bliznak, Czech Republic
Donald Davendra, Czech Republic
Radim Farana, Czech Republic
Martin Kotyrba, Czech Republic
Erik Kral, Czech Republic
David Malanik, Czech Republic
Michal Pluhacek, Czech Republic
Zdenka Prokopova, Czech Republic
Martin Sysel, Czech Republic
Roman Senkerik, Czech Republic
Petr Silhavy, Czech Republic
Radek Silhavy, Czech Republic
Jiri Vojtesek, Czech Republic
Eva Volna, Czech Republic
Janez Brest, Slovenia
Ales Zamuda, Slovenia
Roman Prokop, Czech Republic
Boguslaw Cyganek, Poland
Krzysztof Okarma, Poland
Monika Bakosova, Slovak Republic
Pavel Vaclavek, Czech Republic
Olga Brovkina, Czech Republic
Elarbi Badidi, United Arab Emirates
Organizing Committee Chair Radek Silhavy
Tomas Bata University in Zlin, Faculty of Applied Informatics [email protected]
Conference Organizer (Production) Silhavy s.r.o. Website: https://www.openpublish.eu Email: [email protected]
Conference Website, Call for Papers https://www.openpublish.eu
Contents
Management of Electronic Waste from Matiki Landfill - A Case Study . . . . . . . . 1
Ramadile Moletsane, Imelda Smit, and Janet Liebenberg

Actualization of Educational Programs and Content in the Concept of Convergent Education . . . . . . . . 12
Mikhail Deev and Alexey Finogeev

Iterative Morphism Trees: The Basic Definitions and the Construction Algorithm . . . . . . . . 22
Boris Melnikov and Aleksandra Melnikova

Special Theoretical Issues of Practical Algorithms for Communication Network Defragmentation . . . . . . . . 31
Julia Terentyeva and Boris Melnikov

User Interface Generated Distraction Study Based on Accented Visualization Approach . . . . . . . . 39
Anton Ivaschenko, Margarita Aleksandrova, Pavel Sitnikov, and Anton Bugaets

Information Security Enhancements of the University’s Automated Information System . . . . . . . . 45
Dmitry Tarov, Inna Tarova, and Sergey Roshchupkin

A Review of Theories Utilized in Understanding Online Information Privacy Perceptions . . . . . . . . 54
William Ratjeana Malatji, Rene VanEck, and Tranos Zuva

Application of the Method of Planning Projects in Smart Cities . . . . . . . . 68
Michaela Kollárová

The Concept of a Decision Support System in the Management of Treatment and Accompaniment of the Patient with Bronchopulmonary Diseases . . . . . . . . 78
D. R. Bogdanova, N. I. Yusupova, and R. Kh. Zulkarneev

Geometric Models of Pursuer Avoidance . . . . . . . . 90
Dubanov Alexander
Human Posture Analysis in Working Capacity Monitoring of Critical Use Equipment Operators . . . . . . . . 101
Maxim Khisamutdinov, Iakov Korovin, and Donat Ivanov

Linear Complexity of New r-Ary Sequences of Length p^n q^m . . . . . . . . 108
Vladimir A. Edemskiy and Sergey V. Garbar

The Problem of Interpretation of Electronic Documents for Long-Term Keeping . . . . . . . . 116
Alexander V. Solovyev

Prediction of Natural Processes Using a Deep Neural Network Model . . . . . . . . 122
E. O. Yamashkina, S. A. Yamashkin, Olga V. Platonova, and S. M. Kovalenko

Technological Approaches to the Increased Accuracy Photoplethysmographic Measuring System Design . . . . . . . . 128
Tatiana I. Murashkina, Sergei I. Gerashchenko, Elena A. Badeeva, Mikhail S. Gerashchenko, Ekaterina Y. Plotnikova, Yuri A. Vasilev, Ivan I. Pavluchenco, and Armenac V. Arutynov

A Multi-criteria Method for the Synthesis of Regional and Interregional Tourism Routes . . . . . . . . 141
Leyla Gamidullaeva and Alexey Finogeev

Influence of Physical and Mechanical Properties of Optical Fibers on the Design Features of a Fiber-Optic Bending-Type Tongue Pressure Sensor on the Palate . . . . . . . . 152
T. I. Murashkina, E. A. Badeeva, A. N. Kukushkin, Yurii A. Vasilev, E. Yu. Plotnikova, A. V. Olenskaya, and A. V. Arutynov

A Detailed Study on the 8 Queens Problem Based on Algorithmic Approaches Implemented in PASCAL Programming Language . . . . . . . . 160
Serafeim A. Triantafyllou

The Information Field as an Integral Model . . . . . . . . 174
Victor Tsvetkov, Alexey Romanchenko, Dimitriy Tkachenko, and Mikhail Blagirev

Method for Calculating Ionospheric Corrections in Satellite Radio Navigation Using Accumulated Data of Dual-Frequency Measurements . . . . . . . . 184
V. L. Tryapitsyn, T. Yu. Dubinko, and Yu. E. Fradkin

Information Management of Real Estate . . . . . . . . 192
Roman V. Volkov
A Complex Method for Recognizing Car Numbers with Preliminary Hashing . . . . . . . . 200
Sergei Ivanov, Igor Anantchenko, Tatiana Zudilova, Nikita Osipov, and Irina Osetrova

Data Processing by Genetic and Pattern Search Algorithms . . . . . . . . 209
B. Z. Belashev

A Method for Collecting and Consolidating Big Data on the Requirements of Employers for the Competencies of Specialists to Actualize Educational Programs and Resources . . . . . . . . 217
Mikhail Deev, Alexey Finogeev, Alexander Grushevsky, and Ivan Igoshin

Simulation of a Neural Network Model Identification Algorithm . . . . . . . . 229
Alexander Terekhov, Evgeny Pchelintsev, Larisa Korableva, Svyatoslav Perelevsky, and Roman Korchagin

The Narrow-Band Radio Signal Reception Features in the Case of the Unknown Initial Phase . . . . . . . . 237
Anton Gudkov, Baktybek Karimov, Alexander Makarov, Alexandra Salnikova, and Zhazgul Toibaeva

Asymptotic Exact Formulas for Characteristics of the Joint Maximum Likelihood Estimates Under a Partial and Complete Violation of the Regularity Conditions of the Decision Determining Statistics . . . . . . . . 248
Oleg Chernoyarov, Alexander Zakharov, Larisa Korableva, and Kaung Myat San

Geometric Modeling of the Surface . . . . . . . . 273
E. A. Belkin, V. N. Poyarkov, and O. I. Markov

Implementation of Distributed Simulation in the MTSS System . . . . . . . . 288
Sergey Rudometov, Victor Okolnishnikov, and Sergey Zhuravlev

Development of Simulation Models of Cloud Computing Infrastructures with Automatic Scaling Based on Thresholds . . . . . . . . 295
Igor Botygin, Anna Sherstneva, Vladislav Sherstnev, and Valery Tartakovsky

Application of Combinatorics to Calculate the Number of Cases of the Monotonic Stability in All Variables in a Discrete Dynamical System . . . . . . . . 305
V. V. Lyubimov
Application of a Multi-factor Approach in Machine Learning Algorithms for Post Process Mining Analysis and Problem Detection in Bank . . . . . . . . 314
Andrey A. Bugaenko

Application of Virtual Reality in Education . . . . . . . . 319
Georgios A. Moutsinas, Jorge Alberto Esponda-Pérez, Biswaranjan Senapati, Shouvik Sanyal, Indrajit Patra, and Andrey Karnaukhov

A 3D Visualization of Zlín in the Eighteen–Nineties Using Virtual Reality . . . . . . . . 327
Pavel Pokorný and Hana Šilhavíková

Machine Learning Methods and Models for Recognizing Lung Inhomogeneity from Computed Tomography Scans . . . . . . . . 339
G. R. Shakhmametova, E. S. Stepanova, A. A. Akhmetshin, and R. Kh. Zulkarneev

Formation of Professional Readiness of the Future Teacher for Pedagogical Activities in the Context of Digitalization of Education . . . . . . . . 351
Vladimir Mezinov, Marina Zakharova, Olga Povalyaeva, Elina Voishcheva, Irina Larina, and Natalya Nekhoroshikh

Statistical Study of Factors Affecting the Risk of Lending by Microfinance Institutions . . . . . . . . 361
Yu. V. Frolov, T. M. Bosenko, and E. S. Konopelko

Readiness for Asset Tracking Systems in Public Schools from Disadvantaged Areas in Gauteng, South Africa . . . . . . . . 370
Tlangelani Promise Mlambo, Rene Van Eck, and Tranos Zuva

Digitization of University Diplomas in the European Union . . . . . . . . 382
Martin Záklasník, Radim Farana, and Szymon Surma

Numerical Simulation of Quadrupole Induced Optical Transverse Anti-trapping Effect in Gaussian Beam . . . . . . . . 391
Denis Kislov and Vjaceslavs Bobrovs

Existence of the Hybrid Anapole for Si Conical Nanoparticles . . . . . . . . 397
Alexey V. Kuznetsov and Vjaceslavs Bobrovs

Analytical Model of Reflection of the Individual Multipoles Fields from a Flat Substrate . . . . . . . . 402
Denis Kislov, Dmitrii Borovkov, and Vjaceslavs Bobrovs
Proposal of a Multi-display System for Embedded Applications . . . . . . . . 409
Matúš Nečas, Juraj Dudák, and Gabriel Gašpar

Web-Based Serious Game for Improving Problem-Solving Skills Through Introductory Computer Programming . . . . . . . . 421
Georgi Krastev and Valentina Voinohovska

Analyzing Air Pollution in China, Ecuador, and the United States by Means of GH and HJ Biplots . . . . . . . . 431
Mateo Coello-Andrade, Melissa Quiñonez-Londoño, Isidro R. Amaro, and Kevin Chamorro

Distinguishing Characteristics of Digital Format for Students of New Generation . . . . . . . . 453
Elena Borisova

Transfer of Liquid Measurement Technologies: Analysis Through Patent Data . . . . . . . . 464
A. S. Nikolaev, A. V. Sennikova, A. A. Antipov, and T. G. Maximova

Special Controller Design for Implementation the Desired Motion of the Moving Object . . . . . . . . 479
Mikhail N. Smirnov, Maria A. Smirnova, Nikolay V. Smirnov, and Tatiana E. Smirnova

CrossQuake: A Cross-Correlation Code for Detecting Small Earthquakes in the Frequency Domain . . . . . . . . 488
Carlos Ramírez Piña, Christian R. Escudero, J. A. Hernández-Servín, and Gerardo León Soto

Remote Sensing of the Earth in Precision Agriculture. Tasks and Methods . . . . . . . . 498
Ilya M. Mikhailenko and Valeriy N. Timoshin

On the Naming of Database Objects in the SQL Databases of Some Existing Software . . . . . . . . 534
Erki Eessaar

Entropy and Uncertainty in the Study of Electroencephalograms . . . . . . . . 551
M. A. Filatov, G. V. Gazya, A. Yu. Kukhareva, and L. S. Chempalova

Possibilities of Generating Dynamic Chaos by Biosystems . . . . . . . . 562
Gennadiy V. Gazya, T. V. Gavrilenko, V. A. Galkin, and V. V. Eskov
Synthesis of an Adaptive Algorithm for Estimating Spectrozonal Video Information of Images Based on the Expansion Principle . . . . . . . . 569
V. V. Vasilevsky

Systematic Review on Bisonets for Linking Two Domains . . . . . . . . 577
Elias Mbongeni Sibanda and Tranos Zuva

A Multicriteria Model Applied to Customer Service in an E-Commerce Startup . . . . . . . . 586
Paulo Victor Xavier Pinheiro, Plácido Rogério Pinheiro, and Cristiano Henrique Lima de Carvalho

Towards a Search and Navigation Platform for Making Library Websites Accessible to Blind and Visually Impaired People . . . . . . . . 595
Asim Ullah, Shah Khusro, and Irfan Ullah

A Web Application for Moving from One Spot to Another by Using Different Transport—A Scenario of the Research . . . . . . . . 608
Kamelia Shoilekova, Boyana Ivanova, and Blagoy Blazhev

Web Implementation for Sharing Personalized E-Learning Content through a Virtualization System . . . . . . . . 613
Donika Valcheva, Teodor Kalushkov, Georgi Shipkovenski, Rositsa Radoeva, and Emiliyan Petkov

Controlling the Equipment State Throughout the Industrial Life Cycle of the Product Using Digital Twin . . . . . . . . 624
A. S. Soldatov and E. S. Soldatov

Development of a Fuzzy Set Method for Assessing the Feasibility of Projects of Creating Information - Management Systems Under Conditions of Fuzzy Primary Data . . . . . . . . 632
Salaxaddin Yusifov, Imran Bayramov, and Rauf Mayilov

The Success of UWC Library Information System from a Students’ Perspective . . . . . . . . 639
Robyn Forbes and Agunbiade Olusanya Yinka

Spider Colony Optimization Algorithm: Bio-Heuristic for Global Optimization Problem . . . . . . . . 661
Sergey Rodzin and Lada Rodzina
Locust Swarm Optimization Algorithm: A Bio-Heuristic for Global Optimization Problem . . . . . . . . 670
Sergey Rodzin, Elmar Kuliev, Dmitry Zaporozhets, Lada Rodzina, and Olga Rodzina

On the Reflected Waves Contribution into the Robustness of Microseismic Events Location . . . . . . . . 679
Sergey Yaskevich

Investigation of Transient Luminous Events Using Spectrophotometer Data . . . . . . . . 685
A. Nurlankyzy, S. Tolendiuly, and A. Kulakayeva

Upper Bounds on Graph Diameter Based on Laplacian Eigenvalues for Stopping Distributed Flooding Algorithm . . . . . . . . 697
Martin Kenyeres and Jozef Kenyeres

Assessment of Risks of Negative Ecological Impacts on the Environment During Operation of Gas Wells . . . . . . . . 712
Nika Amshinov, Olga Vybornova, and Iskandar Azhmukhamedov

Portable Device for Soil Respiration Estimation . . . . . . . . 723
Evgeny Golovinov, Dmirii Aminev, Samuhaer Mulatola, Ludmila Bunina, Sergey Bikovsky, Alexander Alexandrov, and Andrey Titov

New Validation of Radar and Satellite Precipitation Estimates Against Rain Gauge Records in Slovakia with the Application of Blurring . . . . . . . . 737
Ján Mojžiš, Martin Kenyeres, Marcel Kvassay, and Ladislav Hluchý

Automation of Data Processing for Patient Monitoring in the Smart Ward Environment . . . . . . . . 746
Dmitriy Levonevskiy, Anna Motienko, Lubov Tsentsiper, and Igor Terekhov

Finite Element Calculation of the Limiting Pressure for Rupture of Capsules with an Active Substance in the Crack Detection System of Gas Turbine Blades . . . . . . . . 757
Ivan K. Andrianov, M. Kara Balli, Miron S. Grinkrug, and Nikita A. Novgorodov

DIZU-EVG – An Instrument for Visualization of Data from Educational Video Games . . . . . . . . 769
Yavor Dankov
Hybrid Machine Learning Technique for Crop Health Monitoring and IoT Based Disease Detection Using Optimal Feature Selection and Classification . . . . . . . . 779
Eisha Akanksha, Neeraj Sharma, and Jyoti Sehgal

Development of Data Storage and User Interface in the Clinical Decision Support System . . . . . . . . 808
G. R. Shakhmametova, A. A. Evgrafov, and R. Kh. Zulkarneev

Quality in Use Evaluation Model Based on ISO/IEC 25022 for Human Talent Management Information Systems . . . . . . . . 817
Leandro Jair Burgos Robles and Fredy Abel Huanca Torres

Software Cost Estimation Using Neural Networks . . . . . . . . 831
Robin Ramaekers, Radek Silhavy, and Petr Silhavy

Structural Components of Intellectual Assets of the Territory . . . . . . . . 848
Gul’naz S. Gabidinova

Author Index . . . . . . . . 855
Management of Electronic Waste from Matiki Landfill - A Case Study

Ramadile Moletsane1(B), Imelda Smit2, and Janet Liebenberg2

1 Vaal University of Technology, Vanderbijlpark, South Africa
[email protected]
2 North-West University, Mahikeng, South Africa
{imelda.smit,janet.liebenberg}@nwu.ac.za
Abstract. Matiki is a landfill located in the Gauteng province of South Africa where workers, called waste pickers, gather and salvage waste of all kinds to be sold at recycling centres. The community and businesses often use this area to dispose of their waste, ranging from solid waste to hazardous waste. A component of this waste is electronic waste, with which waste pickers often engage without proper personal protective equipment. The focus of this paper is to determine how electronic waste is managed at the Matiki landfill and to make recommendations for improving electronic waste management and the lives of the waste pickers. A qualitative case study in which data was gathered using semi-structured interviews is the underpinning method. Purposeful sampling was used to identify the participants. Qualitative content analysis was used to analyze the data. Lack of awareness, unemployment, child labour and low levels of literacy were factors found to encourage poor management of electronic waste. The study suggests that organizations that dump their electronic waste at this site should be held accountable for their waste through the enforcement of schemes such as extended producer responsibility and retailer buy-in. Intensive educational campaigns to create awareness about the dangers of electronic waste to the environment and health should be implemented. Keywords: Awareness · COVID-19 · Electronic waste · Landfill · Waste pickers
1 Introduction

The earth is badly impacted by electronic waste, which is quickly becoming a global issue. Electronic waste is the term used to describe any electrical or electronic device or component that has been dumped by the owner with no intention of being reused [1]. Frequently, the term “electronic waste” refers to the waste of electrical and electronic equipment [2]. Users in residential areas, manufacturing facilities, educational institutions, and numerous industrial locations are among the sources of electronic waste [3]. The environment is gradually contaminated by electronic waste [4, 5]. The dangers of electronic waste are not well known [6]. Studies from South Africa and elsewhere indicate that the difficulties with electronic waste are exacerbated by a lack of knowledge
about the environment-harming impacts of electronic waste [7]. In 2020, the global production of electronic waste reached 55.5 million tons. On average, the volume is rising by 2 million tons per year [8, 9]. Electronic waste generation is anticipated to reach 61.3 million tons in 2023 and 63.3 million tons in 2024 [9]. Consequently, it is essential to put policies into place that encourage environmentally responsible management of electronic waste and create awareness [10]. The purpose of this study is to explore how electronic waste in a landfill from a local municipality is managed; based on its findings, the study suggests solutions. The widespread lack of awareness, mismanagement and the vast volume of ever-increasing electronic waste motivate this study. In subsequent sections, the extant literature is discussed (§2), followed by the methodology used in the empirical study (§3), the findings of the study (§4) and their discussion (§5). The paper is then concluded (§6).
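The growth figures quoted above imply an approximately linear trend; the following minimal Python sketch (with the cited values hard-coded, purely as a sanity check of the arithmetic, not part of the original study) reproduces the 2023 and 2024 forecasts to within rounding:

```python
# Linear extrapolation of the global e-waste volume from the figures cited
# above: 55.5 million tons in 2020, growing by roughly 2 million tons per year.
BASE_YEAR = 2020
BASE_VOLUME_MT = 55.5
GROWTH_MT_PER_YEAR = 2.0

def projected_volume(year: int) -> float:
    """Projected e-waste volume in million tons under linear growth."""
    return BASE_VOLUME_MT + GROWTH_MT_PER_YEAR * (year - BASE_YEAR)

for year in (2023, 2024):
    print(year, round(projected_volume(year), 1))
# Prints 61.5 for 2023 and 63.5 for 2024 -- within 0.2 million tons of the
# cited forecasts (61.3 and 63.3), so the figures are mutually consistent.
```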
2 Literature Background

The European Union considers electronic waste (e-waste) a broad subject and categorises it into ten sub-divisions [1, 2]. Electronic waste is often referred to as Waste of Electrical and Electronic Equipment (WEEE) [3]. This paper focuses on the third sub-division, which includes “IT and Telecommunications equipment”; it is labelled “Information and Communication Technology” (ICT) [2, 4]. Electronic waste is loosely discarded, surplus, obsolete or broken EEE [6]. In this study, e-waste is defined as any ICT product that has been disposed of or discarded by the owner without any intention of reuse, regardless of its current working status [1]. Electronic waste contains valuable, worthless and toxic substances [7]. Materials considered valuable in e-waste constitute over 60% and include gold and silver. Valuable e-waste has a market potential of US $64.5 billion yearly [10]. According to Ichikowitz and Hattingh [2], when these materials are collected responsibly they could generate substantial job opportunities, while at the same time ensuring the sustainability of the environment. Plastic and hazardous materials account for about 30% and 27% of e-waste, respectively. Toxic materials include mercury, cadmium, and lead [10, 11]. Most electronic devices such as smartphones and computers require numerous mined materials to build sophisticated circuit boards. For example, the average smartphone requires 72 elements of the periodic table [8]. In this context, the extraction of raw materials requires two to ten times more energy than the use of their recycled equivalents [8]. When these toxic materials are not managed properly, they pose both environmental and health hazards. Environmental dangers like air, soil and water pollution cause devastating effects on the environment. When e-waste is buried in or dropped onto the soil in landfills, it tends to leach out and contaminate underground water sources, further harming aquatic life as well as animal and human health. Improper disposal of e-waste through activities such as informal recycling or burning releases toxic fumes that have been linked “to pulmonary and cardiovascular disease” [9]. Electronic waste is one of the 21st-century challenges faced by mankind. It is a global problem and is growing at an unprecedented rate [12]. In 2019, e-waste reached a high of 53.6 million metric tons and only slightly over 17% was collected and recycled appropriately [13]. It is estimated that e-waste will increase to 120 million
tons yearly by 2050, of which only 20% would be recycled [14]. The poor collection rate of e-waste is noted as a challenge for the countries in the regions of Latin America, Asia, Africa, and Oceania [15]. The problem of e-waste management in non-industrial nations or developing countries has been around for some time, and the expanded utilization of electronic gadgets is likely to intensify the challenge during and after the COVID-19 pandemic. Asia is the largest e-waste-producing continent, but only 15% of its e-waste is recycled yearly. E-waste recycling and processing are highly regulated by both national and international laws. Any activity occurring outside the scope of these national and international laws is considered informal or illegal [14]. Unregulated recycling activities are characterized by acid-based processing and open-pit burning, which have hazardous effects on the environment and health [16]. The e-waste produced annually is worth over $62.5 billion, more than the GDP of most countries. There is 100 times more gold in a ton of e-waste than in a ton of gold ore [17]. One factor that drives this rate of e-waste is planned or programmed obsolescence. This happens when a product is designed to have a shorter life so that consumers are required to repeat purchases. In short, the short life span of electrical and electronic equipment, high demand for it, large-scale consumption, difficulty of repair and improper recycling cause the problem. Factors that contribute to the barriers to repair include access to trusted and qualified repairers, complex product designs that make repair difficult, scarce spare parts, lack of repair information or manuals, and attitudes against repair [8]. Finland is considered “to have among the best e-waste management plans in the world, with an environmentally conscious population…” [2]. According to Ichikowitz and Hattingh [2], e-waste is one of the waste streams growing at an unprecedented rate in South Africa, yet collection rates are disappointing. Although South Africa has an established recycling industry and boasts collection rates of 52% for paper and 63% for tin-plate steel [13], the collection rate of e-waste is estimated at a mere 11% [13]. A big chunk of e-waste generated worldwide is diverted to landfilling as a common disposal practice [8]. Landfilling is regarded as a standard method for disposing of waste around the world because it is relatively cheaper than safer methods such as formal recycling. However, poorly constructed landfills can cause more harm to the environment and health. According to Obodoeze et al. [20], electronic waste accounts for about 2% of America’s total trash in landfills, but sadly it represents 70% of overall toxic substances. The major problem with landfills is leaching: water can transport toxins from e-waste into the groundwater or nearby sources of water for consumption or irrigation and thus pollute the food chain [18]. The proliferation of current improper e-waste management practices is fueled by a lack of awareness, reluctance to hold producers accountable for their share of the e-waste they have generated, or inaction by the government [18]. An alternative way to hold producers accountable for their e-waste could be the enforcement of schemes such as extended producer responsibility (EPR).
3 Methodology

The methodology used to guide the empirical study is discussed by explaining the design of the study, its context, the participants partaking, how data was collected, the analysis of the collected data and the ethical aspects considered.

3.1 Study Design

This qualitative study used the case study approach. Case studies give the researcher the ability to perform a thorough investigation of complex phenomena within some specific context [19].

3.2 Study Setting

Matiki is a landfill where the municipality and some private companies dump their waste. The type of waste dumped here ranges from plastics and scrap metals to e-waste. The main kinds of e-waste include fridges, electric stoves, computers, and other electric-powered appliances. The waste pickers’ objective is to salvage any valuable materials from e-waste, which are sold to the local scrap yards destined for recycling centres around the Gauteng province. The landfill has been operational for more than ten years. Most of the waste pickers dwell here in makeshift houses. Basic services such as water and sanitation are nonexistent for dwellers.

3.3 Participants Selection

Participants were purposefully selected. The researcher found and chose people who specialize in handling the e-waste stream at Matiki. They remove valuable materials using chisels or by burning material, to sell to nearby scrap dealers. Participants were also chosen based on their availability, willingness to engage, and ability to talk about their work experience. According to Palinkas et al. [20], homogenous purposive sampling simplifies analysis and minimizes variation. The participants were women and youth (Table 1).

3.4 Data Collection

The data collection process took place during the Level 1 lockdown in South Africa. The data was gathered using semi-structured interviews, as suggested by Fylan [21]. The data collected was recorded using a smartphone. During the data-gathering process, all the COVID-19 regulations as prescribed by the South African health department and the South African centre for non-communicable diseases were strictly adhered to.
Table 1. Demographic data of the participants.

Participant code | Response type | Age | Gender | Education | Place of Origin
P1 | Face-to-face interview | 45–55 | Female | Std 2/Grade 4 | Free State
P2 | Face-to-face interview | 35–45 | Female | Std 4/Grade 6 | Gauteng
P3 | Face-to-face interview | 25–35 | Female | Grade 10 | Free State
P4 | Face-to-face interview | 35–45 | Female | Std 6/Grade 8 | KwaZulu-Natal
P5 | Face-to-face interview | 45–55 | Female | Not attended school | Lesotho
P6 | Face-to-face interview | 25–35 | Female | Plumbing certificate & Grade 11 | Mpumalanga
P7 | Face-to-face interview | 25–35 | Female | Grade 12 | Limpopo
P8 | Face-to-face interview | 25–35 | Female | Grade 8 | Limpopo
P9 | Face-to-face interview | 45–55 | Female | Equivalent to Grade 4 | Lesotho
P10 | Face-to-face interview | 55–65 | Female | Illiterate | Gauteng
P11 | Face-to-face interview | 35–45 | Female | Std 8/Grade 10 | Free State
3.5 Data Analysis

The recorded data was transcribed verbatim [11]. Qualitative content analysis was used to analyze the data, as suggested by Bengtsson [22]. The content analysis method is unique in that it is both a quantitative and a qualitative methodology [23, 24]. The ATLAS.ti (Archive for Technology, Lifeworld and Everyday Language – Text Interpretation) qualitative data analysis software was used to systematically assist in the data analysis process [25, 26].

3.6 Ethical Considerations

An informed consent form was signed by the participants before each interview. The participants were informed and aware that they were allowed to withdraw their participation at any stage of the study. The study was purely for academic purposes and the confidentiality of the participant data was guaranteed. Information on COVID-19 regulations was communicated to the participants before the start of all the interviews and the regulations were followed during the interviews.
4 Findings

According to Sutton and Austin [27], theming is a process of systematically grouping codes from one or across multiple transcripts to present the findings of a qualitative inquiry. After completion of the data analysis process, the following themes were generated: unemployment and joblessness, low levels of literacy, awareness, lack of regulatory enforcement and child labour.

• Theme 1: Unemployment and joblessness

Some of the participants’ responses corresponding to this theme were: P1 said, “We work here because we are no job opportunities around”. P2 indicated that “I tried to look for the job around, but it is now difficult than ever due to COVID-19 pandemic”. P4 relayed, “We were laid off 2 months ago due to the COVID-19 pandemic. Our company closed for business as we were no longer generating profit because of the hard lockdown”. From the participants’ perspectives, it is self-evident that the lack of job opportunities and unemployment forced people to work at this landfill. COVID-19 has widened the gap of global inequality, as half a billion people are currently retrenched or under-employed. The impact is mostly felt in low-income or global south countries [14]. P6 posited, “I had to work here although I am skilled plumber, employers out there prefer cheap labours from the neighbouring countries. They exploit them. I wish government could come to the party and resolve this issue”. The occurrences per participant are relayed in Table 2.
Table 2. Code occurrence: Unemployment and joblessness.

Content | P1 | P2 | P3 | P4 | P5 | P6 | P7 | P8 | P9 | P10 | P11 | Total
Lack of job opportunities | 1 | 1 | 0 | 1 | 1 | 1 | 1 | 0 | 1 | 0 | 0 | 7
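The per-participant tallies reported in Tables 2–6 can be reproduced mechanically once the transcripts have been coded. The following minimal Python sketch illustrates the idea, assuming the codings are available as simple (participant, code) pairs; this export format and the sample rows are illustrative assumptions, not ATLAS.ti’s actual interface:

```python
# Illustrative input: one (participant, code) pair per coded quotation.
# The rows below are hypothetical examples, not the study's actual data.
codings = [
    ("P1", "Lack of job opportunities"),
    ("P2", "Lack of job opportunities"),
    ("P4", "Lack of job opportunities"),
    ("P5", "Low levels of literacy/education/skills"),
]

PARTICIPANTS = [f"P{i}" for i in range(1, 12)]  # P1 .. P11

def occurrence_row(code):
    """Per-participant 0/1 occurrence flags for a code, plus the row total."""
    coded = {p for p, c in codings if c == code}
    flags = [1 if p in coded else 0 for p in PARTICIPANTS]
    return flags, sum(flags)

flags, total = occurrence_row("Lack of job opportunities")
print(flags, total)  # [1, 1, 0, 1, 0, ...] with total 3 for this sample
```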
• Theme 2: Low levels of literacy

Some of the participants’ views that go with this theme were: P5 said that “I am a migrant worker from Lesotho. I came here to look for job. Unfortunately, I did not get formal education and back home my job was to plough the fields. I can’t read and write. So, here at least I can put food on the table with the little that I earn”. Indeed, e-waste can be a source of income and recycled materials to produce other products. E-waste can create much-needed employment opportunities and be a source of income. About 25 jobs can be created per 1000 tons collected in South Africa [2]. P9 indicated that, “It is difficult to get employment out there if you have no education”. The opinion of P10 was that “Matiki is right for me since I don’t have education. I can hardly write a letter, who is going to hire me? Ah…” A low level of literacy and education seems to be the reason for these workers to find themselves working here. Computer literacy is also one of the factors in not getting a job elsewhere.
According to P7, “I have a general education certificate, but they always ask for computer knowledge which I am lacking. My goal is to save some money so that I can enrol for computer certification. Maybe some doors after certification will open for me”. Rightly so, most employers require computer skills. The occurrences per participant are relayed in Table 3.
Table 3. Code occurrence: Low levels of literacy.

Content | P1 | P2 | P3 | P4 | P5 | P6 | P7 | P8 | P9 | P10 | P11 | Total
Low levels of literacy/education/skills | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 1 | 1 | 1 | 1 | 9
• Theme 3: Awareness

Some participants relayed crucial information regarding the theme: P10 said that “We use chisels and hammers to extract the valuables from e-waste”. P9 relayed that “To recover copper from cables we burn the materials. This is the quickest way to get copper from cables”. Most of the workers were found not to have personal protective equipment. In developing countries, men, women and children engage in the recovery of metals without protection and safeguard measures [16]. According to P11, “We use our own safety boots but not everyone does have them. If you don’t have safety gloves or a mask it is a problem. It is sad to see children barefooted even during winter seasons. But for me I think gloves and masks make you feel uncomfortable”. The participants’ responses indicated that they are not aware of the health hazards caused by e-waste, since they either use very basic protection, or none at all. P8 stated that “Not only local companies dispose of their waste here. The township residents come along with their young ones to dump their broken or unwanted electronic gadgets”. The occurrences per participant are relayed in Table 4.
Table 4. Code occurrence: Awareness.

Content | P1 | P2 | P3 | P4 | P5 | P6 | P7 | P8 | P9 | P10 | P11 | Total
Adequate knowledge of e-waste impact | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 1 | 1 | 1 | 1 | 6
• Theme 4: Child labour

Some of the participants’ views corresponding to the theme were: P5 indicated that, “Most children are from impoverished families, wherein there is no one working. The children always claim that they have no food at home and left school to make some money from e-waste”. P3 argued that “Most of these children who are supposed to be at school are only here to feed their drugs addiction”. When asked to elaborate, P3 relayed that “The drug called Nyaope has destroyed our children. They come here to scavenge the waste to buy the drug. Law enforcers seem to lose the battle on drugs”. P4 said that “children who are supposed to be at school come here due to the money we get on a good day. It is plenty”. The occurrences per participant are relayed in Table 5.
Table 5. Code occurrence: Child labour.

Content | P1 | P2 | P3 | P4 | P5 | P6 | P7 | P8 | P9 | P10 | P11 | Total
The use of children | 0 | 0 | 1 | 1 | 1 | 0 | 0 | 1 | 0 | 0 | 1 | 5
• Theme 5: Lack of regulatory enforcement

Some of the participants’ opinions corresponding to the theme were: P6 said that “sometimes we see municipal trucks dumping the general waste. If you are lucky, you can find some broken gadgets and sell them to scrap yard owners”. Municipalities have a legal obligation to practice waste management activities such as removal, storage, transportation and final disposal of waste appropriately [28]. Ichikowitz and Hattingh [2] are of the opinion that for South Africa to take advantage of this growing source of income or economic growth, more work needs to be done in terms of improving e-waste collection processes. The occurrences per participant are relayed in Table 6.
Table 6. Code occurrence: Lack of regulatory enforcement.

Content | P1 | P2 | P3 | P4 | P5 | P6 | P7 | P8 | P9 | P10 | P11 | Total
Lack of regulation enforcement | 0 | 0 | 1 | 1 | 1 | 1 | 0 | 1 | 0 | 0 | 1 | 6
5 Discussion

It is evident that the workers at Matiki landfill are engaging in informal recycling. Formal recycling of e-waste is lacking, yet very crucial – not only from the point of view of safe waste treatment but also from the view of creating livelihoods due to the recovery of
valuable materials. This is notwithstanding the fact that landfilling is regarded as a poor e-waste management strategy by the literature and practice. Unfortunately, the recyclers are engaged in burning materials that release toxic and harmful fumes – harmful to people’s health and the environment. Often they scavenge e-waste without personal protective equipment. The results indicated that the drivers of these unsafe and improper activities are unemployment, lack of awareness and low literacy levels, among others. The municipality is responsible for garbage collection for townships surrounding the Matiki landfill. When residents take their unwanted gadgets and general garbage to the landfill, it indicates that the local municipality is not rendering the service as it should per legislation. The organizations that dump their e-waste at this site should be held accountable for their waste through the enforcement of schemes such as extended producer responsibility (EPR) and retailer buy-in. Extended producer responsibility is an environmental protection strategy that “makes the manufacturer of the product responsible for the entire life cycle of the product and especially for the take back, recycling and final disposal of the product” [29]. The benefits of EPR entail changes to product designs that could reduce the number of toxic materials in new products, and improve recyclability, reusability and downsizing thereof [11]. This is also the responsibility of the local municipality. Considering the threats posed by e-waste, experts and information and communication technology (ICT) professionals are now turning to green information technology (GIT) to develop new ways to manage ICT activities in a manner friendly to the environment. The impact of electric and electronic equipment on environmental management can be classified into the following three areas: green design and manufacturing, green use, and green disposal [12, 13]. Intensive educational campaigns to create awareness about the dangers of e-waste to the environment and health should be implemented.
6 Conclusion

Electronic waste contains valuable, worthless and toxic material. When e-waste is not managed in a manner friendly to the environment, it tends to be harmful to both the environment and human health. Appropriate management of e-waste could be a good source of livelihood, and benefit human health and the environment alike. The municipality, as one of the three spheres of government, has a legal obligation to observe and practice legislation on e-waste, yet in this study the municipality is found lacking. The study suggests that organizations that dump their e-waste at this site should be held accountable for their waste through the enforcement of schemes such as EPR and retailer buy-in. Intensive educational campaigns to create awareness about the dangers of e-waste to the environment and health should be implemented. Initiatives driven by the local municipality, or the community, may be valuable contributions – to educate and create business opportunities. Such innovations may provide structure and create opportunities for an improved income stream, as well as improved recycling of e-waste. Using proper recycling techniques will also ensure a safer environment.
References

1. Parajuly, K., et al.: Future E-waste Scenarios 2019, pp. 1–19 (2019)
2. Ichikowitz, R., Hattingh, T.S.: Consumer e-waste recycling in South Africa. S. Afr. J. Ind. Eng. 31(3), 44–57 (2020)
3. Mathe, M.: Management of municipal electronic waste (e-waste): a focus on environmental pollution in Gwanda urban. J. Arch. Bus. Res. 7(6), 44–61 (2019)
4. Moletsane, R.I., Venter, C.: Electronic waste and its negative impact on human health and the environment. In: 2018 International Conference on Advances in Big Data, Computing and Data Communication Systems (icABCD), Durban, South Africa, pp. 1–7 (2018)
5. Naik, S., Satya Eswari, J.: Electrical waste management: recent advances, challenges and future outlook. J. Total Environ. Res. Themes 1(2022), 1–9 (2022)
6. Ali, A.S., Akalu, Z.K.: E-waste awareness and management among people engaged in e-waste selling, collecting, dismantling, repairing, and storing activities in Addis Ababa, Ethiopia. J. Environ. Health Insights 16(2022), 1–8 (2022)
7. Miner, K.J., Rampedi, T.I., Ifegbesan, A.P., Machete, F.: Survey on household awareness and willingness to participate in e-waste management in Jos, Plateau State, Nigeria. Sustain. J. 12(3), 1–16 (2020)
8. Ruiz, A.: The Roundup: latest global e-waste statistics and what they tell us. https://theroundup.org/global-e-waste-statistics/#top. Accessed 23 Nov 2022
9. Statista: Projected electronic waste generation worldwide from 2019 to 2030. https://www.statista.com/statistics/1067081/generation-electronic-waste-globally-forecast/. Accessed 23 Nov 2022
10. Daum, K., Stoler, J., Grant, R.J.: Toward a more sustainable trajectory for e-waste policy: a review of a decade of e-waste research in Accra, Ghana. Int. J. Environ. Res. Public Health 14(135), 1–18 (2017)
11. Jamshed, S.: Qualitative research method-interviewing and observation. J. Basic Clin. Pharm. 5(4), 87–88 (2014)
12. Moletsane, R.: Managing electronic waste with recycling: a review of developing and developed regions. Intell. Algorithms Softw. Eng. 1, 215–225 (2020)
13. Oluwadamilola, A.A., Olubisi, F.O.: Ticking time bomb: implications of the COVID-19 lockdown on e-waste management in developing countries. Univ. Coll. Lond. Open Environ. J. 3(2021), 1–13 (2021)
14. Forti, V., Baldé, C.P., Kuehr, R., Bel, G.: The Global E-waste Monitor 2020: Quantities, Flows, and the Circular Economy Potential. The International Telecommunication Union, Germany (2020)
15. Van Yken, J., Boxall, N.J., Cheng, K.Y., Nikoloski, A.N., Moheimani, N.R., Kaksonen, A.H.: E-waste recycling and resource recovery: a review on technologies, barriers and enablers with a focus on Oceania. J. Metals 11(8), 1–40 (2022)
16. Joon, V., Shahrawat, R., Kapahi, M.: The emerging environmental and public health problem of electronic waste in India. J. Health Pollut. 7(15), 1–7 (2017)
17. United Nations Environment Programme: UN report: time to seize opportunity, tackle challenge of e-waste. UN Environment News and Media Unit, Switzerland (2019)
18. Shuja, J., et al.: Greening emerging IT technologies: techniques and practices. J. Internet Serv. Appl. 8(9), 1–11 (2017)
19. Rashid, Y., Rashid, A., Warraich, M.A., Sabir, S.S., Waseem, A.: Case study method: a step-by-step guide for business researchers. Int. J. Qual. Methods 18(2019), 1–13 (2019)
20. Palinkas, L.A., Horwitz, S.M., Green, C.A., Wisdom, J.P., Duan, N., Hoagwood, K.: Purposeful sampling for qualitative data collection and analysis in mixed method implementation research. J. Adm. Policy Ment. Health 42(5), 533–544 (2013)
21. Fylan, F.: Semi-structured interviewing. In: Miles, J., Gilbert, P. (eds.) A Handbook of Research Methods for Clinical and Health Psychology, pp. 65–78. Oxford University Press, United Kingdom (2005)
22. Bengtsson, M.: How to plan and perform a qualitative study using content analysis. NursingPlus Open J. 2(2016), 8–14 (2016)
23. Krippendorff, K.: Content Analysis: An Introduction to its Methodology. Sage Publications Inc., Thousand Oaks, California (2004)
24. Neuendorf, K.: The Content Analysis Guidebook. Sage Publications Inc., Thousand Oaks, California (2002)
25. Saldaña, J.: The Coding Manual for Qualitative Researchers. Sage Publications Inc., Thousand Oaks, California (2009)
26. Paulus, T.M.: Sage Research Methods. https://methods.sagepub.com/reference/the-sage-encyclopedia-of-educational-research-measurement-and-evaluation/i3077.xml. Accessed 05 Dec 2023
27. Sutton, J., Austin, Z.: Qualitative research: data collection, analysis, and management. Can. J. Hosp. Pharm. 68(3), 226–231 (2015)
28. Haywood, L.K., Kapwata, T., Oelofse, S., Breetzke, G., Wright, C.Y.: Waste disposal practices in low-income settlements of South Africa. Int. J. Environ. Res. Public Health 18(15), 1–12 (2021)
29. Lindhqvist, T.: Extended Producer Responsibility in Cleaner Production: Policy Principle to Promote Environmental Improvements of Product Systems. The International Institute for Industrial Environmental Economics, Lund University, Sweden (2000)
Actualization of Educational Programs and Content in the Concept of Convergent Education

Mikhail Deev and Alexey Finogeev

Penza State University, Penza, Russia
[email protected]
Abstract. The article discusses issues and proposals concerning the digitalization of the process of training specialists, related to the application and modernization of the convergent approach in education, which means the convergence of educational programs and resources, competency matrices and employers' requirements, and methods and technologies for teaching students of different areas and specialties on an open information and educational environment platform. A methodology for actualizing convergent training programs is proposed, which includes the following steps: a) collecting, analyzing and consolidating information about the requirements of employers, the Federal State Educational Standards, the competencies and levels of specialist training, and the requirements for educational programs and resources; b) actualizing educational programs based on analytical data on requirements, coordination with the competency requirements of regional labor markets, benchmarking the requirements of standards and employers, assessing the compliance of finished training programs with substantive requirements, and information on the need to synthesize new programs; c) validating educational programs on the basis of their assessment and adjustment; d) creating new programs taking into account the analysis of consolidated requirements, certification results and data on the availability of convergent educational materials; e) trial operation of the actualized curriculum using the new educational content. The results of the study are used to synthesize a smart educational environment with mechanisms for the intellectual support of learning processes based on an analysis of the conditions of the digital economy and the actualization of educational programs and resources. #CSOC1120.

Keywords: convergence · education · smart learning environment · educational program · synchronization
1 Introduction

Digitalization and intellectualization of all spheres of human activity affect, among other things, education, and are a system-forming factor in its transformation [1]. Digitalization concerns the following aspects [2]: a) the provision of educational services, b) the synthesis of electronic educational resources (EER, also called electronic learning resources, ELR),
c) the creation of an information and educational environment, d) the use of distance learning and certification technologies, e) the automation of educational, organizational and administrative activities in educational institutions, f) the interaction of teachers and students, g) ensuring the information security of the subjects of the educational process, etc.

The main trends in the development of education in the context of digitalization and intellectualization are the development of technologies for electronic, mobile, cloud, immersive and blended training of specialists. The factors influencing transformational processes in the educational environment are:

• the availability of information and educational resources on the Internet;
• open network information interaction with people and communities on any subject;
• the geographical remoteness and anonymity of users in the process of learning and communication;
• the availability of consultations of any kind in the process of studying under educational programs;
• the possibility of hiring specialists to perform tasks of an educational nature;
• the application of distance technologies for the learning, testing and certification of trainees;
• the transfer of the educational process from experimental learning on real physical objects and systems towards learning on models and on objects of virtual and augmented reality;
• the shift of emphasis from the perception and memorization of information through verbal stimuli (textual material) to the perception of information through non-verbal stimuli (graphic and animated visualizations);
• the transition from learning with "paper" educational and methodological materials to learning with electronic educational resources in the information and educational environment;
• the use of information and telecommunication technologies in training for any specialty;
• the intellectualization of the educational process through computerization and the use of mobile technologies, computer modeling technologies, virtual and augmented reality, and hypertext and hypermedia forms of presentation of educational material, etc.

An important consequence of the digitalization of education is the appearance of convergent processes in the educational environment, which are associated with the above aspects. A feature of the digital transformation of modern education is the development and implementation of methodological and pedagogical foundations for a convergent model within an intelligent information and educational environment [3, 4]. The convergent educational model combines knowledge, tools and ways of thinking from the natural sciences, the social sciences, the humanities, and the information, computing and engineering disciplines in order to form an interdisciplinary base for solving modern scientific, technical and humanitarian problems at the junction of many areas [5]. The integration of social, cognitive and information technologies in the education system also allows us to speak of a convergent model of the educational process. The convergent approach consists in ensuring the convergence of the educational trajectories of different specialties in accordance with the converging requirements of federal educational and professional standards, as well as of employers. The result is the convergence of educational programs, the creation and use of common educational content, and the use of similar teaching methods and technologies for specialists in different industries. The model of convergent education also determines the convergence of competence models in professional and educational standards [6].

An example of the implementation of the convergent approach is the completed transition to the study of computer disciplines by specialists in all fields of knowledge, which is associated with the global penetration of information and communication technologies into all spheres of life, the development of the Internet and mobile communication systems, the introduction of technologies of remote access to any information resources, etc. [7]. The introduction of the concept of convergent education requires the integration of educational resources and technologies in the information and educational environment with intelligent mechanisms for the synthesis of personalized learning paths that take into account the requirements of employers, as well as the introduction of technologies for monitoring acquired competencies and for the adaptive adjustment of the process of training and retraining of specialists in accordance with changed external factors. Such factors include, for example, changes in the requirements of federal standards, changes in regional economic systems (the emergence of new enterprises and technologies, new specialties, in-demand labor vacancies, changes in labor functions), etc. Specialists in the process of long-term training may need to master new competencies, which requires the advanced training of teachers and the actualization of educational programs and of the educational and methodological content [8].

The actualization process is implemented in the course of managing the life cycles of educational programs and content [9]. Life cycle management tools provide, in the course of actualization, a dynamic response of the educational environment to the changing requirements of the regional labor markets and of the economic model of regional development. To improve the efficiency of the convergent model of education, the comprehensive development of the following mechanisms is necessary:

• the use of mobile devices and web technologies to access educational resources and to work with them in an interactive mode [10];
• the use of cloud storage for the placement of educational content;
• the monitoring of the requirements of educational and professional standards and of employers in the regional labor markets, with the possibility of adaptive adjustment and actualization of educational programs, resources and teaching methods [11, 12];
• personalized training in senior courses, with the possibility of synthesizing educational trajectories taking into account the competence requirements of specific employers [13, 14];
• self-study using distance technologies and open educational resources, consultations and online testing [15];
• the introduction of immersive learning technologies based on virtual and augmented reality, etc.
2 Methods and Results

2.1 Methodology for the Actualization of Educational Programs and Content in a Convergent Educational Model

Convergent education assumes the convergence of educational programs, teaching materials and electronic educational resources (educational content), pedagogical techniques and teaching methods, and social and cognitive technologies for training specialists from different subject areas of knowledge.
The convergence of educational programs and educational content for different specialties is driven by the changing requirements of professional and educational standards, as well as of employers, in the modern conditions of the transition to a digital and innovative economy. The transition to a convergent model entails the need to change the work programs of the studied disciplines, the funds of evaluation tools, and the educational content. For different specialties, general sections and topics are added to the studied disciplines, and similar teaching methods and technologies are used. The convergent model of the educational process determines the convergence of competency matrices with respect to the acquired knowledge, skills and abilities according to the needs of the real sector of the economy and of employers [16].

According to the Federal State Educational Standards (FGOS), under higher-education undergraduate programs the student develops universal and general professional competencies, as well as some professional competencies. Professional competencies are divided into mandatory and recommended ones, which are formed taking into account the requirements of professional standards and the competence requirements of employers for graduates in the labor market. Although the competencies in professional standards are synthesized on the basis of generalized work experience and consultations with employers, these competencies lag behind current market realities; this is especially typical of an innovative economy, where, with the advent of new materials, technologies, equipment and data, competencies quickly become obsolete. The analysis of employers' requirements for the formation of new professional competencies should therefore be carried out online and, as a result, influence the actualization of educational programs during the training of specialists. At the same time, universal and general professional competencies remain practically unchanged for a long time and can be fixed without changes in the federal educational standards and training programs.

According to the concept of convergent education, it is more expedient to develop professional competencies based on the results of the search, collection and analysis of in-demand vacancies in the regional labor markets. In this case, the factors affecting the required competencies should be taken into account. Such a process can be defined as the actualization of the matrices of competence requirements for specialists. The actualization of competency matrices leads to the actualization of educational programs and, further, to the actualization of educational content, and it requires the synchronization of the life cycles of competency matrices, educational programs and educational content for the various specialties [17]. Therefore, in order to develop a model of convergent education, it is necessary to solve the problems of actualizing and synchronizing the three main components of the educational process through a set of developed methods for the intelligent management of their life cycles in the information and educational environment.

The analysis of the curricula of the main professional educational programs (MPEP) for various specialties, created on the basis of federal state educational standards of several generations, confirms the ongoing convergent processes in the field of education, since the number of similar sets of competencies for various professions grows every year.
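The online comparison of standard-fixed and market-demanded competencies described above can be illustrated by a minimal sketch. In the following Python fragment all competency names and vacancy records are invented examples; the paper does not describe a concrete implementation, so every name here is an illustrative assumption.

```python
# A minimal sketch of actualizing a competency matrix against monitored
# labor-market data. Competency names and vacancy records are invented.

# Competencies currently fixed in the educational program (from FGOS
# and professional standards).
program_competencies = {"UC-1 teamwork", "GPC-2 data analysis", "PC-3 Java"}

# Competencies extracted from monitored regional vacancies.
vacancies = [
    {"employer": "Plant A", "required": {"GPC-2 data analysis", "PC-5 IoT"}},
    {"employer": "Bank B",  "required": {"PC-3 Java", "PC-6 ML basics"}},
]

market_competencies = set()
for v in vacancies:
    market_competencies |= v["required"]

# Professional competencies demanded by the market but absent from the
# program trigger actualization; unused ones are candidates for review.
to_add = market_competencies - program_competencies
to_review = program_competencies - market_competencies

print("add to program:", sorted(to_add))
print("review for obsolescence:", sorted(to_review))
```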
Competency requirements on the part of employers have recently also influenced the formation of curricula and work programs and, consequently, the actualization of educational programs and content. In particular, the following factors play a role:

• the emergence of new technologies, materials, equipment, etc.;
• the emergence of new and the disappearance of obsolete industries and enterprises;
• the emergence of new and the disappearance of old professions and jobs;
• an oversupply or shortage of specialists in particular industries and sectors of the region;
• identified shortcomings of existing educational programs with respect to regional specifics, etc.
To obtain new competencies, specialists have to master additional educational programs of related or other specialties within their profession. For example, a doctor who works with patients remotely must master the competencies of information and telecommunication specialties. Additional programs should be synchronized and coordinated with the programs of the main specialty and adjusted taking into account the level of training of a specialist.

To actualize and harmonize educational programs taking into account the requirements of employers, a methodology is proposed that includes the following steps (a schematic sketch of these steps is given after the list):

1. Collection and analysis of information about the requirements and needs of employers, the requirements for the MPEP imposed by the Federal State Educational Standards, the competencies and level of training of a specialist, the requirements for educational content, and the sets of ready-made electronic educational resources. The results of this stage are the data for the synthesis of a new MPEP or the modernization of an old one.
2. Actualization of the MPEP: analyzing data on new requirements for the educational programs, harmonizing them with the competency requirements of employers, comparing new and existing requirements of standards and employers, assessing the degree of convergence of the new requirements with the content of the MPEP, highlighting similar parts and new requirements, and synthesizing the MPEP so as to take into account requirements not reflected in the old version. The results are consolidated data for the MPEP assessment and data for its actualization.
3. Validation (certification) of the MPEP: evaluation and validation of the program. The results are assessment data, which are used for adjustments during actualization.
4. Synthesis of a new MPEP or modernization of an old one: planning and development of its structure and content, taking into account the new requirements and the validation results. The result is a new or actualized program.
5. Implementation of the actualized MPEP: technology selection, planning, training, and the selection and use of ELR kits. The result is the implementation of the educational process.
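The five steps can be read as a simple pipeline over the MPEP life cycle. The sketch below encodes them as Python functions with toy bodies; since the paper specifies the methodology only at the level of stages, every function name and data field here is an illustrative assumption, not the authors' implementation.

```python
# A schematic pipeline for the five-step actualization methodology.
# All names and toy data are illustrative assumptions.

def collect_requirements():
    """Step 1: consolidate employer needs, FGOS requirements, ELR sets."""
    return {"employers": ["PC-5 IoT"], "fgos": ["UC-1", "GPC-2"]}

def actualize_mpep(mpep, reqs):
    """Step 2: compare requirements with program content, record gaps."""
    mpep["gaps"] = [r for r in reqs["employers"] + reqs["fgos"]
                    if r not in mpep["content"]]
    return mpep

def validate_mpep(mpep):
    """Step 3: assess the program; here 'approved' simply means no gaps."""
    return {"approved": not mpep["gaps"], "to_add": list(mpep["gaps"])}

def synthesize_mpep(mpep, assessment):
    """Step 4: build the actualized program from the assessment data."""
    mpep["content"] += assessment["to_add"]
    mpep["gaps"] = []
    return mpep

def implement_mpep(mpep):
    """Step 5: trial operation with the selected ELR kits."""
    print("implementing program with units:", mpep["content"])

mpep = {"content": ["UC-1", "GPC-2"], "gaps": []}
mpep = actualize_mpep(mpep, collect_requirements())
mpep = synthesize_mpep(mpep, validate_mpep(mpep))
implement_mpep(mpep)
```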
2.2 Synthesis and Actualization of Convergent Educational Materials

Educational resources and learning technologies are the main element in the development of specialist training programs. In the information and educational environment, these elements are electronic educational content (EEC, Fig. 1) or resources, which go through a life cycle with the stages of development, design, operation, evolution and obsolescence. Educational content includes educational and methodological material in electronic form, virtual laboratories and simulators, etc. When developing and actualizing all types of EEC, all competence requirements should be taken into account; therefore, the life cycles of competence requirements, educational programs and content are connected.
Fig. 1. Educational content.
Convergent education implies the convergence of educational programs, electronic resources, teaching materials, pedagogical techniques and teaching methods, and social and cognitive technologies for training specialists in different subject areas of knowledge. The convergence of the educational programs, educational and methodological materials and ELR of different specialties is driven by the requirements of professional and educational standards, as well as of employers. For example, professions in most areas of knowledge today require competencies in the field of information, telecommunication and computing technologies. The convergence of educational programs entails the need to change the work programs of the studied disciplines, the funds of evaluation tools, and the educational and methodological content. For different specialties, common sections and topics are added, and similar teaching methods and learning technologies are used. In fact, a convergent approach to the implementation of the educational process predetermines the convergence of competency matrices with respect to the acquired knowledge, skills and abilities in accordance with the needs of the real sector of the economy and of employers.

The initial data for the selection and consolidation of EEC for training under a convergent training program are:

• information about the content of the consolidated (convergent) MPEP;
• the electronic educational resources and methodological materials available for the MPEPs integrated into the converged MPEP;
• the consolidated competence matrix of a sought-after specialist;
• the content-related, software and technical requirements for EEC that affect the synthesis and operation of content;
• the funds of evaluation tools for assessing the effectiveness of the EEC in the process of mastering the competencies of a sought-after specialist, which serve as the basis for deciding whether the content needs to be actualized and modernized or is inconsistent;
• the learning technologies available to the educational institution for the selected EEC, which include methods, techniques, and technical and software training tools, including virtual and augmented reality tools and technologies.

The output data are the synthesized EEC and proven technologies for working with the electronic educational resources, updated funds of evaluation tools, and support and content management tools. After design, development, testing, verification and publication, the content is used in specific MPEPs. When the competency matrices and the MPEP are actualized, the process of synchronization and coordination of the content is carried out [18].

The convergent model of the EEC can also be represented as a fuzzy weighted graph in which the vertices are the sets of topics and sections of the electronic educational resources. Since different ELRs can have parts with identical content, these can be represented as isomorphic subgraphs. Thus, content isomorphism shows the identity of parts of the ELR that can be studied in different disciplines of the same specialty, or in the same disciplines of different specialties in educational institutions [19]. The analysis of educational resources showed that sections that are similar in context are often semantically different from each other. It is then advisable to represent them as vertices connected by edges with weights in the form of a fuzzy membership function in the range [0, 1], which reflects the degree of similarity of fragments of different programs. Completely identical fragments have a degenerate membership function equal to 1. The task of searching for and highlighting contextually similar topics and sections of ELR disciplines is in this case the problem of identifying fuzzy isomorphic subgraphs and assessing the degree of their similarity (convergence). The degree of convergence of learning resources is calculated as an estimate of the degree of isomorphism of fuzzy subgraphs based on fuzzy cliques [20].

Since completely or partially identical fragments of the work programs of disciplines should subsequently be combined, it is also advisable to reduce the problem of identifying fuzzy isomorphic subgraphs to the problem of clustering a graph using the Louvain method, as in the case of identifying clusters of competence requirements. To reduce the dimension of the problem, edges with weights less than 0.5 are excluded from the graph model, i.e., fragments with a degree of similarity of less than 50% are not treated as identical. During the operation of the algorithm, similar fragments are placed in one cluster, with the calculation of the changes in the modularity coefficients and the choice of the maximum changes, and are then combined. During the selection of isomorphic subgraphs, the procedure for synthesizing a new EEC is performed simultaneously with the integration of similar fragments and the assessment of the degree of convergence of the educational resources. Thus, the process of determining fuzzy isomorphic subgraphs, or clusters of similar ELR fragments, is a formalized procedure for synthesizing convergent content into a single EEC in the information and educational environment.
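The thresholding step of this procedure is easy to make concrete. The sketch below keeps only the edges of the fuzzy similarity graph with weight at least 0.5 and merges the remaining connected fragments with a union-find structure; note that the paper clusters with the Louvain method, for which this connected-components merge is only a simplified stand-in, and the fragment names and weights are invented.

```python
# Thresholding the fuzzy ELR-similarity graph. Vertices are ELR
# fragments; weights are fuzzy similarity degrees in [0, 1]. Edges with
# weight < 0.5 are discarded; the remaining fragments are merged with
# union-find (a simplified stand-in for the Louvain clustering step).
from collections import defaultdict

similarity = {                      # invented example weights
    ("db:intro", "prog:intro"): 0.9,
    ("db:sql",   "prog:oop"):   0.3,
    ("db:intro", "math:sets"):  0.55,
}

parent = {}

def find(x):
    parent.setdefault(x, x)
    while parent[x] != x:
        parent[x] = parent[parent[x]]   # path halving
        x = parent[x]
    return x

def union(a, b):
    parent[find(a)] = find(b)

for (u, v), w in similarity.items():
    find(u); find(v)                    # register both vertices
    if w >= 0.5:                        # similarity below 50%: not merged
        union(u, v)

clusters = defaultdict(list)
for x in parent:
    clusters[find(x)].append(x)
print(list(clusters.values()))          # candidate fragments for a single EEC
```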
3 Conclusion

The digitalization of all aspects of human life and the intellectualization of the production sector during the fourth industrial revolution lead to a transition to a convergent model of education. A feature of the convergent approach is the possibility of adaptive adjustment to the changing requirements of federal professional and educational standards and of employers. To implement this approach, it is necessary to develop mechanisms for managing the processes of actualizing educational programs and educational content under the dynamically changing requirements of professional and educational standards and of employers in the regional labor markets.

In the course of the research, a methodology for actualization and methods for managing the processes of synchronizing educational programs, electronic educational resources and employers' requirements were developed. The methods are implemented in the form of tools for managing learning processes in the information and educational environment. The data for actualization are extracted by monitoring the regional labor market, which includes the search, collection and processing of information about in-demand vacancies. Then, intellectual analysis technologies are applied to identify the actual changes in the required competencies in order to modernize educational programs and resources.

The results of the study are used to synthesize an information and educational environment with intelligent mechanisms for supporting learning processes, based on an analysis of the requirements of labor markets and the actualization of educational content and training programs, as well as the adaptive customization and personalization of specialist training trajectories. The architecture of the educational environment includes a set of software and tools for managing the components of the information space of the university, with support for mobile, cloud and blended learning technologies. An approach to the synthesis of mechanisms for managing convergence processes in open educational environments has been developed. To collect and consolidate the initial data, a method for monitoring the regional markets of labor and educational services has been implemented. Regional labor market monitoring is based on the automated collection and consolidation of data on regional vacancies and the extraction of data on the sought-after competencies of specialists. Technologies for the intellectual analysis of the obtained data have been applied to model changes in the required competencies, in order to actualize and synchronize educational programs and resources and to adapt personalized training trajectories for specialists. Personalized learning in a convergent educational environment minimizes the risks of training specialists who are not needed by the regional labor market, and of poor-quality professional preparation.

Funding. The research results were obtained with the financial support of the Russian Science Foundation (RSF) and the Penza region (project No. 22-21-20100).
References

1. Finogeev, A., Kravets, A., Deev, M., Bershadsky, A., Gamidullaeva, L.: Life-cycle management of educational programs and resources in a smart learning environment. Smart Learn. Environ. 5(1), 1–14 (2018). https://doi.org/10.1186/s40561-018-0055-0
2. Pálsdóttir, A., Jóhannsdóttir, L.: Key competencies for sustainability in University of Iceland curriculum. Sustainability 13(16), 8945 (2021). https://doi.org/10.3390/su13168945
3. Kim, J., Kim, J.: Development and application of art based STEAM education program using educational robot. In: Robotic Systems: Concepts, Methodologies, Tools, and Applications, pp. 1675–1687. IGI Global (2020). https://doi.org/10.4018/978-1-7998-1754-3.ch080
4. Schmalz, D.L., Janke, M.C., Payne, L.L.: Multi-, inter-, and transdisciplinary research: leisure studies past, present, and future. J. Leis. Res. 50(5), 389–393 (2019). https://doi.org/10.1080/00222216.2019.1647751
5. Herr, D.J.C., et al.: Convergence education—an international perspective. J. Nanopart. Res. 21(11), 1–6 (2019). https://doi.org/10.1007/s11051-019-4638-7
6. Soltanpoor, R., Yavari, A.: CoALA: contextualization framework for smart learning analytics. In: Proceedings of the IEEE 37th International Conference on Distributed Computing Systems Workshops, ICDCSW 2017, Atlanta, USA, 5–8 June 2017, pp. 226–231. https://doi.org/10.1109/ICDCSW.2017.58
7. Lister, P.J.: A smarter knowledge commons for smart learning. Smart Learn. Environ. 5(1), 1–15 (2018). https://doi.org/10.1186/s40561-018-0056-z
8. Vesin, B., Mangaroska, K., Giannakos, M.: Learning in smart environments: user-centered design and analytics of an adaptive learning system. Smart Learn. Environ. 5(1), 1–21 (2018). https://doi.org/10.1186/s40561-018-0071-0
9. Bershtein, L.S., Bozhenyuk, A.V.: Estimation of isomorphism degree of fuzzy graphs. In: Wagenknecht, M., Hampel, R. (eds.) Proceedings of the 3rd Conference of the European Society for Fuzzy Logic and Technology EUSFLAT 2003, Zittau, Germany, pp. 781–784 (2003)
10. Robert, I.V.: Convergent education: origins and prospects. Russ. J. Soc. Sci. Humanit. 2(32), 64–76 (2018). https://doi.org/10.17238/issn1998-5320.2018.32.64
11. Deev, M., Gamidulaeva, L., Finogeev, A., Finogeev, A., Vasin, S.: Sustainable educational ecosystems: bridging the gap between educational programs and in-demand market skills. E3S Web Conf. 208, 09025 (2020). https://doi.org/10.1051/e3sconf/202020809025
12. Smitsman, A., Laszlo, A., Luksha, P.: Evolutionary learning ecosystems for thrivable futures: crafting and curating the conditions for future-fit education. World Futures 76(4), 214–239 (2020). https://doi.org/10.1080/02604027.2020.1740075
13. Tetzlaff, L., Schmiedek, F., Brod, G.: Developing personalized education: a dynamic framework. Educ. Psychol. Rev. 33(3), 863–882 (2020). https://doi.org/10.1007/s10648-020-09570-w
14. Bartolomé, A., Castañeda, L., Adell, J.: Personalisation in educational technology: the absence of underlying pedagogies. Int. J. Educ. Technol. High. Educ. 15(1), 1–17 (2018). https://doi.org/10.1186/s41239-018-0095-0
15. Álvarez, I., Etxeberria, P., Alberdi, E., Pérez-Acebo, H., Eguia, I., García, M.J.: Sustainable civil engineering: incorporating sustainable development goals in higher education curricula. Sustainability 13(16), 8967 (2021). https://doi.org/10.3390/su13168967
16. Bourn, D., Soysal, N.: Transformative learning and pedagogical approaches in education for sustainable development: are initial teacher education programmes in England and Turkey ready for creating agents of change for sustainability? Sustainability 13(16), 8973 (2021). https://doi.org/10.3390/su13168973
17. Gros, B.: The design of smart educational environments. Smart Learn. Environ. 3, 15 (2016). https://doi.org/10.1186/s40561-016-0039-x
18. Huda, M., Haron, Z., Ripin, M.N., Hehsan, A., Yacob, A.C.: Exploring innovative learning environment (ILE): big data era. Int. J. Appl. Eng. Res. 12(17), 6678–6685 (2017)
19. Finogeev, A., Gamidullaeva, L., Bershadsky, A., Fionova, L., Deev, M., Finogeev, A.: Convergent approach to synthesis of the information learning environment for higher education. Educ. Inf. Technol. 25(1), 11–30 (2019). https://doi.org/10.1007/s10639-019-09903-5
20. Deev, M., Gamidullaeva, L., Finogeev, A., Finogeev, A., Vasin, S.: The convergence model of education for sustainability in the transition to digital economy. Sustainability 13, 11441 (2021). https://doi.org/10.3390/su132011441
Iterative Morphism Trees: The Basic Definitions and the Construction Algorithm

Boris Melnikov¹ and Aleksandra Melnikova²

1 Shenzhen MSU – BIT University, 1 International University Park Road, Dayun New Town, Longgang, Shenzhen 518172, China
[email protected], [email protected]
2 National Research Nuclear University "MEPhI", 31 Kashirskoe Shosse, Moscow 115409, Russia
Abstract. We consider an important binary relation on the set of formal languages, primarily on the set of iterations of non-empty finite languages: the so-called equivalence relation at infinity. First, we consider examples of the application of this relation in some fields of the theory of formal languages and discrete mathematics. To simplify the consideration of equivalence at infinity, we formulate a simpler binary relation over the set of languages, the covering relation, the double application of which is equivalent to the application of the equivalence relation at infinity. Next, we consider an algorithm for verifying the fulfillment of the covering relation, and then we define auxiliary objects used both for proving the correctness of this algorithm and for other problems in the theory of formal languages. In the second part of this paper, we give the corresponding computer program and consider examples of its operation for specific input languages; after that, we formulate the definitions of the objects associated with them, in particular the definition of the infinite trees of the covering relation. With their help, we prove the correctness of the algorithm for checking the fulfillment of the covering relation, and we also estimate the complexity of this algorithm. #CSOC1120.

Keywords: Nondeterministic Finite Automata · Regular Languages · Binary Relation · Equivalence Relation · Morphism · Infinite Trees
1 Introduction

We consider the binary relation of covering at infinity (we usually write ⊂∞) and that of equivalence at infinity (we write ∞̃); in fact, these relations can be considered for all infinite languages, but we usually (and, in particular, in this article) use them only for languages of the form A*, where A is some non-empty finite language. The relation ⊂∞ for languages A* and B* is fulfilled when, for any word of the language A*, there exists some word of the language B* such that the first word is a prefix of the second one (possibly not a proper prefix). The relation ∞̃ is fulfilled when, in addition to this, the "mirror" condition holds (i.e., with A and B interchanged); it is obviously an equivalence relation.
Therefore, the main subject of the paper is not only the algorithm for verifying the covering relation at infinity (specifically, the condition A* ⊂∞ B* for given non-empty finite languages A and B), but also the beginning of the consideration of auxiliary objects related to this algorithm, in particular infinite trees. We describe the relationship of such trees to the algorithm for checking the covering relation and to the proof of its correctness, and we also obtain an estimate of the complexity of this algorithm.

Now we shall briefly consider some reasons for our interest in the algorithm for checking the covering relation; some of them are related to our previous publications, and some are not. We shall consider examples:

• of the application of both this relation and its special case, the relation of equivalence at infinity,
• and both examples of the need to fulfill them and examples of their use in some fields of the theory of formal languages, discrete mathematics and abstract algebra.

For obvious reasons, we start the "discussion of motivations" from our own publications (they are closer to the topic discussed here), and then we look at related works of other authors.

The relation ∞̃ itself and some of its properties were first considered in [1]; but long before that, in [2], one of its special cases was considered (it can be called the prefix case). In addition, [2] considered the application of this special case to solving (also special cases of) the equivalence problem for unambiguous bracket grammars, which is needed in some tasks useful in compiler automation systems. Note that, in general, the equivalence problem for bracket grammars is, of course, not immediately solvable; this is well shown, for example, in [3]. At the same time, it is precisely with the help of simple bracket grammars that [3] proves the undecidability of the equivalence problem for the entire class of context-free languages. But this class has important subclasses: unambiguous and, as a subclass of it, deterministic CF-languages.

As noted in one of our previous publications, a proof of the decidability of the equality problem for deterministic languages (in other words, of the equivalence problem for deterministic pushdown automata) was announced by G. Sénizergues [4]; at the same time, so far no one has found errors either in the proof given in the series of articles by L. Staneviciene [5], or in the almost completed proof by V. Meitus [6]. However, the decidability of the equivalence problem in the "intermediate" class, the class of unambiguous CF-languages, is still unknown. One of the reasons for this is that, unlike determinism, there is no clear criterion of unambiguity, and the possible formulations of the definition of unambiguity are not constructive.

Thus, after [1] we continued to publish articles no longer on the criteria for fulfilling the relation ∞̃, but on the applications of this relation (and of these criteria); the applications themselves can be conditionally divided into two approximately equal parts, "algebraic" and "formal-language" ones.
2 Preliminaries

This section discusses the main notation and the conventions for its use; note that some of the notation is non-standard.

Close to [7], we use the following conventions about alphabets:

• Σ is the "main" alphabet; its letters are a, b, etc., sometimes with indexes;
• Δ is the "auxiliary" alphabet; its letters are either 0 (not always), 1, 2, etc., or d1, d2, …, dn.

Also following [7], we give a detailed definition of a morphism. A morphism is a function of the form h : Δ* → Σ* in which each letter of any word over the alphabet Δ is replaced by some predefined word over the alphabet Σ. We define a specific morphism hA (depending on a given finite language A ⊆ Σ*). For a finite language A = {u1, u2, …, un} we consider the alphabet Δ = {d1, d2, …, dn}; the corresponding morphism hA is given by the following definition: hA(di) = ui for all i ∈ {1, …, n}.

Below, for the morphisms defined above, we shall use (explicitly and implicitly) the preimage function h⁻¹, the so-called inverse morphism. In one of our previous publications we noted that an inverse morphism is not, generally speaking, a single-valued function; in particular, there are words from Σ* that have no preimage. However, it is possible to consider the preimage of a given word as a language, and the preimage of a language as the union of the preimages of all its words; such an approach is correct. In more detail, this is formulated as follows. Despite the fact that for any u ∈ Δ* the entry h(u) according to [7] defines a word, for any v ∈ Σ* the entry h⁻¹(v) defines a set of words (a language) over the alphabet Δ:

h⁻¹(v) = { u ∈ Δ* | h(u) = v };

this set can also be empty. The language h⁻¹(B) for any B ⊆ Σ* is defined as follows: h⁻¹(B) = ∪ v∈B h⁻¹(v) (and can also be empty).
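To make these definitions concrete, here is a small Python sketch of the morphism hA and of the inverse morphism h⁻¹ for an invented three-word language A; the function names and the encoding of the Δ-letters as integers are our illustrative choices, not notation from [7].

```python
# Sketch of the morphism h_A and the inverse morphism from the
# definitions above. A is an invented example; words over the auxiliary
# alphabet Delta are modeled as tuples of the integers 1..n.

A = ["ab", "ba", "a"]                     # A = {u1, u2, u3} over Sigma
Delta = list(range(1, len(A) + 1))        # letters d1, d2, d3 as 1, 2, 3

def h(word_delta):
    """h_A(d_i) = u_i, extended to words over Delta by concatenation."""
    return "".join(A[d - 1] for d in word_delta)

def h_inverse(v, prefix=()):
    """All preimages of the Sigma-word v: the set {u in Delta* | h(u) = v}.
    May be empty or contain several words, since h_A is not injective."""
    if v == "":
        return [prefix]
    result = []
    for d in Delta:
        u = A[d - 1]
        if v.startswith(u):
            result.extend(h_inverse(v[len(u):], prefix + (d,)))
    return result

print(h((1, 3)))          # "ab" + "a" -> "aba"
print(h_inverse("aba"))   # [(1, 3), (3, 2)]: "ab"+"a" and "a"+"ba"
```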
Let us introduce a few more notations (some of them are non-standard) used later in this article.

If v = a1 a2 … an and for some m ≤ n we have u = a1 a2 … am (allowing m = 0, i.e., u = e), then u is called a prefix of the word v; this condition is written u ∈ pref(v). Thus, pref(v) is the set of all prefixes of the word v. If m < n (i.e., u ≠ v), the prefix is called proper; this fact is written u ∈ opref(v). The prefix of a given length k ≥ 0 of a word u is denoted by pref_k(u).

The binary relation ⊂∞, which we have already begun to discuss above, is strictly defined for the languages A* and B* as follows:

A* ⊂∞ B* ⇔ (∀u ∈ A*)(∃v ∈ B*)(u ∈ pref(v)).

The double application of the relation ⊂∞ gives the equivalence relation ∞̃:

A* ∞̃ B* ⇔ A* ⊂∞ B* & B* ⊂∞ A*.

In [1], it is shown that instead of these relations one can consider the inclusion (equality) of ω-iterations:

A* ⊂∞ B* ⇔ Aω ⊆ Bω  and  A* ∞̃ B* ⇔ Aω = Bω.
We also note that ω-languages were considered in [8]; however, the ω-iterations of finite languages have not yet been considered there.

P(A) is the Boolean (the power set) of a set A, i.e., the set of all its subsets. Lmax is the maximum of the word lengths of the finite language L. Some more designations will be introduced as needed.
3 The Algorithm for Constructing the Iterative Morphism Tree

Thus, let us describe a very important auxiliary algorithm, i.e., the algorithm for verifying A* ⊂∞ B*. As we have already noted in the introduction, in this and several subsequent sections we actually repeat the algorithm given in [9]. But at the same time we:

• slightly improve the wording and the proofs of auxiliary statements;
• correct the typos noticed;
• present figures illustrating the algorithm (without which, apparently, it is much more difficult to understand);
• present a computer program corresponding to the algorithm, which should be considered as a comment on it.

When describing the algorithm, we use the notations := ("set") and :+ ("add"). These notations are used for all variables appearing in the algorithm: word variables, set variables and function variables.
Fig. 1. Abstract morphism tree.
Algorithm 1.
Input: languages A = {u1, u2, …, un} and B.
Output: 1, if A* ⊂∞ B*; 0, otherwise.

Some notes on the description of the algorithm. Let us introduce an auxiliary alphabet Δ = {d1, d2, …, dn}. Words in this alphabet and languages over it will usually carry the subscript Δ. The function F : P(Σ*) → P(Σ*) is defined as follows:

F(C) = { v ∈ opref(B) | (∃u ∈ B*)(uv ∈ C) }.

Below will be used:

• the set variables L and H of the following form: L ⊆ Δ*, H ⊆ P(Σ*);
• the function variables p and s (not everywhere defined): p ⊆ Δ* × P(Σ*), s ⊆ Δ* × Δ*; as can be seen from this notation, the functions are specified by sets of (argument, value) pairs;
• and also the word variable wΔ ∈ Δ*.

If the algorithm gives the answer 0, then we fix an element wΔ ∈ Δ* such that (Fig. 3) hA(wΔ) ∉ pref(B*).
Fig. 2. Selecting the word uΔ at step 1 of the algorithm and modifying the corresponding language p(uΔ) for further actions.
Fig. 3. Variants of word continuations in step 2 of the algorithm.
Description.
Step 0. L := ∅, H := {{e}}, p(e) := {e}; the function p(uΔ) for uΔ ≠ e and the function s(uΔ) for any uΔ are not yet defined.
Step 1. Choose a word uΔ ∈ Δ* such that the value p(uΔ) is defined but p(uΔ di) is not defined for any i ∈ {1, …, n}.
Step 2. For all i ∈ {1, …, n}, perform the following actions (here we denote pi = p(uΔ di)). Sub-steps of step 2:
2.1. pi := F(p(uΔ) ui).
2.2. If (∃C ∈ H)(C ⊆ pi), then L :+ {uΔ di} and s(uΔ di) := p⁻¹(C); otherwise H :+ {pi} and s(uΔ di) := uΔ di.
2.3. If pi = ∅, then exit with the answer 0 and the fixed element uΔ di.
Step 3. wΔ := uΔ.
Step 4. If (∀i ∈ {1, …, n})(wΔ di ∈ L), then L :+ {wΔ}; otherwise go to step 1.
Step 5. If wΔ = e, then exit from the algorithm with the answer 1.
Step 6. wΔ := pref_{|wΔ|−1}(wΔ) (i.e., we "cut" one letter from the word wΔ on the right) and go to step 4.
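As an executable illustration, the following Python sketch re-implements the verification of A* ⊂∞ B* under two simplifications that we state explicitly: it memoizes exact "tail" sets instead of applying the subset-based pruning of step 2.2, and it traverses tail sets breadth-first instead of walking the morphism tree with the word variables uΔ and wΔ. The sets computed by tails_after correspond to the languages p(uΔ), i.e., to values of the function F; all names in the sketch are ours, not from [9].

```python
# A sketch of the covering check A* subset-at-infinity B*. Words are
# Python strings over Sigma; a "tail" is a proper prefix of a word of B
# left over after stripping words of B* from the front.

def proper_prefixes(B):
    """All proper prefixes of the words of B (the set opref(B))."""
    return {w[:k] for w in B for k in range(len(w))}

def tails_after(tails, u, B, opref_B):
    """The analogue of F: tails reachable after appending the word u."""
    out = set()
    for t in tails:
        stack, seen = [t + u], set()
        while stack:                       # strip words of B from the front
            s = stack.pop()
            if s in seen:
                continue
            seen.add(s)
            if s in opref_B:               # remainder is a proper prefix of B
                out.add(s)
            for b in B:
                if b and s.startswith(b):
                    stack.append(s[len(b):])
    return out

def covers(A, B):
    """1 if every word of A* is a prefix of some word of B*, else 0.
    Terminates, since there are only finitely many distinct tail sets."""
    opref_B = proper_prefixes(B)
    start = frozenset({""})
    visited, frontier = {start}, [start]
    while frontier:
        tails = frontier.pop()
        for u in A:
            nxt = frozenset(tails_after(tails, u, B, opref_B))
            if not nxt:                    # some word of A* is not covered
                return 0
            if nxt not in visited:
                visited.add(nxt)
                frontier.append(nxt)
    return 1

print(covers({"ab"}, {"a", "b"}))   # 1: every (ab)^n is covered
print(covers({"ab"}, {"aba"}))      # 0: abab is not a prefix of (aba)*
```

The two printed examples check that {ab}* is covered by {a, b}* but not by {aba}*: in the second case the word abab already has no cover.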
Fig. 4. The results of the algorithm (the beginning).
4 Auxiliary Definitions and Objects Necessary to Prove the Correctness of the Algorithm

Suppose that Algorithm 1 has been applied to some finite sets A, B ⊂ Σ*. We shall use all the designations found in its description and in the following text. Consider also the functions S ⊆ Δ* × Δ* and P ⊆ Δ* × P(Σ*), defined as follows:

S(e) = e, (∀uΔ ∈ Δ*)(∀d ∈ Δ)(S(uΔ d) = s(S(uΔ) d)); (∀uΔ ∈ Δ*)(P(uΔ) = p(S(uΔ))).

Let us explain the meaning of the objects introduced above (set variables, function variables, etc.), and not only those described in Algorithm 1, but also the functions S and P. To do this, first consider the morphism tree

h : Δ* → A*,
that is, an infinite oriented tree, from each vertex of which arcs come out, marked with the words of the language A (see Fig. 1). This tree is somewhat similar to the tree of a finite automaton considered in [10].

We say that a word u ∈ Σ* is covered if it is a prefix of some word from B*. A word w = v1 … vm (where vi ∈ B for all i ∈ {1, …, m}) is in this case called a cover of the word u if u ∈ pref(w) but u ∉ pref(v1 … vm−1). For a word u, the set of all words v from opref(B) such that the words uv are covers of u is denoted by F({u}) (this notation is used because the domain of the function F is not Σ* but P(Σ*)). For an arbitrary set C, we put

F(C) = ∪ u∈C F({u}).

Below we shall show that the condition A* ⊂∞ B* holds if and only if all words from A* are covered. However, we cannot directly check whether the entire infinite set of words is covered, so the algorithm implicitly uses a special equivalence relation (induced on the vertices of the morphism tree considered above): two words u, v are equivalent in this relation if F({u}) = F({v}). It follows that for each equivalence class of this relation it is enough to check whether any one word of the class is covered; this is what is done in the algorithm.

Let us continue the description of the notation. L is the set of those words from Δ* for which it has already been checked whether the words of hA(L) are covered. H is the collection of the sets F({u}) for all u ∈ hA(L). The equality s(uΔ) = vΔ means that the words hA(uΔ) and hA(vΔ) belong to the same equivalence class of the relation described above. In p(uΔ) we store some set containing F({hA(uΔ)}) as a subset and such that it is itself the value F({hA(vΔ)})
for some other word vΔ ∈ Δ*. Below, for the word uΔ considered by the algorithm at some point, we shall sometimes use the informal name "set of tails" for the language p(uΔ):

• in Fig. 2, these are the words marked with "lying" curly brackets; the beginnings of these words are marked there with green circles;
• the same is shown in Fig. 4 (the beginning of the algorithm's operation): these are the languages (sets) shown in blue.

The capital letters S and P denote functions analogous to those denoted by the corresponding lowercase letters, but defined wherever possible, and not only for the words uΔ processed by the described algorithm. And, of course, in the course of the algorithm's operation the functions S and P are not constructed for all possible uΔ ∈ Δ*; rather, their existence is proved, together with the possibility of constructing their values for any particular uΔ ∈ Δ* such that the word h(uΔ) is covered.
5 Conclusion

In the near future, we expect to publish a continuation article in which we will consider the completion of the proof of the correctness of the algorithm, as well as some new aspects of its application in problems of the theory of formal languages. This future paper is also supposed to include the corresponding computer program, which was realised in C++.
References

1. Melnikov, B.: The equality condition for infinite catenations of two sets of finite words. Int. J. Found. Comput. Sci. 4(3), 267–274 (1993)
2. Melnikov, B.: Some consequences of the equivalence condition of unambiguous parenthesis grammars. Bull. Moscow Univ. 10, 51–53 (1991)
3. Aho, A.V., Ullman, J.D.: The Theory of Parsing, Translation, and Compiling, vol. 1: Parsing. Prentice-Hall, Inc., Upper Saddle River (1972)
4. Sénizergues, G.: L(A) = L(B)? Decidability results from complete formal systems. Theoret. Comput. Sci. 251(1–2), 1–166 (2001)
5. Staneviciené, L.: About a tool for the study of context-free languages. Cybernetics 4, 135–136 (1989)
6. Staneviciené, L.: D-graphs in context-free language theory. Informatica (Lithuanian Acad. Sci. Ed.) 8(1), 43–56 (1997)
7. Salomaa, A.: Jewels of Formal Language Theory. Computer Science Press, Rockville (1981)
8. Melnikov, B., Melnikova, A.: Some more on omega-finite automata and omega-regular languages. Part I: the main definitions and properties. Int. J. Open Inf. Technol. 8(8), 1–7 (2020)
9. Melnikov, B.: Algorithm for checking the equality of infinite iterations of finite languages. Bull. Moscow Univ., Ser. 15 "Comput. Math. Cybern." 4, 49–54 (1996)
10. Lallement, G.: Semigroups and Combinatorial Applications. Wiley & Sons Inc., Hoboken (1979)
Special Theoretical Issues of Practical Algorithms for Communication Network Defragmentation

Julia Terentyeva¹ and Boris Melnikov²

1 Center of Information Technologies and Systems for Executive Power Authorities (Federal State Autonomous Research Institution), 19 Presnensky Val, Moscow 123557, Russian Federation
2 Shenzhen MSU – BIT University, 1 International University Park Road, Dayun New Town, Longgang District, Shenzhen 518172, China
[email protected]
Abstract. The paper considers issues concerning the restoration of the connectivity of a communication network in the case of its fragmentation, with a certain optimality criterion. The total length of the added edges, whose weights are given by the metric in space and thus determine the lengths of the edges, is taken as the criterion of the optimality of the cost of restoration. Such questions have both purely engineering applications and classical applications affecting the basics of graph theory. Namely, the problem of communication network defragmentation, arising as a result of possible internal and external destabilizing factors, can be solved by constructing a spanning tree of the graph of the communication network in question. The authors' programming of practical algorithms for restoring the connectivity of a communication network, including classical algorithms (for example, Kruskal's algorithm for building a spanning tree), led to the question of the uniqueness of the solution for restoring connectivity in the metric variant of the problem, when it is impossible to form new vertices (communication nodes). The invariance of the result with respect to the starting node of the algorithm was confirmed experimentally. A hypothesis on sufficient conditions for the uniqueness of the solution of the problem of restoring the connectivity of a communication network was stated, as well as a hypothesis on sufficient conditions for the invariance of the starting vertex. This problem reduces to proving sufficient conditions for the uniqueness of the minimal spanning tree in the case of a graph with a metric weight function. For the proof, the notions of graph families and of the partitioning of a graph into families are introduced. This proof forms the substantive part of the paper. The considered sufficiency property, under which the uniqueness of the solution of the communication network defragmentation problem is ensured, can be useful in designing real networks. #CSOC1120.

Keywords: Communication Network · Communication Network Modeling · Greedy Algorithm · Spanning Tree
1 Introduction

When designing and operating communication networks, in the part concerning the solution of the connectivity-restoration problem, the question often arises of the uniqueness of the optimal solution obtained by minimizing the target function, i.e., the sum of the weights of the edges of the topological graph. As a rule, the weight of an edge is taken to be the distance between the vertices of the graph, which is a mathematical model of the communication network topology. The authors have already considered the construction of an optimal communication network topology [1-3], with the development of the corresponding software. At the same time, the question of the uniqueness of the optimal solution is of practical importance.

In particular, in [1] an algorithm for constructing a connected graph based on the initial graph with the minimal sum of weights of the added edges was proposed; let us call it "Algorithm A". In this algorithm we consider some starting vertex of the graph, which initiates a spanning tree. At each step we join to the currently formed spanning tree the nearest vertex (forming the corresponding edge) that belongs to another connectivity component (a sketch of this greedy step is given below). Naturally, the question arose of how the choice of the initial vertex affects the result of the algorithm. Under what conditions does the choice of the initial vertex have no effect on the optimal solution? If such conditions exist, how realistic are they for real communication networks? These questions are considered in detail in the present paper; sufficient conditions for the uniqueness of the optimal spanning tree are revealed (generally speaking, the history of the spanning tree problem is considered in detail in [4-7]). A detailed proof of the sufficiency of these conditions for the invariance of the starting vertex of Algorithm A, based on the newly introduced notions of graph families and graph partitioning into families, is presented.
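For illustration, the sketch below models the greedy step of Algorithm A on invented data (it is not the authors' software from [1]): starting from a chosen vertex, it repeatedly attaches the nearest vertex of a not-yet-joined connectivity component, absorbing that whole component. With all pairwise distances distinct, the two calls print the same set of added edges, which is exactly the invariance discussed in this paper.

```python
# A sketch of "Algorithm A": grow a spanning structure from a starting
# vertex, each time attaching the nearest vertex that belongs to a
# not-yet-joined connectivity component. Coordinates are invented; edge
# weights are Euclidean distances, as in the paper's metric setting.
from math import dist

nodes = {"a": (0, 0), "b": (0, 1), "c": (4, 0), "d": (5, 2), "e": (9, 9)}
components = [{"a", "b"}, {"c", "d"}, {"e"}]   # fragments of the network

def restore_connectivity(start):
    comp_of = {v: i for i, comp in enumerate(components) for v in comp}
    joined = set(components[comp_of[start]])
    added = []
    while len(joined) < len(nodes):
        u, v = min(((u, v) for u in joined for v in nodes if v not in joined),
                   key=lambda e: dist(nodes[e[0]], nodes[e[1]]))
        added.append((u, v, round(dist(nodes[u], nodes[v]), 3)))
        joined |= components[comp_of[v]]   # the whole component is connected
    return added

# With distinct pairwise distances, the set of added edges (and hence
# the total added length) is the same for any starting vertex.
print(restore_connectivity("a"))
print(restore_connectivity("e"))
```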
2 Problem Statement. Definitions

As we know [7], in the general case, according to Kirchhoff's theorem, more than one spanning tree can exist in a connected graph. To find/build a spanning tree of minimal weight, the algorithms of Kruskal and Prim [7] and their modifications can be used.

Let G = ⟨V, E⟩ be a graph defined by a set of vertices V = {v1, v2, …, vn} and a set of edges

E = { (v1, v2)i, i = 1, …, z }, where v1 ∈ V, v2 ∈ V.   (1)

Let

Si = { vj(i), j = 1, …, Ni }   (2)

denote a subset of the vertices of the graph. Let us divide the set of vertices V into k non-intersecting subsets (k ∈ N), that is,

V = ∪ i=1..k Si.   (3)
We define the distance function between the sets Si and Sj (i, j ∈ {1, …, k}) as

ρ(Si, Sj) = min { ρ(v1, v2) : v1 ∈ Si, v2 ∈ Sj },   (4)

where ρ(v1, v2) is the weight of the edge connecting the vertices v1 ∈ V and v2 ∈ V. Note that we consider the weight of an edge to be the distance between the points in space that define the vertices of the communication-network graph; in other words, the weight of an edge is given by the metric in space.

Let us introduce the following definition. Subsets S1 ⊆ V and S2 ⊆ V form a family D = ∪ i=1..nD Si if for any pair S1 ∈ D and S2 ∈ D the following conditions are satisfied:

1) for S1 the following holds:

S2 = arg min { ρ(S1, Si) : Si ⊆ V, i ≠ 1, i ∈ {1, …, k} }.   (5)

This means that for a fixed value of the argument S1, the function ρ(S1, x) reaches its minimum at x = S2.

2) for S2 the following holds:

S1 = arg min { ρ(S2, Si) : Si ⊆ V, i ≠ 2, i ∈ {1, …, k} }.   (6)

That is, for a fixed value of the argument S2, the function ρ(x, S2) reaches its minimum at x = S1.

In other words, S1 ∈ D and S2 ∈ D are mutually closest subsets. In the simplest case, families are formed from the vertices of the original set V; as a special case, each vertex of the set V can itself represent a subset. Figure 1 shows an example of family formation: the circles denote vertices, and the vertices united by rules (5), (6) form families circled by a closed line. Note that a connected subgraph can be identified with a vertex in the above definitions; then the procedure of family formation also applies, with the distance function between connected subgraphs given by (4).
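A direct Python rendering of definitions (4)-(6) may be helpful. In the sketch below, rho computes the set distance (4), and a family of power 2 is detected as a pair of mutually nearest subsets; the partition and the coordinates are invented examples.

```python
# Sketch of the family definition (5)-(6): two subsets form a family
# when each is the other's nearest subset under the set distance (4).
# The partition and coordinates are invented.
from math import dist

subsets = {                                # partition of V into S1..S4
    "S1": [(0, 0), (0, 1)],
    "S2": [(0, 3)],
    "S3": [(10, 0)],
    "S4": [(10, 2)],
}

def rho(a, b):
    """Set distance (4): minimal pairwise metric distance."""
    return min(dist(p, q) for p in subsets[a] for q in subsets[b])

def nearest(s):
    return min((t for t in subsets if t != s), key=lambda t: rho(s, t))

families = {frozenset({s, nearest(s)}) for s in subsets
            if nearest(nearest(s)) == s}   # mutual nearest: power-2 family
print([sorted(f) for f in families])       # [['S1', 'S2'], ['S3', 'S4']]
```

Subsets that do not enter any power-2 family remain families of power 1, in agreement with Statement 1 below.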
Fig. 1. An example of graph vertices partition into families.
3 Properties and Proofs

Let us prove the following statement, which concerns a peculiarity of the partitioning of a graph into families in the framework of the introduced notation.

Statement 1. Any family D = ∪ i=1..nD Si has power either 1 or 2 if for any two vertices v1 ∈ V and v2 ∈ V the distance ρ(v1, v2) is unique.

Proof. Let us introduce the indicator function

Ind(S1, S2) = { 1, if S2 = arg min { ρ(S1, Si) : Si ⊆ V, i ≠ 1, i ∈ {1, …, k} }; 0, otherwise }.   (7)

For clarity, let us compose Table 1, in which we consider the subsets of V from which families can be formed. The elements of Table 1 are the values of the indicator function (7) on the subsets corresponding to the rows and columns.

Table 1. Indicator function Ind(S1, S2).

S1 \ S2 |  i  |  …  |  j  |  …  |  m
i       |  -  |     |  1  |     |
…       |     |     |     |     |
j       |  1  |     |  -  |     |
…       |     |     |     |     |
m       |     |     |  1  |     |  -

Table 1 shows that a row contains exactly one unit, which corresponds to the fact that any vertex of the set of vertices has exactly one closest vertex, because all distances between vertices are unique. A column, on the other hand, can contain any number of units (limited only by the table dimension).

Thus, in the case when

Ind(Si, Sj) = Ind(Sj, Si) = 1,   (8)

we get

D = Si ∪ Sj   (9)

and, respectively,

|D| = 2.   (10)

In the case where

Ind(Si, Sj) + Ind(Sj, Si) = 1,   (11)

we obtain

D = Si   (12)

and, accordingly, the power of D is

|D| = 1.   (13)

The statement is proved.
Consider a complete weighted graph (every two vertices are connected by one edge). Let us prove the following statement.

Statement 2. If in a complete weighted graph all weights of all edges are unique, then the partitioning of the graph into families is unique up to the enumeration of families.

Proof. We show this by contradiction. According to Statement 1, a family can contain either one or two vertices. Suppose a vertex v enters a family S1 (if there is another vertex in this family, let us denote it by w) and can enter a family S2 (if
there is another vertex in this family, let us denote it by z) under a different partition. Consider the following possible cases:

1) v ∈ S1, |S1| = 1, v ∈ S2, |S2| = 2. That is, S1 = {v}, S2 = {v, z}. However, a vertex v cannot simultaneously be the nearest vertex to the vertex z and not the nearest vertex to any vertex. This case leads to a contradiction.
2) v ∈ S1, |S1| = 2, v ∈ S2, |S2| = 2. That is, S1 = {v, w}, S2 = {v, z}. Then we obtain that there are two nearest vertices, w and z, to the vertex v. However, this is impossible, since all weights in the graph under consideration are different. This also leads to a contradiction.
3) v ∈ S1, |S1| = 2, v ∈ S2, |S2| = 1. That is, S1 = {v, w}, S2 = {v}. However, a vertex v cannot simultaneously be the nearest vertex to the vertex w and not the nearest vertex to any vertex. This also leads to a contradiction.
4) v ∈ S1, |S1| = 1, v ∈ S2, |S2| = 1. That is, S1 = {v}, S2 = {v}. These are identical partitions.

The analysis of the four possible cases proves that the partitioning of the set of vertices of a graph into the subsets called families (2)-(3) is unique in a complete weighted graph with different weights. Statement 2 is proved.

Let us return to the construction of a connected graph on the basis of a given initial graph G = ⟨V, E⟩ with the minimal sum of weights of the added edges. Let us perform a series of operations with the given graph on the basis of the introduced notions; we call the result Algorithm B.

Algorithm B.
Step 1. Consider all connected components of the original graph as vertices of a new graph G1. The distance between vertices of the new graph is defined according to (4). Assume k = 1.
Step 2. Divide the graph Gk into families.
Step 3. For each family S = {v, w} with |S| = 2, form an edge that connects v and w. If the elements of the family are composite, go to Step 4; otherwise go to Step 5.
Step 4. Choose a pair (vi, wj) among the vertices v1, v2, …, vNv ∈ v and w1, w2, …, wNw ∈ w such that ρ(vi, wj) is minimal, where vi ∈ v, wj ∈ w (i ∈ {1, …, Nv}, j ∈ {1, …, Nw}). The pair (vi, wj) forms an edge.
Step 5. Merge v and w into one vertex. The new graph is Gk+1; k := k + 1.
Step 6. If the current graph Gk contains exactly one vertex, then stop; otherwise go to Step 2.

The steps of Algorithm B indicate that this algorithm belongs to the class of greedy algorithms. The algorithm is built on a certain graph structure called a family (see (5)-(6)) and uses the proved properties of graph partitioning into families. These procedures with the original graph (Algorithm B) constitute a greedy algorithm stated in terms of graph partitioning into families; a sketch is given below.
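The following Python sketch is a compact model of Algorithm B (not the authors' implementation): at each round the current components are partitioned into families of mutually nearest components, and each power-2 family is merged by the shortest connecting edge, as in steps 2-5. Distinct pairwise distances are assumed, as in Statement 3 below; the instance data are invented.

```python
# A Boruvka-like model of Algorithm B over invented data.
from math import dist

points = {"a": (0, 0), "b": (0, 1), "c": (0, 3), "d": (10, 0), "e": (10, 2)}
components = [{"a", "b"}, {"c"}, {"d"}, {"e"}]   # step 1
edges = []

def set_dist(A, B):
    """The set distance (4): minimal pairwise metric distance."""
    return min(dist(points[u], points[v]) for u in A for v in B)

def closest_pair(A, B):
    """Step 4: the concrete vertex pair realizing the set distance."""
    return min(((u, v) for u in A for v in B),
               key=lambda p: dist(points[p[0]], points[p[1]]))

while len(components) > 1:                        # step 6
    n = len(components)
    near = [min((j for j in range(n) if j != i),
                key=lambda j: set_dist(components[i], components[j]))
            for i in range(n)]                    # step 2: nearest components
    merged, absorbed = [], set()
    for i in range(n):
        j = near[i]
        if i in absorbed or j in absorbed:
            continue
        if near[j] == i:                          # a family of power 2
            u, v = closest_pair(components[i], components[j])
            edges.append((u, v))                  # steps 3-4: form the edge
            merged.append(components[i] | components[j])   # step 5: merge
            absorbed |= {i, j}
    components = merged + [components[i] for i in range(n) if i not in absorbed]

print(edges)   # the added edges restoring connectivity
```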
The following statement is true.

Statement 3. When the pairwise weights (1), defined by the metric function in space, between all non-adjacent vertices of the original graph are different, Algorithm B gives the only optimal solution.

Proof. The proof relies on Statement 2 on the uniqueness of graph partitioning into families. Let us perform Step 1 of Algorithm B. The obtained graph, consisting of single vertices, is then expanded to a complete graph. Since the pairwise weights between all non-adjacent vertices of the original graph are different, the weights between vertices in the resulting graph are also different. The uniqueness of the partitioning into families in this case, which follows from Statement 2, means the existence/construction of a uniquely determined edge that connects the components of one family (see Step 3 of Algorithm B). In subsequent iterations, uniqueness likewise follows from Statement 2. Thus, under the condition of different weights between non-adjacent vertices of the initial graph, the procedure of Algorithm B gives the only optimal solution for the construction of a spanning tree (minimal spanning tree). The statement is proved.

Corollary. When the pairwise weights defined by the metric function in space (1) between all non-adjacent vertices of the initial graph are different, the minimal spanning tree is unique.

Proof. Obvious.

Note that in the case of different weights between non-adjacent vertices of the original graph, Algorithm A given in [1] leads to the same solution as Algorithm B of the present paper. Namely, let us prove that the choice of the initial vertex in Algorithm A [1] does not affect the final result. The following statement is true.

Statement 4. When the pairwise weights defined by the metric function in space between all non-adjacent vertices of the initial graph are different, Algorithm A [1] is invariant with respect to the choice of the starting vertex.

Proof. Let us perform Step 1 of Algorithm B, since Algorithm A "builds up" the connected components in any case. We obtain a new graph consisting only of vertices. The starting vertex can be "absorbed" by any vertex of the new graph. Then, according to Statement 3, as a result of the operations performed by the greedy Algorithm B, the families are connected in the only possible way. The starting vertex in this case can be part of a family of power 2 at a certain iteration; the vertex itself affects only at which iteration of Algorithm B it is absorbed. But the edges that result from Step 3 of Algorithm B will be identical to those of Algorithm A. Let us prove this. Two cases are possible. If there exists a vertex for the starting vertex with which a family of power 2 is formed, then this is equivalent to the situation of Algorithm B. Suppose instead that the vertex closest to the starting vertex is a vertex that is part of another family (and, accordingly, this edge is added). On the other hand, in the course of Algorithm B there will be an iteration when the starting vertex must be "absorbed". According to the definition of the distance between subsets of vertices (1), this will happen with the formation of the same added edge.
Thus, Algorithm A and Algorithm B lead to one solution (the only optimal one). Both Algorithm A and Algorithm B can be used when solving the problem of constructing a connected graph with the minimum sum of weights of the added edges. Note that the speed of Algorithm B is higher; the complexity of the algorithm and an experimental time comparison will be presented in more detail separately.
4 Conclusion

In this paper, a sufficient condition for the construction of the minimal spanning tree of a graph whose weight function is formed by a space metric is revealed and proved. This property is of practical importance for real-scale communication networks when solving problems of connectivity recovery under various scenarios of their operation. The proved sufficiency condition is realistic: the difference in pairwise distances between the vertices of the graph, which is a mathematical model of the communication network topology, is easily achieved by the high resolution of the real-number representation of the distances (for example, for the floating point number format of the IEEE 754 standard [8], the possible range of numbers is from 4.94·10^(-324) to 1.79·10^(308)), and it holds on communication networks of real scale. Consequently, the range of application of algorithms based on this condition can be extended to any real communication network. As a development of this investigation, the hypothesis is put forward that the considered sufficient condition for the optimality of the solution is also necessary. The proof of this fact is one of the topics of further research.
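As a quick illustration of the cited range (a minimal, standard-library Python check; not part of the original paper):

```python
import sys
import math

# IEEE 754 binary64 bounds referred to above: the largest finite double and
# the smallest positive subnormal double (math.ulp(0.0), Python 3.9+).
print(sys.float_info.max)      # 1.7976931348623157e+308
print(f"{math.ulp(0.0):.3e}")  # 4.941e-324
```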
References

1. Melnikov, B., Meshchanin, V., Terentyeva, Y.: Implementation of optimality criteria in the design of communication networks. J. Phys. Conf. Ser. 1515, 042093 (2020)
2. Bulynin, A.G., Melnikov, B.F., Meshchanin, V.Y., Terentyeva, Y.Y.: Optimization problem, arising in the development of high-dimensional communication networks, and some heuristic methods for solving them. Inf. Commun. 1, 34–40 (2020)
3. Melnikov, B.: Discrete optimization problems – some new heuristic approaches. In: Eighth International Conference on High-Performance Computing in Asia-Pacific Region, pp. 73–80. HPC Asia, Beijing (2005). https://doi.org/10.1109/HPCASIA.2005.34
4. Kruskal, J.: On the shortest spanning subtree of a graph and the traveling salesman problem. Am. Math. Soc. 7(1), 48–50 (1956). https://doi.org/10.1090/S0002-9939-1956-0078686-7
5. Cormen, T., Leiserson, C., Rivest, R., Stein, C.: Introduction to Algorithms, 2nd edn. MIT Press and McGraw-Hill (2001)
6. Prim, R.: Shortest connection networks and some generalizations. Bell Syst. Tech. J. 36(6), 1389–1401 (1957). https://doi.org/10.1002/j.1538-7305.1957.tb01515.x
7. Emelichev, V.A., Melnikov, O.I.: Lectures on Graph Theory. Nauka, Moscow (1990)
8. Kahan, W.: Lecture Notes on the Status of IEEE Standard 754 for Binary Floating-Point Arithmetic. University of California, Berkeley (1997)
User Interface Generated Distraction Study Based on Accented Visualization Approach Anton Ivaschenko1(B) , Margarita Aleksandrova2 , Pavel Sitnikov3 , and Anton Bugaets3 1 Samara State Medical University, 89, Chapaevskaya, Samara, Russia
[email protected]
2 Samara State Technical University, 244, Molodogvardeyskaya, Samara, Russia 3 ITMO University, 49, bldg. A, Kronverksky Pr., St. Petersburg, Russia
Abstract. This paper considers the peculiarities of users' perception of distracting user interface elements when performing visual search tasks. The research is based on a formal model of accented visualization. The study includes an analysis of the changes in user focus recorded by an external eye-tracker. Based on the results of the experiment, conclusions are drawn about the most distracting elements. The research results will be useful in studying the peculiarities of users' perception of interactive immersive reality interfaces. #CSOC1120.
1 Introduction

User Interface (UI) and User Experience (UX) design and implementation is a challenging stream of information technology (IT) development nowadays. Modern IT devices, including smartphones, mobile wearables, head-mounted displays and headsets, provide new capabilities of computer-human interaction that proactively involve the user in cooperation. This creates an immersive reality effect. Despite the growing popularity of such technologies, the problems of their usability and applicability have not been studied in depth. In gaming environments, the main requirements for this kind of user interface are limited to realism and attractiveness. In industrial applications or health care this is not enough: it is necessary to ensure that users pay attention to certain interface elements according to the business process, under the influence of a flow of large volumes of visual information. At the same time, the logic of perception of users of immersive reality as a whole differs from that of standard user interfaces. Within the framework of this problem, a specialized system was developed to study the peculiarities of users' perception of modern interfaces with redundant additional information. Eye movement tracks recorded by an eye tracker were used as initial data. This article presents the results of an experiment that allowed us to draw conclusions about the most distracting content.
2 State of the Art

The problems of UI/UX design and development are widely studied in the modern literature [1–3]. Designer has become a valuable profession, capable of increasing the value of a software development project by applying specific skills [4]. Understanding the fundamental principles of human-computer interaction and the individual characteristics of people's perception of user interfaces [5, 6] can significantly improve the quality of engineering solutions in the development of computer applications. When developing user interfaces, it can be useful to study the fundamental principles of graphic content perception [7]. An essential aspect of studying the peculiarities of UI perception is the control of the user's sight and focus, which is produced using the technology of accented visualization [8, 9]. Identification of users' behavioral features can be performed based on the analysis of the history of their actions, taken from an application log or captured using eye- and action-tracking systems; for example, eye trackers are effectively used in education nowadays [10]. Recent studies in the area of visual graphic content perception and UI/UX design are useful for the generation of new virtual scenes and environments as part of immersive reality applications [11, 12]. Bringing positive ideas from gaming and VR modeling to industrial applications can help improve the efficiency of user interfaces. Even with all the benefits considered, immersive reality software should avoid overloading users with annoying information and redundant data; this reproach, unfortunately, is quite often made against modern applications of virtual and augmented reality. Based on the study of recent research results in the area of immersive reality users' perception analysis, a model of accented visualization was proposed that is capable of formalizing the user's activity in mixed real and virtual scenes.
3 Accented Visualization Model

A model of accented visualization was developed to describe, simulate and process data describing the system users' behavior in the form of events. The model contains the trinity of entities <focus, context, overlay context> of a typical or specific user. The focus entity describes the user's attention; the context represents the situation in which the user is expected to perform certain actions. Correlations between the event flows that describe the changes of these two entities represent the adequacy of the concentration of the user's attention. The overlay context is specifically generated to help the user concentrate attention or to distract it; therefore, it can be used to simulate positive and negative influence over the user's focus. The overlay context becomes part of a computer-human interaction control system and introduces the changes of the user interface that finally influence the user's behavior. According to this concept, the system provides the user, with the help of UI components, with a number of virtual objects and control elements w_{i,k}, where i = 1..N_k^w is the number of an element and k = 1..N_s is the scene number. These objects and elements form the context:

s_k = {w_{i,k}}.  (1)
The user's focus is described by an event chain captured by, e.g., an eye-tracker:

v_{i,j,k} = v_{i,j,k}(w_{i,k}, t_{i,j,k}) ∈ {0, 1}.  (2)

The target focus can be described by another event flow:

e_{i,k,n,m} = e_{i,k,n,m}(w_{i,k}, t_{i,k,n,m}, Δt_{i,k,n,m}),  (3)

where t_{i,k,n,m} is the moment of focus attraction to the object or control w_{i,k} according to the scenario c_{k,n}, and Δt_{i,k,n,m} is the corresponding possible deviation. An effective process requires correspondence between e_{i,k,n,m} and v_{i,j,k}, which means minimizing the number of unprocessed scenario stages:

K(c_{k,n}) = Σ_{i,m} e_{i,k,n,m} · (1 − δ(Σ_j v_{i,j,k} · δ(t_{i,j,k} ∈ [t_{i,k,n,m}, t_{i,k,n,m} + Δt_{i,k,n,m}]) ≥ 1)) → 0,  (4)

where δ(x) = 1 if x is true and δ(x) = 0 if x is false.
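As an illustration, the mismatch measure K(c_{k,n}) from (4) could be computed from recorded event flows as in the following sketch; the event-tuple layout and all names are our own assumptions, not code from the paper:

```python
# Hypothetical sketch of the mismatch measure K from Eq. (4).
# target_events: list of (element_id, t_attract, dt_deviation) tuples -- Eq. (3)
# focus_events: list of (element_id, t_observed) tuples -- Eq. (2)

def mismatch_K(target_events, focus_events):
    unprocessed = 0
    for elem, t_attract, dt in target_events:
        # A stage counts as processed if at least one recorded focus event
        # hits the same element inside the window [t_attract, t_attract + dt].
        hit = any(e == elem and t_attract <= t <= t_attract + dt
                  for e, t in focus_events)
        if not hit:
            unprocessed += 1  # contributes e * (1 - delta(...)) in Eq. (4)
    return unprocessed  # effective interaction drives this toward 0

# Example: one target stage satisfied, one missed -> K = 1
targets = [("button_ok", 2.0, 1.0), ("warning_icon", 5.0, 0.5)]
focus = [("button_ok", 2.4), ("banner_ad", 5.2)]
print(mismatch_K(targets, focus))  # 1
```

Driving this count toward zero over successive overlay-context adjustments corresponds to the effectiveness condition (4).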
4 Results and Discussion

To study the effect of user interface generated distraction, software was developed for a personal computer equipped with a Tobii eye-tracker. The user interface runs in the WEB environment; the algorithmic part of the program is implemented in Java, and the eye-tracking program is implemented in C#. 30 users took part in the experiment. Each user had to find 5 extra letters on a square field among the same letters. Each subject had to pass 4 tests. The letters in the tests remained the same, but their arrangement changed. At the same time, while the tests were being passed, an "overlay context" appeared in front of the user against the background of the main field, containing the following:

1. Test No. 1 - no contextual distracting elements.
2. Test No. 2 - the external background outside the main field was filled with the same letters that were presented on the main field (see Fig. 1).
3. Test No. 3 - the external background outside the main field was filled with GIFs depicting cats (see Fig. 2).
4. Test No. 4 - the external background outside the main field was filled with various GIFs.

The results are presented in Fig. 3. Total time is the total time for passing one test; Distraction time is the time that the subject looked outside the main field with the task. The diagram shows that tests No. 1 and No. 2 took longer in general. At the same time, the highest proportion of distraction falls on test No. 2, which confirms the hypothesis that using the same letters in the background of the main task scatters attention.
Fig. 1. Distracting UI: multiple similar elements.
Fig. 2. Distracting pictures.
This is due to the artificial expansion of the viewing area caused by the presence of the same letters in the external field, which leads to higher distraction of the subject's attention. One can see that perception is an individual and rather complex process. The location of graphic objects on the screen affects the comprehensibility of the information presented, as well as the speed of its processing by the brain. Dynamic pictures require additional attention, but visual littering with many similar objects is even worse in terms of distracting the user's focus. At the same time, this is exactly the effect that occurs in the interfaces of virtual or augmented reality during the automatic generation of controls and text prompts. This effect should be avoided in practical applications.

Fig. 3. Task processing characteristics for different tests.
5 Conclusion

The model of accented visualization makes it possible to develop software for the experimental study of UI/UX perception features. The performed research of user-interface-generated distraction allowed us to formulate recommendations for UI/UX designers that can help them improve the quality of additional interactive elements, such as contextual advertising. Such an approach can be especially useful for immersive reality developers, since it provides additional information about the perception of complex virtual visual scenes.

Acknowledgements. The paper was supported by RFBR, according to the research project № 20–08-00797.
References

1. Roth, R.: User Interface and User Experience (UI/UX) design. Geographic Information Science & Technology Body of Knowledge. https://gistbok.ucgis.org/bok-topics/user-interface-and-user-experience-uiux-design. Accessed 10 Oct 2022. https://doi.org/10.22224/gistbok/2017.2.5
2. Goel, G., Tanwar, P., Sharma, S.: UI-UX design using user centred design (UCD) method. In: Proceedings of the 2022 International Conference on Computer Communication and Informatics (ICCCI), Coimbatore, India, pp. 1–8. IEEE (2022). https://doi.org/10.1109/ICCCI54379.2022.9740997
3. Nurpalah, A., Pasha, M., Rhamdhan, D., Maulana, H., Rafdhi, A.: Effect of UI/UX designer on front end. Int. J. Res. Appl. Technol. 1, 335–341 (2021)
4. Malewicz, M., Malewicz, D.: Designing User Interfaces. Hype (2020)
5. Soegaard, M., Dam, R.F.: The Encyclopedia of Human-Computer Interaction. Interaction Design Foundation (2014)
6. Dhengre, S., Mathur, J., Oghazian, F., Tan, X., Mccomb, C.: Towards enhanced creativity in interface design through automated usability evaluation. In: Proceedings of the Eleventh International Conference on Computational Creativity (ICCC20), pp. 366–369 (2020). https://doi.org/10.48448/dp01-f312
7. Bai, H.: The exploration of Arnheim's theory of visual perception in the field of art appreciation and review in junior high school. Learn. Educ. 9, 139 (2020). https://doi.org/10.18282/l-e.v9i2.1428
8. Ivaschenko, A., Sitnikov, P., Surnin, O.: Accented visualization for augmented reality. In: Emerging Topics and Questions in Infocommunication Technologies, pp. 74–97. Cambridge Scholars Publishing, Cambridge, UK (2020)
9. Ivaschenko, A.V., Sitnikov, P.V., Diyazitdinova, A.R.: Accented visualization application in interactive manuals for technical training and support. J. Phys: Conf. Ser. 1691, 012122 (2020)
10. Lyamin, A.V., Cherepovskaya, E.N.: An approach to biometric identification by using low-frequency eye tracker. IEEE Trans. Inf. Forensics Secur. 12(4), 881–891 (2017)
11. Cochrane, T., Sissons, H.: An introduction to immersive reality. Pac. J. Technol. Enhanced Learn. 2, 6 (2019). https://doi.org/10.24135/pjtel.v2i1.28
12. Nelson, H., Jr.: Immersive reality – an introduction. In: Proceedings of the Offshore Technology Conference, Houston, Texas, US (2013). https://doi.org/10.4043/13005-MS
Information Security Enhancements of the University’s Automated Information System Dmitry Tarov, Inna Tarova, and Sergey Roshchupkin(B) Bunin Yelets State University, Yelets, Russia [email protected], [email protected]
Abstract. The article is aimed at studying the means and methods for improving the information security of the automated information system of a modern university. The authors consider the Russian educational system as part of a modern society that reproduces its structure as a centralized, hierarchical management system, which, in turn, makes it possible to incorporate central development planning mechanisms into the management process. At the same time, the authors believe that this makes it possible to apply the same integrated approaches in the process of automating university management as in corporate security systems, including in organizational terms. The article specifies the regulations to be taken into account when developing and building an automated university information system and ensuring its information security. Analyzing the options for developing the information system of the university, the authors conclude that the best way to proceed is to develop one's own design, and they highlight its most important aspects. As the key means of ensuring the information security of the automated information system of the university, the authors propose its segmentation and the virtualization of user workplaces. #CSOC1120.
1 Introduction

Modern universities make extensive use of network resources; the need to integrate these into automated information systems and to protect them against network threats encourages the development of concepts for an intelligent information and telecommunication university security system [1]. At present, various aspects of this problem are being actively investigated: threats to the network infrastructure of the university are analyzed [2], assessment models are proposed [3], the vulnerabilities of network platforms for disseminating experimental data are investigated [4], the capabilities of various artificial intelligence training algorithms for detecting network attacks are studied [5], the work of information security specialists in Russian higher educational establishments is organized [6], manuals on organizing computer incident prevention teams and developing new security policies are produced [7], and the legal aspects of countering network threats are considered [8]. The Russian educational system is an integral element of Russian society; its structure has repeated and still repeats a centralized, hierarchical management system,
which, in turn, makes it possible to use the mechanisms of centralized development planning in the management of both the entire system and its elements, in particular Russian universities [9]. This makes it possible to apply the same methods to the automation of university management as are used in automating the management of industrial production or a corporation in a planned economy, and hence to rely on established, traditional rules for organizing document flow and office work. This fact allows us to use the same integrated approaches in the development and operation of the information security system of the university as in corporate security systems [10], including in organizational terms [11]. However, this type of management arrangement traditionally has resource limitations, both human and logistical, which inevitably complicates the automation process [12]. Experience with vertical organization management models shows that automation at the lower level is limited to the functional responsibilities of staff members, and at the middle level to the tasks of business units. At the highest level of the organization, the automation of management affects not only the coordination of the work of structural units but also, relying on data received from the lower and middle levels of management, requires the prognostication inherent in expert systems. All of the above is, of course, true of the automation of the university management process. The purpose of this study is to find ways to build a university information security system, as well as to increase the efficiency of an existing one, that, in addition to the usual requirements for such systems, also meets the following criteria: first, minimizing the costs of development, construction and subsequent operation; second, using, in the development and subsequent operation of the system, software and hardware components that are minimally exposed to sanctions restrictions.
2 Methods

All of these features require automating, as far as possible, the user functions of the lower level of management, due to their large number and their influence, through the data provided, on the activities of the middle and higher levels of management. Examples include support for automatic reporting, the maintenance of statistics and of electronic workflows according to the forms and standards embedded in the system and adjusted at the highest and middle levels, and the automation of accounting calculations related to the appointment and change of official salaries, student scholarships, etc. This reduces the time spent on official duties, which, on the one hand, increases labor efficiency and, on the other hand, gives users an interest in keeping the data loaded into the system up to date. Since the developers of the automated control system cannot influence the educational process, while that process is simultaneously subject to change as educational standards change, the possibility of flexibly adapting the information systems included in the university management system at the subsystem level should be provided for already at the conceptual level: for example, the correction of curricula with the introduction of new GEF, the variability of the disciplines studied by students of the same group, and changes in the status of students not within the semester but on the dates specified in specific orders, etc.
The information system of an educational establishment, including a university, is very similar to the information systems of modern enterprises: it collects, stores, processes and issues, upon request, significant amounts of data, among which there are confidential data, for example, the personal data of employees, teachers and students, and information relating to intellectual property or containing commercial or state secrets. Within the framework of our study, the university information system will be understood as a software-hardware complex of interconnected information systems and the data they process. It should be noted that the university, being a public place, has a changing audience with, as a rule, fairly high competencies in the field of information and communication technologies, which dictates specific requirements for the construction of user authentication and information security systems. It should also be borne in mind that the legitimate educational activities of students studying in IT-related fields can be interpreted by traditional corporate information security tools as attempts at unauthorized network activity, which requires special approaches to localizing the network activity of computer science training laboratories. The university's information system is also characterized by a great variety of scientific, project and educational methods, a developed system of support services, a considerable geographical dispersion of branches, and the need for continuous active monitoring of external networks, including the Internet. In the construction of authorization and user authentication systems, consideration should be given to the need for frequent changes in the status of staff members, especially those at different levels of education. The problem is partially solved by the strict hierarchy of the organizational structures of the university, which makes it possible to develop and implement a set of measures to ensure the information security of an educational institution and, in the process of its automation, to rely on the administrative and legal resource. Automated monitoring of the relevance of accounting data is the basis for the automation of middle- and upper-level users, whose requirement for an automated management system is reliable statistical information on which management decisions are made. The complexity of the management system, due to the complex structural organization of the university, leaves no room for doubt about the need to introduce a set of interconnected information systems that automate the main functions of the educational process and combine the main structures of the university at the data flow level (rector's office, directorates of institutes, personnel department, accounting, legal department, etc.). Three ways of creating automated information systems are now available to corporate users: the purchase and installation of a ready-made software and hardware system, the construction of a control system on the basis of ready-made ERP systems, and, finally, self-development [13]. From the point of view of building an automated university information system, all these options have not only positive but also negative aspects.
For example, the purchase of a ready-made hardware and software system with its subsequent support, on the one hand, removes the problems of its development, installation and support and is the least financially costly; on the other hand, it does not fully satisfy all the customer's needs, since it is unified with the contractor's other products. In addition, the absence of all-Russian regulations and the widespread use by Russian universities of their own internal procedures and regulations on the
organization of studies, research and other activities serve as an additional deterrent to the dissemination of ready-made corporate management automation solutions. The development of an automated information system based on ready-made ERP systems will be more expensive than buying a ready-made solution, especially in the long run, due to significant license fees. In addition, the construction of such a system would be labor-intensive because of the need to customize the ready-made product to the specific needs of the user. The third option, the development of the university's own automated information system, seems to us the best. Let us highlight the aspects we consider central to building an automated information system of the university: economic, organizational, technical and technological. Considering the creation of an automated university information system from an economic point of view, it should be remembered that its cost consists of many components: the cost of design, of equipment, and of license fees for the software involved, both the software used in developing the information system and the software included in it. To these payments must be added the amounts allocated for the training of personnel and the operation of the created information system. When organizing work to create a university information system, internal university regulations and provisions should first be coordinated with each other, i.e. workflow should be standardized and data flows within the university formalized. At the same time, it is necessary to adapt the information systems already available in the educational institution and to provide for the regular integration of new provisions, regulations and federal standards. Secondly, it is necessary to create a specialized structural unit responsible for the deployment, operation and further modernization of the university's automated information system. Thirdly, on the basis of the created structural unit and the IT departments, it is necessary to create and regularly conduct refresher courses for users of the university's automated information system. Considering the creation of an automated information system of the university from a technical point of view, it should be taken into account that the system must maintain a high degree of availability of the services offered and protection against data loss due to software and hardware failures, unauthorized access or user errors, i.e. it should provide network traffic filtering at the level of network protocols and addresses [14]. Therefore, the information system must run on high-performance, modern hardware that provides acceptable speed and stability of access under simultaneous access by many users and provides data backup, in some cases in real time. In addition, user places should be provided with the peripheral equipment that enables users to perform their job duties. From a technological point of view, the developed automated information system of the university must comply with the laws and regulations of the Russian Federation.
In particular, the system being developed must comply with the concept of the Ministry of Education and Science of the Russian Federation regarding the creation of a single digital platform: it must provide data at the request of users according to the Data-as-a-Service (DaaS) model; it should provide a remote access mode for users, including foreign ones; it must support integration with external, including foreign, information systems; and it must implement technological interfaces to the systems that are part of the information system of
the university as subsystems and contain methodological, educational, scientific, technical and scientometric information, as well as interfaces for the services of the scientific infrastructure of collective barrier-free access [15]. From our point of view, a distributed database created on the basis of a certified multiplatform database management system (DBMS), for example, Oracle, DB2, etc., will most fully meet the specified requirements. The university's automated information system should contain the software needed for the normal functioning of the university's structural units and services. The system should be provided with a multi-level user authorization and authentication system and an integrated information security service; it should use secure data transfer protocols and virtualized user workstations. From the point of view of architecture, the system to be developed should, on the one hand, have a single information space for the entire university; on the other hand, the services serving such departments as the administration, accounting, the personnel department and the legal department should be allocated to separate isolated clusters, as should the computer science laboratories where students take classes. The development and deployment of an automated university information system is divided into several stages. At the first stage, the functionality of the structural units and services of the university is specified and the data flows between and within them are identified, which makes it possible to build a sufficiently accurate scheme of the actual document flow within the university and to create a model of the electronic document flow of the organization. At the second stage, the resulting schemes and models make it possible to create a hierarchical model of the structure of the university, reflecting the functioning of the structural units and services of the university and the flow of documents between them. At the third stage, the resulting model allows us to select, taking into account the needs of developers and the job responsibilities of users, a software package, including the software integrated into the system being developed and the equipment with whose help the automated information system of the university will be developed, operated and modernized. From the point of view of ensuring the information security of the system being developed, we single out the levels requiring different approaches: 1) the hardware components of the network, user workstations, network infrastructure and data storage; 2) the complex of system and network software, including authorization and authentication subsystems, antivirus software, and network services, both those provided by the information environment of the university and external ones; 3) the system and application software of user workplaces. It should be noted that before proceeding with the design and development of an automated information system of the university, it is necessary to harmonize the technical and organizational requirements for information security between the university services and to develop appropriate provisions common to the entire organization, as well as requirements for the software and hardware components of the future information system.
It is regrettable to note that there is often a lack of uniform regulations and provisions for the information security of universities, and that the relevant services are financed on a residual basis, which leads to the use of the cheapest hardware components and heterogeneous software solutions that are not always consistent either with each other or with user requirements. The situation is somewhat better with the introduction of university client-server systems created on the basis of
common software solutions for companies, which allow organizing access not only to the internal resources of the university but also to the external information resources necessary for its activities. An analysis of the external and internal network threats to which university information systems are usually exposed allows us to identify several potentially dangerous areas: 1) external network traffic; 2) the workplaces of users who lack the competencies needed to maintain an appropriate level of information security; 3) computer laboratories accessible to students with sufficient knowledge to pose a threat to the information security of the university. After analyzing these threats, we indicate the following necessary actions: 1) identification and ranking of the university information system facilities most attractive for unauthorized access; 2) evaluation of the information security management system applied by the organizational units and services, as well as by the university as a whole; 3) identification of the organizational, software and hardware vulnerabilities of the information system, compilation of their ranked list and development of a system of measures to eliminate them; 4) assessment of the potential damage from data loss due to software and hardware failures or unauthorized access. Based on our experience of administering the university's information system, we identify the following groups of potential violators of the university's information security:

• administrators of the university's automated information system, who have access to all subnets of the university's computer network and, therefore, to almost all information. Countermeasures come down to network segmentation and restricting administrators' access to only those segments for which they are responsible;
• users of the information system, including students, who have access to a specific user workplace and to the relevant educational or job functions within the automated information system of the university. Countermeasures come down to monitoring user activity and restricting access within a particular network segment;
• external network traffic, which poses a threat of unauthorized access to confidential data, disruption of network connections, infection of the university's automated information system with computer viruses, etc. Countermeasures consist of continuous automatic monitoring of external connection attempts and regular updating of antivirus databases and of the software integrated into the system.
Our work experience allows us to localize the objects of the university information environment that are most susceptible to network threats and need constant protection: 1) a management subnet containing the local networks of the administration, the directorates of institutes, the accounting department, and the methodological, legal and personnel departments, because they contain confidential information and the personal data of employees, teachers and students; 2) a subnet that serves the scientific and technical activities of the university and includes the local networks and servers of research and technical laboratories and the servers of distributed databases, because they may contain information related to commercial or state secrets; 3) a subnet that provides educational activities, containing the local networks of educational computer laboratories and libraries, to which students with sufficient knowledge in the field of information technologies have access, which can pose a threat
to the information system of the university. To ensure an appropriate level of information security, i.e. to exclude unauthorized access to confidential data or their loss due to software and hardware failures, we consider it expedient to segment the information system of the university, i.e. to isolate the control subnet and the subnet serving scientific and technical activities from the subnet providing educational activities and from external traffic. From our point of view, to ensure an appropriate level of information security, the control subnet should have access, among external resources, only to state resources and to an internal mail server that exchanges e-mail through the mail server of the scientific library belonging to the subnet providing educational activities. We understand that this is a potentially dangerous place, but we have to put up with it, especially since it is easier to protect one channel from external network threats than several. The subnet providing scientific and technical activity is completely isolated from external traffic and has no communication channels with the other subnets of the university information system. When constructing the control subnet and the subnet providing scientific and technical activities, a software and hardware complex was used to provide data backup, including in real time. We consider the virtualization of user workplaces to be one of the key elements in ensuring an appropriate level of information security of the university's information system. Virtualization of user workplaces is provided by allocating to users (employees, teachers and students) the software and hardware resources of the university information environment necessary to fulfill their job or training duties, without being tied to specific hardware systems and while isolating their computing processes at the logical level. In the case of our university's information environment, the virtualization of user workstations is achieved through their work in terminal mode. The user sees an ordinary desktop with all the elements of the interface; the user desktop in this case is simply a directory in the file structure of the remote computer, visualized by means of the user interface. From a functional point of view, the virtualization of the user's workplace implies remote access to the university's information system through a user interface controlled by a system administrator, whose duties remain traditional:

• control of the information security event log and automatic logging of the actions performed by users of the university information system;
• maintenance of the group policy of the university information environment: creating, editing and deleting user accounts;
• regular archiving of critical data and, if necessary, monitoring of automatic data backup and data preservation;
• ensuring the proper level of information security of the entrusted segment of the information system.
3 Results and Discussion

With the above-described way of organizing the automated information environment of the university, system administrators, responsible both for their own segments of the system and for the information system as a whole, are able to automatically monitor, in real time, the user virtual workstations connected to the information system. This can
be provided both by standard operating system tools, for example Windows Performance Analyzer, and by third-party software, both free (Ntopng, Zabbix, Observium) and paid (vFabric Hyperic, HP Operations Manager, etc.). The advantages of virtualization of user workplaces include the possibility of organizing remote work for employees and students, automated monitoring of the information system of the university, and savings for the organization when purchasing and upgrading elements of the software and hardware complex. Separately, it is worth noting that the virtualization of user workplaces helps resolve an ethical and legal problem: with an information system in its usual sense, i.e. one tied to a specific software and hardware complex, users are forced, in order to ensure information security, to take on duties and bear responsibility beyond their immediate job responsibilities (account management, saving data related to their job responsibilities, qualified configuration and operation of the hardware and software parts of the workplace, etc.). Virtualization of employee workplaces allows transferring the responsibilities related to the storage of confidential data, the qualified use of the network hardware and software complex, and the information security of the university to system administrators who have the necessary competencies.
4 Conclusion

Virtualization of user workstations, as a means of improving the information security of the university's automated information environment, makes it possible to reduce the information security threats associated with the instability of workstations caused by non-unified software and hardware components, as well as with unauthorized access attempts. In addition to the virtualization of user workstations, the solution to the problem is the introduction of two-factor authentication of users through personal keys, tokens or smart cards, which makes it possible to unify the access systems for the various segments of the information system. Practice shows that in some cases it is permissible to use users' smartphones instead of personal keys, tokens or smart cards in two-factor authentication, provided that the SIM cards for them are issued and serviced by the university. The widespread use of smartphones and the introduction of fingerprint scanners even on inexpensive models allow them to be used, in conjunction with software that supports fingerprint authentication, such as WhatsApp, for the authentication of students in computer training laboratories. However, it is more productive in this case to use client-server programs: the client part resides on the user's smartphone, identification is carried out by fingerprint, and the user's identification data are transmitted over a Wi-Fi connection, which allows the server part to open the account; the login is automatically recorded in the security log. From our point of view, this is the optimal price/quality ratio. The authors express their gratitude to the management of Bunin Yelets State University for the financial support of this study.
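To make the client-server flow described above concrete, here is a minimal, hedged server-side sketch; the message format, helper names and token scheme are our own assumptions, not the authors' implementation (in particular, a real system would transmit only a fingerprint-derived token, never raw biometric data):

```python
# Hypothetical sketch of the server side of the fingerprint-based login flow.
import hmac, hashlib, logging

logging.basicConfig(filename="security.log", level=logging.INFO)

# Server-side store: user id -> HMAC of the fingerprint-derived identifier.
SECRET = b"per-deployment-secret"
enrolled = {"student42": hmac.new(SECRET, b"fp-template-42",
                                  hashlib.sha256).hexdigest()}

def handle_login(user_id: str, fp_token: str) -> bool:
    """Open the account if the fingerprint-derived token matches enrollment."""
    expected = enrolled.get(user_id)
    ok = expected is not None and hmac.compare_digest(
        expected, hmac.new(SECRET, fp_token.encode(), hashlib.sha256).hexdigest())
    # Every attempt is recorded in the security log, as required above.
    logging.info("login user=%s success=%s", user_id, ok)
    return ok

print(handle_login("student42", "fp-template-42"))  # True
print(handle_login("student42", "wrong"))           # False
```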
References

1. Amosov, O.S., Baena, S.G.: Design concept of the intelligent information and telecommunication system of university security. In: Proceedings of 2018 3rd Russian-Pacific Conference on Computer Technology and Applications (RPC), pp. 1–5. Vladivostok (2018)
2. Anikin, I.V.: Information security risks assessment in telecommunication network of the university. In: Proceedings of 2016 Dynamics of Systems, Mechanisms and Machines (Dynamics), Omsk, Russia, pp. 1–4. IEEE (2016)
3. Ramanauskaitė, S., Urbonaitė, N., Grigaliūnas, Š., Preidys, S., Trinkūnas, V., Venčkauskas, A.: Educational organization's security level estimation model. Appl. Sci. 11, 8061 (2021)
4. Yazawa, S., Sakaguchi, K., Hiraki, K.: GO-E-MON: a new online platform for decentralized cognitive science. Big Data Cogn. Comput. 5, 76 (2021)
5. Ahmed, H.A., Hameed, A., Bawany, N.Z.: Network intrusion detection using oversampling technique and machine learning algorithms. PeerJ Comput. Sci. 8, e820 (2022)
6. Boltyonkova, E., Andreev, A., Doronin, A.: Development of measures to ensure information security in structural division of the university. In: Proceedings of International Scientific Conference on Energy, Environmental and Construction Engineering (EECE), p. 08005. Saint-Petersburg (2019)
7. Villegas-Ch, W., Ortiz-Garces, I., Sánchez-Viteri, S.: Proposal for an implementation guide for a computer security incident response team on a university campus. Computers 10, 102 (2021)
8. Skopik, F., Settanni, G., Fiedler, R.: A problem shared is a problem halved: a survey on the dimensions of collective cyber defense through security information sharing. Comput. Secur. 2016, 154–176 (2016)
9. Abdikeev, N.M., Zav'yalova, N.B., Kiselev, A.D.: Corporate Information Control Systems. Moscow (2014)
10. Goryunova, V.V., Goryunova, T.I., Molodtsova, Y.V.: Integration and security of corporate information systems in the context of industrial digitalization. In: Proceedings of 2020 2nd International Conference on Control Systems, Mathematical Modeling, Automation and Energy Efficiency (SUMMA), Lipetsk, Russia, pp. 710–715. IEEE (2020)
11. Kim, Y., Kim, B.: The effective factors on continuity of corporate information security management: based on TOE framework. Information 12, 446 (2021)
12. Orlov, S.N.: Internal Audit in a Modern Corporate Management System. Moscow (2019)
13. Maksimyak, I.N., Potapov, M.L.: The need for comprehensive automation of educational activities management of higher education institutions in modern conditions. Sci. Educ. Probl. Ideas Innovations 2019, 71–74 (2019)
14. Tarov, D.A., Tarova, I.N.: International Journal of Applied and Fundamental Research 12, 131–135 (2019)
15. Concept for the creation of a unified digital platform of science and higher education of the Ministry of Education and Science of Russia. https://minobrnauki.gov.ru/files/20190705_Kontseptsiya_ETSP_1.4.9.pdf. Accessed 10 Oct 2022
A Review of Theories Utilized in Understanding Online Information Privacy Perceptions William Ratjeana Malatji(B) , Rene VanEck, and Tranos Zuva Department of Information and Communication Technology, Vaal University of Technology, Andries Potgieter Blvd, Vanderbijlpark 1900, South Africa [email protected], {Rene,tranosz}@vut.ac.za
Abstract. Researchers from different fields of study have applied numerous theories to study and understand users' online information privacy concerns, behaviours, attitudes and preferences, as well as the outcomes of their behaviours on the website. To date, only a few studies have reviewed and integrated the theories in the literature to produce an integrated theoretical framework for online information privacy. This study reviewed twelve theoretical frameworks used in online information privacy, following the common approach of the literature review. The initial search resulted in over 102 studies from the ScienceDirect, EBSCO and Scopus databases. The findings revealed that the reviewed theoretical frameworks have limitations recognized in the literature, including conflicting norms/duties, overlapping spaces, minimization of emotional response, ignorance of biological differences and hormonal responses, missing constructs such as trust, and an explicit focus on individuals. Criticisms of the reviewed theoretical frameworks were also noted in the literature, such as oversimplified views of social influence on individuals, theories not being falsifiable, and the use of surveys as a form of cultural inquiry. Finally, the findings revealed open issues in online information privacy such as hidden cookies, unsecured location sharing, scanning cyber security, and data never forgetting individuals. The study recognizes that there are various theories or frameworks in this field, all of which deal with privacy behaviour, implying that future studies should combine theories or frameworks to better understand users' online information privacy perceptions. Keywords: online information privacy · theoretical framework · information disclosure
1 Introduction

In the study of the interface between society and internet technology, privacy has emerged as a fundamental concern. At the individual level, scholars have shifted their focus to internet users' concerns about online privacy, including users' behaviour following these concerns [1]. The online sharing of personal information and its implications for users' behaviour, as well as for website policies and practices, were found to be the most important topics in this issue. However, this topic is addressed by researchers of various disciplines from a single analytical level [1].
A lot of research has been done on users' concerns regarding information privacy from many theoretical approaches [2]. Academics from various disciplines have adopted numerous established theories, including the theory of planned behaviour, the theory of reasoned action, the social presence theory and the procedural fairness theory, to study online information privacy. However, a systematic review and integration of these theories has yet to be completed in the literature [2]. As the popularity of online platforms grows, so does the potential for addiction to these technologies; however, a theoretical understanding of how online platform habits develop is still lacking, which calls for the review and integration of theories and models explaining users' online information privacy [3]. Previous studies [4, 5] noted that the lack of studies reviewing and integrating theories of online information privacy leaves numerous limitations in the literature. Firstly, while the same phenomenon, namely "privacy-driven behaviour", has been investigated by numerous theories, it is discussed from different views or prospects with different prominence: some theories concentrate on the organizational factors that influence individuals' privacy perceptions, while others concentrate on persons' internal reactions to outside factors [4, 6]. Such disparities in theoretical emphasis show that combining various theories in one study could lead to more successful outcomes in comprehending the phenomenon, necessitating a review, juxtaposition, and integration of ideas [2]. Secondly, although the theories approach issues of privacy from various angles, the existing linkages between them must be understood. The privacy calculus theory, for example, is one typical way to assess individuals' behaviour with respect to information disclosure, implying that one's willingness to release information depends on a juxtaposition of anticipated benefits and perceived risks under a particular scenario [7–9]. Lastly, aspects of theories that have not received sufficient attention in the literature need to be identified and reinforced in future studies [3]. This paper examines twelve theories regarding information privacy on online platforms. It offers a general overview of the theoretical foundations of the field, even though numerous studies have been undertaken in it [1–3, 10, 11]. The study is arranged as follows: section two provides the methodology, section three presents the review, section four gives the challenges and open issues in online information privacy, and section five provides the study limitations, implications and practices, followed by the conclusion.
2 Methodology

The standard approach to conducting a literature review was followed in this study. In line with the study objectives, the initial step was to choose theories from the literature. In this study, "a theory refers to the statement of the relationship between variables or constructs and established theories are those that contain specific sets of variables or constructs in certain relationships that are consistently studied across literature" [2]. The next step was to select theories that were researched at the personal level. The final step was the inclusion of empirically tested theories. Theories that were not tested at the personal level, as well as those that were not empirically tested, were excluded. These three steps were used as the selection criteria for articles from the literature.
The search focused on articles published since 1996 when a widely used scale to assess data privacy concerns was created [12]. Over 102 studies were found through online search from ScienceDirect, EBSCO and Scopus databases. The next section discusses the review of theories together with definitions.
3 Reviewing Theories

Online privacy, "sometimes called internet privacy or digital privacy, relates to how much of your personal, financial, and browsing information is kept secret". This continues to be a growing concern, as browsing history and individual information are both at risk when utilizing the internet [13]. Researchers from different fields of study have applied numerous theories to study and understand users' online information privacy concerns, behaviours, attitudes and preferences, as well as the outcomes of their behaviours on websites. This section provides a review of the established theoretical frameworks applied in these studies.

Theory of Reasoned Action (TRA)
The theory of reasoned action aims to fully describe the relationship between user attitudes and behaviours within human action. It is mainly used to predict how persons will behave based on their pre-existing attitudes and behavioural intentions [14]. According to this theory, an individual's decision to engage in certain conduct depends on the results that the individual anticipates from doing so [14]. Recent studies have adopted this theory in online information privacy research. For example, to address the inconsistency of research findings in the field of online information privacy, [15] used TRA, as well as TPB, and conducted a study on "A meta-analysis to explore privacy awareness and information disclosure of internet users." Following [15], [16] integrated TRA and the privacy-trust-behavioural intention model (PTM) to find out "if privacy assurance on social commerce sites matter to millennials"; the motivation for that study was the inadequate research on privacy issues, even though trust has been extensively studied in the literature [16]. Similarly, TRA was utilised by [17], in combination with social role theory, to analyse gender dissimilarities in individuals' information-disclosure decisions on SNS. That study was prompted by the observation that women and men use information technology differently, according to data from information systems research, yet not enough research has been done to explain why these discrepancies exist [17].

Theory of Planned Behaviour (TPB)
TPB was developed from TRA and suggests that, in addition to attitude and subjective norm, a person's perceived behavioural control (PBC) influences behavioural intention [18]. Like TRA, the TPB's overall goal is to anticipate and describe individuals' behaviour under specific circumstances. The theory of planned behaviour posits that a behaviour is determined by behavioural intention, as well as behavioural control [18]. It specifies that the determinants of behavioural intention include not only the attitude toward the behaviour and the subjective norm, as in the TRA, but also the perceived behavioural control [18]. This theory has been adopted by recent studies in the area of online information privacy. For example, [19] noted that even though individual
information privacy is recognized in the marketing literature as a fundamental theme for both online and offline contexts, the existing literature focused only on trust and perceived risk in privacy concerns. Furthermore, available studies focused only on the direct impact of privacy concerns on the actual online purchase or on the willingness to purchase online [19]. To address this gap, the theory of planned behaviour and the technology acceptance model were integrated to examine how privacy concerns regarding the internet influence the user's willingness to make online purchases [19]. Later on, [20] noted that many studies were conducted to investigate privacy concerns and the intentions of customers to share personal information online; however, only a few studies focused on Saudi Arabia. As a result, they used TPB in combination with TRA and privacy calculus theory to investigate the factors that hinder or motivate a consumer's intention to share personal information on e-commerce websites [20]. Likewise, [21] combined TPB and TAM to discuss and analyse the influence of perceived risk on users' online shopping willingness.

Hofstede's Cultural Dimensions Theory
This theory was developed to comprehend the cultural differences between countries and to distinguish ways of doing business between different cultures. To be specific, the framework is used to discern the various national cultures and cultural dimensions, and to evaluate their impact on a commercial environment [22]. The theory suggests that to be able to have respectful cross-cultural relations, we have to be aware of the cultural differences [22]. Similar to other theories, this theory was adopted by recent studies in online information privacy. For instance, [23] noticed that numerous studies evaluated the privacy concept across cultures and legal jurisdictions; however, there is no universal consensus on the privacy definition even though similarities are present. To address the gap, a study was conducted on "the relationship between culture and information privacy." This study adopted some measuring variables from Hofstede's theory of cultural dimensions [23]. Likewise, [24] realized that many studies conducted on cloud storage focused only on user willingness or experience related to the utilization of cloud storage, while only a few studies focused their research on security and privacy-related issues in cloud storage. As a result, they conducted an empirical study to understand users' intention to put their information on individual cloud-based storage applications [24]. This study adopted Hofstede's theory of cultural dimensions in combination with Communication Privacy Management Theory (CPMT). In this study, cultural dimensions were used as moderating factors [24].

Technology Acceptance Model (TAM)
TAM was developed based on TRA to predict and describe information systems use by end-users. By incorporating a small number of key variables derived from the literature on cognitive and affective determinants of technology adoption into the theoretical framework of TRA, the model allows for the tracking of the successive impact of external variables on an individual's beliefs, attitudes, intentions and behaviours [25, 26]. TAM indicates that the use of a specific technology is explained by behavioural intention, following the TRA. In turn, the attitude toward the use and perceived usefulness both influence the intention.
Both perceived usefulness and perceived ease of use account for the attitude. The TAM also maintains that perceived ease of use determines perceived usefulness. Finally, external variables might influence the perceived ease of use and
usefulness of a product [25, 26]. In online privacy, TAM has been employed both in its original form and in combination with other consumer behavioural models, such as the TPB. It is anticipated that combining the TPB and the TAM will improve the explanatory and predictive capacity of models of online purchasing behaviour. As with the theory of planned behaviour, [19] and [21] adopted TAM together with the TPB to conduct their online information privacy studies.

A Revised Unified Theory of Acceptance and Use of Technology (UTAUT2)
This theory was formulated to explain users' intention to utilize an information system and their subsequent utilization behaviour. In this theory, age, gender, voluntariness, as well as experience of use, are postulated to moderate the influence of the four main constructs on usage intention and behaviour [27]. Previous studies created and empirically tested a model to predict "factors that have an impact on students' behaviour towards using mobile learning" [28]. The study extended the model to include trust and perceived risk among the factors predicting "students' behaviour towards using mobile learning" [28]. Following [28], [29] noticed that a great deal of research work had been done that did not consider privacy issues. Though engineers are aware of privacy issues, they did not apply the implications of privacy in their systems. Thus, [29] conducted a study that adopted UTAUT2 on the "motivation of information system engineers' acceptance of privacy by design in China: an extended UTAUT model." This shows that UTAUT is being extended and adopted in the area of online information privacy.

Agency Theory
The agency theory is utilized to describe and provide solutions to the problems in the interaction between business owners and their agents [30]. By definition, the agent utilizes the resources of a principal. The agency theory suggests that the interests of the principal and the agent are not always aligned [30]. Moreover, the theory implies that because knowledge of the agent's behaviour is frequently inadequate and asymmetric, the principal is unable to properly supervise the agent's behaviour before and after transactions, allowing the agent to serve his or her self-interests instead of the principal's [30]. The agency relationship exists in online transactions because the customer, namely "the principal", submits personal information to the merchant, namely "the agent", in exchange for goods and services [2]. Both parties are self-interested; however, the information asymmetry is in favour of the online merchant, who gathers and utilizes customers' information before and after transactions. That is where uncertainties like perceived risk come into existence regarding the use of information. Thus, the customer needs to decide whether to share information to take part in the transactions and, if that is the case, how potential risks can be mitigated [2]. A good example of a study that adopted this theory for online information privacy was the study conducted by [31] on the "ethical exploration of privacy and radio frequency identification."

Social Contract Theory
Another tool for determining the source of consumers' privacy issues is the social contract theory. This theory suggests that providing personal information to a digital merchant entails not only an economic exchange, namely "purchasing goods and services", but also a social exchange, namely "forming relationships", and that the social
contract, explained as the parties' mutually comprehended obligations or social norms, is critical to preventing the merchant's opportunistic misuse of customer information [32, 33]. In online information privacy, [34] noted that some measures taken to safeguard user privacy incorporate procedural fairness and consumer privacy; however, it is doubtful that such actions to safeguard privacy have been successful. As a result, [34] conducted a study on the "online privacy and security concerns of consumers." The study integrated the social contract theory, TRA, as well as TPB to develop the conceptual framework. Later on, [35] adopted social contract theory in their study titled "information privacy policies: the effect of policy characteristics and online experience." Following [35], [36] adopted the same theory in their study titled "understanding privacy online: development of a social contract approach to privacy." The purpose of this study was to evaluate the ways in which privacy standards grow through the social contract narrative, to explain privacy violations given the social contract approach, as well as to critically review the role of business as a contractor in the development of privacy standards [14].

Social Presence Theory
This theory was originally defined by [37] as "the degree of salience of the other in the interaction and the consequent salience of the interpersonal relationships." The theory was later refined by [38] as "the degree to which a person is perceived as a 'real person' in mediated communication." According to this theory, the level of social presence for a specific activity should correspond to the level of interpersonal interaction required for the work. With regard to website communications, the theory states that the similarity between the online environment and physical engagement with a human agent creates a sense of social presence and establishes the website as a trustworthy social actor, reducing privacy concerns [4]. Thus, another way for online businesses to alleviate clients' privacy concerns is to boost the website's social presence. Recent studies have adopted this theory for online privacy. For example, it was noted by [39] that, despite being important concepts in human studies, behaviours, attitudes, subjective norms, as well as social presence, received little attention in the literature. Only a few studies have surveyed the influence of social presence on privacy concerns, and those produced inconsistent results [39]. Thus, they adopted social presence theory in their study to examine the influence of the disposition towards privacy and the website's social presence on an individual's privacy concerns, as well as on the subjective norm of disclosure. Following [39], [40] adopted the same theory in their study of "simulating the effects of social presence on trust, privacy concerns and usage intentions in automated bots for finance."

Social Response Theory
According to the social response theory, even though computers do not have feelings, identities, or human motivations, humans tend to perceive them as social actors rather than a medium [41]. People follow social rules or social behaviours when responding to computers that exhibit humanlike traits or send social cues, i.e. when they are presented with IT that has a set of human characteristics (e.g., interactivity) [41]. In the field of information privacy, [42] noticed that users are not aware of the privacy risks involved when they disclose personal information in the network. To address the above-mentioned gap, a study was conducted to measure privacy in online social networks.
This study utilized the social response theory to measure privacy leaks [43]. Likewise, [44] described a network-aware privacy score that improved the measurement of user privacy risk according to the network characteristics. This study also adopted the social response theory.

Social Cognitive Theory
The social cognitive theory takes into account the specific ways in which users acquire and sustain behaviour, as well as the social setting in which they conduct it. This theory considers an individual's past experiences, which factor into whether or not they will engage in behavioural action [45, 46]. These past experiences impact expectations, reinforcements and expectancies, all of which shape individuals' behaviour, as well as the reasons why they engage in that behaviour. The main construct of this theory is the "self-efficacy belief", which is defined as an individual's judgement of their own capability to plan and carry out the steps necessary to achieve a goal. A higher level of self-efficacy indicates a greater intention to complete a task [45, 46]. Likewise, a few recent online privacy studies adopted this theory. One good example of a study that utilized the social cognitive theory in online privacy is the study conducted by [47]. The purpose of the study was to compare cross-culturally how users in China and Australia adopted cloud computing services. The study noted that for many users to adopt cloud computing technological innovation, a better understanding of consumer adoption issues for online retailers is required [47]. Furthermore, the study noted that the increase in interest in cloud computing calls for research to determine the reasons behind cloud computing purchases. As a result, the study adopted the social cognitive theory in combination with TAM to investigate those factors [17].

Privacy Calculus Theory
This theory implies that persons consider future reactions. When dealing with the communication of personal information, people engage in an autonomous risk-benefit analysis or trade-off [48]. This theory indicates that one's willingness to share personal information depends on a calculus behaviour, namely the "privacy calculus." In the privacy calculus, customers undertake a risk-benefit analysis and decide whether or not to divulge information depending on the net outcome [48]. Studies such as [49] adopted the privacy calculus theory. This study aimed to analyse the factors influencing the intention to disclose information at the risk of a privacy breach. The study was conducted from the perspective of the privacy calculus in IoT services [49]. Another example of a study that adopted the privacy calculus theory is the study conducted by [50]. This study was conducted from the perspective of the privacy calculus with the intention of examining the influence of social and institutional privacy concerns on users' long-term engagement with social media-enabled apps. The motivation behind the study was that most privacy-focused mobile app studies concentrate on initial adoption and ignore long-term behavioural outcomes [50].

Protection Motivation Theory
This theory was originally developed to assist in comprehending individual responses to fear appeals [51]. It suggests that humans protect themselves depending on two factors: coping appraisal and threat appraisal. Threat appraisal evaluates the seriousness of the situation, whereas coping appraisal evaluates how one responds
to the situation [51, 52]. PMT is a model that describes why individuals participate in unhealthy behaviours and makes recommendations for how to change them. In online information privacy, [53] conducted a study to "examine privacy settings on online social networks" from the perspective of protection motivation theory. This study was conducted after [53] noted that the importance of 'privacy settings' as a frontline of defence against information misuse has grown due to the easy accessibility of information on online social networks (OSN) such as Facebook. Following [53], [54] adopted the same theory to examine how it can be used in the design of nudges to improve online security behaviour. Table 1 provides the limitations and criticisms of the theories reviewed in this study, as well as examples of studies that noticed those limitations and criticisms.

Table 1. Summary of reviewed theories

| Theory | Limitations | Criticisms |
|---|---|---|
| TRA [55, 56] | Implicitly takes a cognitive approach to attitudes | The theory is not falsifiable |
| TPB [56] | Implicitly takes a cognitive approach to attitudes | By nature, the latent variables that are thought to mediate behaviour are not observable and must be quantified |
| Hofstede's cultural dimensions theory [57] | Instead of comparing individuals, it focuses on a country's or society's core tendencies | Utilizes a survey to conduct cultural research |
| TAM [58] | Client behaviour is a characteristic that must be measured using subjective measures such as behavioural intention and interpersonal influence | Disputed heuristic usefulness, limited explanatory and predictive capability, triviality, and lack of practical utility |
| UTAUT2 [59, 60] | Trust, an important construct, is missing from the theory | Due to the large number of constructs, this is a very complicated model; moderators increase model complexity while simultaneously increasing explanatory power |
| Agency theory [61] | Describes behaviour where the main objectives are aligned, although perceived goals are not the same | Managers or agents in the corporation have an "opportunistic view" |
| Social contract theory [36] | Conflicting norms/duties and overlapping spaces | Harmful to the state's safety and security |
| Social presence theory [62] | Focuses only on the way beings perceive their presence | Oversimplified views of social influence on individuals |
| Social response theory [63] | Assumes that environmental changes will immediately lead to changes in people | Defining conduct purely in terms of nature or nurture is restrictive; attempts to do so undervalue the complexity of human behaviour |
| Social cognitive theory [64] | Minimizes emotional responses and ignores biological differences and hormonal responses | Places too much emphasis on the situation and not enough on a person's inner characteristics |
| Privacy calculus theory [65] | Explicitly focuses on the individual | The psychological mechanism that underpins the trade-off is frequently viewed as a rational and deliberate decision-making process |
| Protection motivation theory [66] | The most effective behaviour modification comes from strong fear appeals and high-efficacy messages | Focuses too much on behaviour and attitudes |
4 Challenges and Open Issues in Online Information Privacy

This section presents open issues in online information privacy that were noted by [67].

• Hidden cookies: Cookies are small data files that are saved on an individual's computer when he/she visits a website. Sites use this information to personalize the user's browsing experience and remember the device the next time that particular page is accessed. The individual's information ends up in the hands of third parties, who might use it any way they like without the user's approval.
• Unsecured location sharing: An individual's cell phone is the foremost spying device, because it is used to post and share information on social network sites that reveals the user's location. Many people today use a variety of location-bearing devices such as smart watches, Google Glass, or car trackers that show their whereabouts.
• Data never forgets individuals: Tagging and posting pictures online might feel fun. However, it aids in the creation of a facial recognition database, making it increasingly difficult for anyone to avoid detection. One of the world's largest facial recognition databases is created by social networking sites, and the massive quantity of images uploaded to these sites is one of the most serious online privacy concerns surrounding this technology.
5 Study Limitations and Future Implications

Similar to other studies, this study also has numerous limitations. The first limitation is that it focused only on well-established theories that were empirically tested in the literature, leaving out additional key references. Other studies carried out recently have identified new theoretical foundations in this field that aid in the analysis, explanation, as well as prediction of online privacy behaviours. Secondly, the study solely considers theories of privacy at the "individual level", ignoring the organizational, social and other levels of privacy conceptions. The study's third limitation is that it focuses solely on privacy risks when resolving uncertainties in online information privacy. Additional risk considerations, security concerns for example, have been identified in the literature and can influence consumers' risk assessment and decision processes. This research also has implications for future work and practice. In terms of research, it identifies various theories in this field, all of which deal with privacy behaviour, implying that future studies ought to combine theories to better comprehend that behaviour. In addition to acknowledging the current theoretical foundation, the study also identifies a number of unexplored areas that require further investigation. Firstly, security has not been sufficiently addressed in the privacy literature. The study recommends that this variable be included in future work, particularly in surveys on the utilization of websites when people are unsure whether their personal information will be secured or not. Future studies should be performed to identify or develop extra benefits for customers, as studies demonstrate that people are willing to overlook privacy concerns in exchange for benefits in specific circumstances. Accordingly, more theories are required to provide a clear definition of online users' benefit preferences, as well as to allow effective benefit design. The review has practical implications as well. It demonstrates that online businesses must take various steps to address users' privacy concerns. These incorporate procedural fairness, which is achieved through fair information practices (FIP) and privacy policies; social presence, which is achieved through better communication channel design; social responsiveness, which is achieved by means of building relationships; and the social contract, which is achieved by means of building trust. Customers' privacy concerns can be efficiently alleviated by these initiatives, which are interconnected and mutually beneficial. As a result, online businesses should make these investments to meet consumers' privacy concerns. The study emphasizes the necessity of information boundary management and coping methods for customers, as both are necessary antecedents to online information privacy. Individuals' self-protective habits have a significant impact on their privacy, especially in nations where personal information is not universally protected. As a result, assisting people in developing proper coping skills for dealing with privacy concerns or issues would have a big impact on internet society.
6 Conclusion

In this review, twelve established theories and the ways they have been used in online information privacy were illustrated. How these theories relate to each other was noted and summarized. The study indicated gaps in the existing literature on online information
privacy, such as the inconsistency of research findings in the field of online information privacy, inadequate research on privacy issues, the lack of studies explaining why women and men use information technology differently, a literature focus restricted to trust and perceived risk in privacy concerns, the absence of a universal consensus on the definition of privacy even though similarities are present, and the scarcity of studies focused on security and privacy-related issues in cloud storage. The theories' limitations and criticisms were also indicated in this study. Even though studies regarding online information privacy have been undertaken for more than a decade, many undiscovered areas remain to be explored. This research serves as a platform for more theoretical work in this discipline.

Acknowledgement. This manuscript was prepared by William Ratjeana Malatji. The author would like to thank Tranos Zuva and Rene VanEck for helping with the original draft preparation, as well as the arrangement of the paper.
References

1. Li, Y.: Theories in online information privacy research: a critical review and an integrated framework. Decis. Support Syst. 54(1), 471–481 (2012)
2. Smith, H.J., Milberg, S.J., Burke, S.J.: Information privacy: measuring individuals' concerns about organizational practices. MIS Q. 20, 167–196 (1996)
3. Da Veiga, A., Ophoff, J.: Concern for information privacy: a cross-nation study of the United Kingdom and South Africa. In: Clarke, N., Furnell, S. (eds.) HAISA 2020. IAICT, vol. 593, pp. 16–29. Springer, Cham (2020). https://doi.org/10.1007/978-3-030-57404-8_2
4. Fishbein, M.: A behavior theory approach to the relations between beliefs about an object and the attitude toward the object. In: Mathematical Models in Marketing. Lecture Notes in Economics and Mathematical Systems, vol. 132. Springer, Heidelberg (1976). https://doi.org/10.1007/978-3-642-51565-1_25
5. Yu, L., Li, H., He, W., Wang, F.K., Jiao, S.: A meta-analysis to explore privacy cognition and information disclosure of internet users. Int. J. Inf. Manage. 51, 102015 (2020)
6. Wang, Y., Herrando, C.: Does privacy assurance on social commerce sites matter to millennials? Int. J. Inf. Manage. 44, 164–177 (2019)
7. Lin, X., Wang, X.: Examining gender differences in people's information-sharing decisions on social networking sites. Int. J. Inf. Manage. 50, 45–56 (2020)
8. Ajzen, I.: The theory of planned behaviour. Organ. Behav. Hum. Decis. Process. 50, 179–211 (1991)
9. Fortes, N., Rita, P.: Privacy concerns and online purchasing behaviour: towards an integrated model. Eur. Res. Manag. Bus. Econ. 22(3), 167–176 (2016)
10. Al-Jabri, I.M., Eid, M.I., Abed, A.: The willingness to disclose personal information: trade-off between privacy concerns and benefits. Inf. Comput. Secur. 28, 161–181 (2019)
11. Ha, N.: The impact of perceived risk on consumers' online shopping intention: an integration of TAM and TPB. Manage. Sci. Lett. 10(9), 2029–2036 (2020)
12. Hofstede, G.: Culture's Consequences: International Differences in Work-Related Values. Sage Publications (1984)
13. Cockcroft, S., Rekker, S.: The relationship between culture and information privacy policy. Electron. Mark. 26(1), 55–72 (2015). https://doi.org/10.1007/s12525-015-0195-9
14. Widjaja, A.E., Chen, J.V., Sukoco, B.M., Ha, Q.A.: Understanding users' willingness to put their personal information on the personal cloud-based storage applications: an empirical study. Comput. Hum. Behav. 91, 167–185 (2019)
15. Davis, F., Bagozzi, R., Warshaw, P.: User acceptance of computer technology: a comparison of two theoretical models. Manage. Sci. 35, 982–1003 (1989)
16. Venkatesh, V., Morris, M., Davis, G., Davis, F.: User acceptance of information technology: toward a unified view. MIS Q. 27, 425–478 (2003)
17. Venkatesh, V., Morris, M.G., Davis, G.B., Davis, F.D.: User acceptance of information technology: toward a unified view. MIS Q. 27(3), 425–478 (2003)
18. Chao, C.M.: Factors determining the behavioural intention to use mobile learning: an application and extension of the UTAUT model. Front. Psychol. 10, 1652 (2019)
19. Bu, F., Wang, N., Jiang, B., Jiang, Q.: Motivating information system engineers' acceptance of privacy by design in China: an extended UTAUT model. Int. J. Inf. Manage. 60, 102358 (2021)
20. Mitnick, B.M.: Agency Theory. Wiley Encyclopedia of Management, pp. 1–6 (2015)
21. Peslak, A.: An ethical exploration of privacy and radio frequency identification. J. Bus. Ethics 59, 327–345 (2005)
22. Milne, G.R., Gordon, M.E.: Direct mail privacy-efficiency trade-offs within an implied social contract framework. J. Public Policy Mark. 12(2), 206–215 (1993)
23. Donaldson, T., Dunfee, T.W.: Toward a unified conception of business ethics: integrative social contracts theory. In: Corporate Social Responsibility, pp. 175–207. Taylor and Francis (2017)
24. Gurung, A., Raja, M.: Online privacy and security concerns of consumers. Inf. Comput. Secur. 24, 348–371 (2016)
25. Capistrano, E.P.S., Chen, J.V.: Information privacy policies: the effects of policy characteristics and online experience. Comput. Stand. Interfaces 42, 24–31 (2015)
26. Martin, K.: Understanding privacy online: development of a social contract approach to privacy. J. Bus. Ethics 137(3), 551–569 (2016)
27. Biocca, F., Harms, C., Burgoon, J.: Towards a more robust theory and measure of social presence: review and suggested criteria. Presence 12, 456–480 (2003)
28. Cui, G., Lockee, B., Meng, C.: Building modern online social presence: a review of social presence theory and its instructional design implications for future trends. Educ. Inf. Technol. 18(4), 661–685 (2013)
29. Zhang, L., McDowell, W.C.: Am I really at risk? Determinants of online users' intentions to use strong passwords. J. Internet Commer. 8(3–4), 180–197 (2009)
30. Kaushik, K., Kumar Jain, N., Kumar Singh, A.: Antecedents and outcomes of information privacy concerns: role of subjective norm and social presence. Electron. Commer. Res. Appl. 32, 57–68 (2018)
31. Ng, M., Coopamootoo, K.P., Toreini, E., Aitken, M., Elliot, K., Van Moorsel, A.: Simulating the effects of social presence on trust, privacy concerns & usage intentions in automated bots for finance. In: 2020 IEEE European Symposium on Security and Privacy Workshops (EuroS&PW), pp. 190–199. IEEE (2020)
32. Huang, J.W., Lin, C.P.: To stick or not to stick: the social response theory in the development of continuance intention from organizational cross-level perspective. Comput. Hum. Behav. 27(5), 1963–1973 (2011)
33. Ananthula, S., Abuzaghleh, O., Alla, N.B., Chaganti, S., Kaja, P., Mogilineedi, D.: Measuring privacy in online social networks. Int. J. Secur. 4(2), 1–9 (2015)
34. Siallagan, H., Rohman, A., Januarti, I., Din, M.: The effect of professional commitment, attitude, subjective norms and perceived behaviour control on whistle blowing intention. Int. J. Civ. Eng. Technol. 8(8), 508–519 (2017)
35. Pensa, R.G., Blasi, G.D., Bioglio, L.: Network-aware privacy risk estimation in online social networks. Soc. Netw. Anal. Min. 9(1), 1–15 (2019). https://doi.org/10.1007/s13278-019-0558-x
36. Luszczynska, A., Schwarzer, R.: Social cognitive theory, pp. 51–225. Fac Health Sci Publ (2015)
37. Bandura, A.: Social Foundations of Thought and Action: A Social Cognitive Theory, p. xiii, 617. Prentice-Hall, Englewood Cliffs, NJ, US (1986)
38. Ratten, V.: A cross-cultural comparison of online behavioural advertising knowledge, online privacy concerns and social networking using the technology acceptance model and social cognitive theory. J. Sci. Technol. Policy Manage. 6, 25–36 (2015)
39. Mini, T.: Privacy Calculus (2017)
40. Kim, D., Park, K., Park, Y., Ahn, J.H.: Willingness to provide personal information: perspective of privacy calculus in IoT services. Comput. Hum. Behav. 92, 273–281 (2019)
41. Jozani, M., Ayaburi, E., Ko, M., Choo, K.K.R.: Privacy concerns and benefits of engagement with social media-enabled apps: a privacy calculus perspective. Comput. Hum. Behav. 107, 106260 (2020)
42. Rogers, R.W.: A protection motivation theory of fear appeals and attitude change. J. Psychol. 91(1), 93–114 (1975)
43. Rogers, R., Cacioppo, J., Petty, R.: Cognitive and physiological processes in fear appeals and attitude change: a revised theory of protection motivation, pp. 153–177 (1983)
44. Stern, T., Kumar, N.: Examining privacy settings on online social networks: a protection motivation perspective. Int. J. Electron. Bus. 13(2–3), 244–272 (2017)
45. Van Bavel, R., et al.: Using protection motivation theory in the design of nudges to improve online security behaviour. Int. J. Hum. Comput. Stud. 123, 29–39 (2019)
46. Manstead, A.S.R.: The benefits of a critical stance: a reflection on past papers on the theories of reasoned action and planned behaviour. Br. J. Soc. Psychol. 50(3), 366–373 (2011)
47. Hackman, C.L., Knowlden, A.P.: Theory of reasoned action and theory of planned behaviour-based dietary interventions in adolescents and young adults: a systematic review. Adolesc. Health Med. Ther. 5, 101 (2014)
48. Taras, V.: Cultural dimensions, Hofstede. In: The International Encyclopaedia of Intercultural Communication, pp. 1–5 (2017)
49. Malatji, W.R., Eck, V., Zuva, T.: Understanding the usage, modifications, limitations and criticisms of technology acceptance model (TAM). Adv. Sci. Technol. Eng. Syst. J. 5(6), 113–117 (2020)
50. Kiwanuka, A.: Acceptance process: the missing link between UTAUT and diffusion of innovation theory. Am. J. Inf. Syst. 3(2), 40–44 (2015)
51. Shachak, A., Kuziemsky, C., Petersen, C.: Beyond TAM and UTAUT: future directions for HIT implementation research. J. Biomed. Inform. 100, 103315 (2019)
52. Panda, B., Leepsa, N.: Agency theory: review of theory and evidence on problems and perspectives. Indian J. Corp. Gov. 10(1), 74–95 (2017)
53. Osei-Frimpong, K., Mclean, G.: Examining online social brand engagement: a social presence theory perspective. Technol. Forecast. Soc. Chang. 128, 10–21 (2018)
54. Rusch, T., Lowry, P.B., Mair, P., Treiblmaier, H.: Breaking free from the limitations of classical test theory: developing and measuring information systems scales using item response theory. Inf. Manage. 54(2), 189–203 (2017)
55. Mundy, P.: A review of joint attention and social-cognitive brain systems in typical development and autism spectrum disorder. Eur. J. Neurosci. 47(6), 497–514 (2018)
56. Dienlin, T., Metzger, M.J.: An extended privacy calculus model for SNSs: analyzing self-disclosure and self-withdrawal in a representative US sample. J. Comput. Mediat. Commun. 21(5), 368–383 (2016)
57. Fong, C.J., et al.: When feedback signals failure but offers hope for improvement: a process model of constructive criticism. Thinking Skills Creativity 30, 42–53 (2018)
Application of the Method of Planning Projects in Smart Cities Michaela Kollárová(B) University of Žilina, Univerzitná 8215/1, 010 26 Žilina, Slovak Republic [email protected]
Abstract. The aim of the paper is to plan a project from the modification of the application to the integration of a smart parking solution for 300 parking spaces in the center of Stupava. In recent years, cities have been under increasing pressure from digitalization and the associated process of implementing information and communication technologies into mainstream activities (education, healthcare, communication and services to citizens, safe city, smart transport, smart energy, etc.). The implementation of smart solutions on the territory of the city becomes part of urban strategies and concepts (e.g. in the form of economic and social development programs) or the upcoming concepts of smart cities. A smart city is a complex ecosystem consisting of several action elements such as city dwellers, smart transport, smart energy, information and communication technologies, smart environment, local government, etc. In recent years, we have seen a radical increase in the density of these elements in cities. According to the United Nations, almost 70% of the population will live in cities by 2050, and therefore the strategic decisions of a city will be very important when planning and implementing smart solutions to build a safe smart city. A distinctive feature of the present time is the amount of high-quality and timely information that a city can obtain in order to respond and decide correctly. This data is becoming more and more accessible as digitalization increases. The result of the contribution is the determination of the critical path of the smart city project – Stupava.

Keywords: City Security · Information · Smart City
© The Author(s), under exclusive license to Springer Nature Switzerland AG 2023. R. Silhavy and P. Silhavy (Eds.): CSOC 2023, LNNS 722, pp. 68–77, 2023. https://doi.org/10.1007/978-3-031-35311-6_8

1 Introduction

Project planning is an activity carried out at the beginning of project management, which greatly affects the output of the entire project. Project planning means defining the outcome of the project and listing in detail everything that is needed to achieve the desired result in the specified quality, while respecting the timetable and budget. There are many different methods used in planning. One large set consists of methods based on network diagrams or network charts. There are many modifications of these methods that can be used as the case requires. Among the most famous are the CPM and PERT methods. The CPM (critical path method) was first applied in the late 1950s. Currently, it is commonly used in all types of projects, e.g. construction, software development, new product development or research projects. The CPM method uses node-defined network charts. The
nodes of the network graph are conjunctive-deterministic, i.e. each node is realized only when the realization of all activities that enter it (the so-called conjunctive input) is completed, and the realization of any node provokes the realization of all activities that exit from the node (the so-called deterministic output). The CPM method solves the time analysis, i.e. it pursues two basic objectives – the determination of the critical path and the determination of time reserves. The starting point for the time analysis is the activities (i, j) and their durations. It is also possible to enter a planned date for the start or completion of the project. From the specified values, the first possible and latest permissible deadlines are calculated for all activities of the chart and for all nodes of the chart. From the results of these calculations, the time reserves and the critical path are further determined [1]. The PERT (Program Evaluation and Review Technique) method is a program evaluation and control method that originated at about the same time as the CPM method. This method of network analysis, like the CPM method, is used for the time analysis of a project and uses edge-oriented graphs with a deterministic structure. The difference lies in the time valuation of the network graph, which in this case is stochastic; therefore, the PERT method is one of the stochastic methods. The duration of each activity is understood here as a random quantity with a certain probability distribution. Empirically, it has been established that in practice this distribution is best described by the so-called beta distribution, which expresses the variability of conditions well. This is a distribution defined on a closed interval. Its mean value and variance can be determined from three values, which in the case of the PERT method are usually obtained as expert estimates. These three estimated values represent three data points on the duration of the activity, namely:

• optimistic estimate – represents the shortest possible duration of the activity, assuming that no problems arise (there is no need to repeat anything, every first attempt succeeds),
• most likely estimate (mode) – the most likely value of the duration of the activity under normal conditions (with the most likely occurrence of disturbances and delays),
• pessimistic estimate – represents the longest possible duration of the activity, assuming extremely unfavorable conditions for the implementation of the activity; it is the longest expected duration of the activity.

Estimates of the duration of activities shall be made by professionals who have experience with the activity in question and know the conditions under which it will be carried out. When determining the estimates, only those influences that can be classified as random phenomena are considered – the impact of qualifications and performance, the influence of work organization, the influence of weather in outdoor work, failure rates, etc.
2 Methods

When planning a smart parking project in smart cities, the CPM and PERT methods are used. The design of the activities, the dependencies among actions, as well as the time required, were consulted with a private company that has in its portfolio the construction
of smart parking. Given the complexity of the problem, we will proceed from simplified assumptions. The aim of the assignment was to plan the project from the modification of the application to the integration of a smart parking solution for 300 parking spaces in the center of Stupava. According to the company's estimates, the implementation of such a project would take 173 days, given that this is not only a software modification, but also the installation of hardware devices in parking spaces. When calculating, we abstracted from weather conditions, as well as the technical condition of parking spaces and other conditions that affect the installation. For the CPM method, it is necessary to follow these chronological steps:

a) Identify the individual activities,
b) Identify the relationships between activities,
c) Determine the time required for each activity,
d) Develop a network chart of the individual activities,
e) Determine the feasible times of the activities (the first possible beginnings of activities, the latest permissible beginnings of activities, and the total and free time reserves),
f) Identify the critical path (the minimum time to complete the project).
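To make steps a)–f) concrete, here is a minimal Python sketch (illustrative only, not the company's tool; all variable and function names are ours) that runs the forward and backward passes over the activity data of Table 1 in Sect. 3 and extracts the critical path:

```python
# Minimal CPM sketch: forward/backward passes over the Table 1 activities.
durations = {'A': 3, 'B': 5, 'C': 10, 'D': 12, 'E': 6, 'F': 12, 'G': 10,
             'H': 17, 'I': 20, 'J': 25, 'K': 18, 'L': 15, 'M': 30, 'N': 10}
preds = {'A': [], 'B': ['A'], 'C': ['B'], 'D': ['C'], 'E': ['C'], 'F': ['E'],
         'G': ['D'], 'H': ['G'], 'I': ['F', 'H'], 'J': ['F', 'H'],
         'K': ['I'], 'L': ['F'], 'M': ['J', 'K'], 'N': ['L']}

# Forward pass: earliest start (es) and earliest finish (ef) per activity.
es, ef = {}, {}
for a in durations:                        # A..N is already a topological order
    es[a] = max((ef[p] for p in preds[a]), default=0)
    ef[a] = es[a] + durations[a]
project_end = max(ef.values())             # 125 days for this data

# Backward pass: latest finish (lf) and latest start (ls) per activity.
succs = {a: [b for b in preds if a in preds[b]] for a in durations}
lf, ls = {}, {}
for a in reversed(list(durations)):
    lf[a] = min((ls[s] for s in succs[a]), default=project_end)
    ls[a] = lf[a] - durations[a]

# Activities with zero total reserve form the critical path.
critical = [a for a in durations if ls[a] == es[a]]
print(project_end, '-'.join(critical))     # 125 A-B-C-D-G-H-I-K-M
```

For the Table 1 data, this sketch reproduces the result derived below: a minimum project duration of 125 days along the critical path A-B-C-D-G-H-I-K-M.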
To determine the critical path of a smart city project using the PERT method, the data used in this method are based on an interview with the company's project team. The average (expected) time is calculated according to the formula

a = (o + 4n + p) / 6

where a = average time, o = optimistic estimate, n = most likely estimate, p = pessimistic estimate. The standard deviation and variance are calculated as follows:

S = (p − o) / 6 and R = S²

where S = standard deviation and R = variance.
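As a quick check of these formulas, the following snippet (a sketch; the o/n/p values are taken from the first rows of Table 3) computes the expected duration, standard deviation and variance per activity:

```python
# PERT three-point estimates for a few activities of Table 3 (excerpt).
estimates = {'A': (2, 3, 4), 'B': (3, 5, 8), 'C': (10, 10, 15)}

for act, (o, n, p) in estimates.items():
    a = (o + 4 * n + p) / 6   # expected (average) duration
    s = (p - o) / 6           # standard deviation
    r = s * s                 # variance
    print(f"{act}: a={a:.2f}, S={s:.3f}, R={r:.3f}")
# A: a=3.00, S=0.333, R=0.111   (matches Table 3)
# B: a=5.17, S=0.833, R=0.694
# C: a=10.83, S=0.833, R=0.694
```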
3 Results and Discussion

When using the CPM method, the individual activities, the relationships between them and their durations, as shown in Table 1, are specified first. For the implementation of the solution in a smart city, it is necessary to construct a network graph, which is shown in Fig. 1. Following the customary marking of nodes, we divide each vertex into 3 separate fields. Subsequently, the following steps are carried out:

a) determination of the first possible beginnings of activities, TMi

In the calculation, the relationship TMj = TMi + tij was used, where:

TMj = the earliest possible completion of the activity, i.e. the earliest possible start of the activities at the j-th node.
Table 1. Activities for the implementation of the solution in a smart city.
| N. | Name of activity | Code | Time consumption (days) | Previous activity |
|---|---|---|---|---|
| 1 | Requirements analysis | A | 3 | – |
| 2 | Specification | B | 5 | A |
| 3 | Software architecture | C | 10 | B |
| 4 | Class design | D | 12 | C |
| 5 | Database design | E | 6 | C |
| 6 | Graphic design | F | 12 | E |
| 7 | Implementation of graphic interface | G | 10 | D |
| 8 | Database implementation | H | 17 | G |
| 9 | Implementation of park dots | I | 20 | F and H |
| 10 | Test of park dots | J | 25 | F and H |
| 11 | Integration | K | 18 | I |
| 12 | Installation HW and SW | L | 15 | F |
| 13 | Testing | M | 30 | J and K |
| 14 | Launch system | N | 10 | L |
Fig. 1. Network graph
TMi = the first possible beginning of the activities emanating from node i. tij = the duration of the activity. This relationship is based on a simple rule which states that all activities emerging from a node can begin only after all the activities entering the node have ended [1]. Calculation: TM1 = 0. TM2 = 0 + 3 = 3. TM3 = 3 + 5 = 8. … TM13 = 125. The results of the calculation are shown in Fig. 2.

Fig. 2. Determination of the first possible beginnings of activities

b) determination of the latest possible beginnings of activities, TPi

When determining the latest possible beginnings of activities in the individual nodes, we start from the end node, to which we assign the time of the latest possible beginning, i.e. the duration of the project: TPk = Tn. Calculation: TP13 = 125. TP12 = 125 − 30 = 95. … TP1 = 0. The results of the calculation are shown in Fig. 3.
Fig. 3. Determination of the possible beginnings of activities at the latest
In the next step, an analysis of the reserves was carried out, i.e. the search for the total and free reserves. The total reserve is the amount of time by which an activity's end time can be deferred without affecting the earliest completion time of the project as a whole. Its value is always non-negative and indicates by how many time units the duration can be extended, or the time of the first possible start of the activity shifted, without compromising the deadline of the entire project. This step helps to identify the longest path through the project. The critical path has a total reserve equal to zero:

RC = TPj − TMi − tij.

The free reserve indicates by how many time units the duration of an activity can be extended, or the time of its first possible start postponed, without changing the first possible beginnings of all immediately following activities. This means that the reserves of the following activities must not be affected when drawing on the free reserve of an activity (Table 2):

RV = TMj − TMi − tij.
Table 2. Calculation of the total and free margin
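Continuing the illustrative CPM sketch from Sect. 2 (and reusing its durations, es/ef, ls/lf, succs and project_end variables), the two reserves can be computed per activity; this is a sketch of the same formulas, not the authors' implementation:

```python
# Total reserve: how long an activity may slip without delaying the project.
# Free reserve: how long it may slip without delaying any direct successor.
rc = {a: lf[a] - es[a] - durations[a] for a in durations}
rv = {a: min((es[s] for s in succs[a]), default=project_end) - ef[a]
      for a in durations}
for a in durations:
    print(f"{a}: total reserve RC={rc[a]}, free reserve RV={rv[a]}")
# The critical activities A, B, C, D, G, H, I, K, M all show RC = 0.
```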
Subsequently, it is necessary to mark the so-called critical path, which connects the peaks in which TPi = TMi applies. The critical path runs through the nodes in which the values of TM and TP are the same, i.e. the total reserve in these nodes is zero. In my case, the critical path passes through the peaks A-B-C-D-G-H-I-K-M. The critical path takes 125 days, i.e. the minimum completion time for the entire project; it is shown in red in Fig. 4. The calculation of the critical path by the PERT method is captured in Table 3, where the results of the calculation of the variance and standard deviation are given, and Fig. 5 shows the critical path obtained by the PERT method. This critical path again runs through the peaks A-B-C-D-G-H-I-K-M and takes 129 days, whereas with the CPM method the critical path took 125 days. The difference of 4 days between the methods arises because the CPM method uses a single deterministic estimate of duration, while the PERT method uses three time estimates. The PERT method also allows a probability analysis of the project to be performed, namely, it allows the determination of the probability that the project will be completed at a time T that does not exceed the planned project completion time Tp. As with any stochastic quantity, for the duration of the project T determined using the PERT
Fig. 4. Determining the critical path of a smart city project
Table 3. Calculation of the critical path by the PERT method

(Columns: o/n/p = optimistic/most likely/pessimistic duration estimate; a = expected duration; S = standard deviation; R = variance; TMi/TMj = earliest possible node times; TPi/TPj = latest possible node times.)

| Activity | o | n | p | a | p − o | S | R | TMi | TMj | TPi | TPj |
|---|---|---|---|---|---|---|---|---|---|---|---|
| A | 2 | 3 | 4 | 3.00 | 2 | 0.333 | 0.111 | 0.000 | 3.000 | 0.000 | 3.000 |
| B | 3 | 5 | 8 | 5.17 | 5 | 0.833 | 0.694 | 3.000 | 8.167 | 3.000 | 8.167 |
| C | 10 | 10 | 15 | 10.83 | 5 | 0.833 | 0.694 | 8.167 | 19.000 | 8.167 | 19.000 |
| D | 10 | 12 | 17 | 12.50 | 7 | 1.167 | 1.361 | 19.000 | 31.500 | 19.000 | 31.500 |
| E | 5 | 6 | 8 | 6.17 | 3 | 0.500 | 0.250 | 19.000 | 25.167 | 19.000 | 45.833 |
| F | 12 | 12 | 18 | 13.00 | 6 | 1.000 | 1.000 | 25.167 | 38.167 | 45.833 | 58.833 |
| G | 8 | 10 | 13 | 10.17 | 5 | 0.833 | 0.694 | 31.500 | 41.667 | 31.500 | 41.667 |
| H | 15 | 17 | 20 | 17.17 | 5 | 0.833 | 0.694 | 41.667 | 58.833 | 41.667 | 58.833 |
| I | 20 | 20 | 25 | 20.83 | 5 | 0.833 | 0.694 | 58.833 | 79.667 | 58.833 | 79.667 |
| J | 25 | 25 | 35 | 26.67 | 10 | 1.667 | 2.778 | 58.833 | 97.167 | 58.833 | 97.167 |
| K | 15 | 18 | 18 | 17.50 | 3 | 0.500 | 0.250 | 79.667 | 97.167 | 79.667 | 97.167 |
| L | 13 | 15 | 15 | 14.67 | 2 | 0.333 | 0.111 | 38.167 | 52.833 | 58.833 | 118.333 |
| M | 30 | 30 | 40 | 31.67 | 10 | 1.667 | 2.778 | 97.167 | 128.833 | 97.167 | 128.833 |
| N | 9 | 10 | 14 | 10.50 | 5 | 0.833 | 0.694 | 52.833 | 128.833 | 118.333 | 128.833 |
Fig. 5. Determination of the critical path by the PERT method
method, there is a likelihood that its value will be smaller or larger (the project will be shorter/longer). I verified the probability calculation, i.e. the probability that the project will be completed in 125 days, through an online program. The result is shown in Fig. 6.
Fig. 6. Verification of the likelihood of project completion for the 125-day duration of the project through AtoZmath.com
Table 4. Calculation of probability for time estimates

| Tp | T | Tp − T | √R | (Tp − T)/√R | P (%) |
|---|---|---|---|---|---|
| 125 | 128.83 | −3.833 | 2.82 | −1.358 | 8.7 |
| 130 | 128.83 | 1.167 | 2.82 | 0.413 | 65.9 |
| 135 | 128.83 | 6.167 | 2.82 | 2.184 | 98.5 |
| 173 | 128.83 | 44.167 | 2.82 | 15.642 | 100 |
If the probability value is less than 25%, the completion date is at risk and measures must be taken to better secure all work on the path to the node. Probability values from 25% to 60% indicate good security of the work; meeting the deadline is realistic. Values above 60% indicate an unnecessary waste of resources and a real possibility of shortening the deadline (Table 4).
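Under the normal approximation that PERT relies on, these probabilities can also be reproduced without the online calculator; the following sketch sums the critical-path variances from Table 3 and evaluates the standard normal CDF (the values agree with Table 4 up to rounding):

```python
from math import erf, sqrt

mean = 128.833                            # expected project duration (Table 3)
# Variances R of the critical-path activities A, B, C, D, G, H, I, K, M:
var = 0.111 + 0.694 + 0.694 + 1.361 + 0.694 + 0.694 + 0.694 + 0.250 + 2.778
sigma = sqrt(var)                         # about 2.82

def p_done_by(tp):
    z = (tp - mean) / sigma               # standardized deviation
    return 0.5 * (1 + erf(z / sqrt(2)))   # standard normal CDF

for tp in (125, 130, 135, 173):
    print(tp, round(100 * p_done_by(tp), 1))  # ~8.7, 66.0, 98.5, 100.0 %
```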
4 Conclusion

The aim was to describe project planning methods using network graphs, demonstrate the application of selected methods on a specific project, and then compare the methods. In the literature on project management, one can find descriptions of various planning methods based on network charts; most often these are the CPM and PERT methods. At the same time, the CPM method is the main representative of methods using edge-defined network charts. The PERT method is mostly used only as a complement to deterministic methods, for determining the durations of activities in the absence of a sufficient basis for the direct determination of duration. For this illustrative example, the company estimated the implementation time at 173 days; however, according to the CPM and PERT methods, the execution time is 48 days (CPM) and 44 days (PERT) shorter. The use of critical path analysis in management processes allows managers to focus attention on the main activities, forces them to logically compile operations in projects, makes it easier for them to manage the division of labor, allows them to draw up work schedules for individual workplaces following the overall project, and allows them to establish methodologies for the activities of workplaces and individuals. A prerequisite for the successful application of critical path analysis is a system with a sufficient level of organization, discipline and a supply of resources capable of implementing the planned dates of operations. Analysis of the critical path gives the controller a picture of the activities that largely limit the completion time of the task as a whole, and is one of the methods of finding the main link. By shortening activities that lie on the critical path, the process as a whole can be optimized by eliminating time reserves and downtime and by making full use of labor, machinery and equipment.
References

1. Máca, J., Leitner, B.: Operational Analysis I, Deterministic Methods of Operational Analysis. FŠI ŽU in Žilina
2. Hudáková, M.: Management Methods and Techniques. Edis – ŽU Publishing House, Žilina (2011)
3. Leitner, B.: Multicriterial Decision-Making (Decision-Making Analysis). FBI ŽU in Žilina
4. Šimák, L., et al.: Terminological Dictionary of Crisis Management. FŠI ŽU in Žilina, 44 p. ISBN 80-88829-75-5
5. Kašpar, V.: Selected Methods of Operational Analysis in Military Transport and Military Construction. FŠI ŽU in Žilina
6. Mika, V.T.: Management. Introduction to the Management of the Organization in Conditions of Risk and in Crisis Situations. EDIS – ŽU Publishing House in Žilina, 178 p. ISBN 978-80-554-0760-9
7. Zukhovkij, S.I., Radchik, I.A.: Mathematical Methods of Network Analysis, 2nd edn. SNTL, Praha (1973), 291 pp. ISBN 04-005-73
8. Safe Cities Index 2019: Urban security and resilience in an interconnected world. https://safecities.economist.com/wp-content/uploads/2019/08/Aug-5-ENG-NEC-Safe-Cities-2019-270x210-19-screen.pdf
9. AtoZmath: Operational research calculator. https://cbom.atozmath.com/CBOM/PertCPM.aspx
The Concept of a Decision Support System in the Management of Treatment and Accompaniment of the Patient with Bronchopulmonary Diseases

D. R. Bogdanova1(B), N. I. Yusupova1, and R. Kh. Zulkarneev2

1 Ufa University of Science and Technology, Ufa, Russia
[email protected] 2 Bashkortostan State Medical University, Ufa, Russia
Abstract. The article describes the development of the concept of a Decision Support System (DSS) for managing the treatment and accompaniment of patients with bronchopulmonary diseases. The main purpose of the DSS is to improve the efficiency of the processing and use of medical data for the transition to personalized medicine and high-tech health care using advanced information technology methods, on the example of respiratory diseases. The proposed concept of a DSS for the treatment and accompaniment of patients with bronchopulmonary diseases is distinguished by the integration, in the implemented modules of the system, of semantic analysis of the semi-structured data of clinical guidelines and patients' case histories; methods for the automated recognition of specific functional diagnostic data of patients; a knowledge base built on an ontological approach for the semantic description of the poorly formalized subject area of bronchopulmonary disease treatment; and multi-agent technology for modeling the interaction between a doctor and a patient to form a personalized approach to the trajectory of diagnosis and treatment. The results of an analysis of the opportunities of artificial intelligence tools for decision support in the process of treating and accompanying the patient are presented. The mathematical statement of the problem of managing the process of treatment and accompaniment of a patient with bronchopulmonary diseases is formulated. A scheme of decision support in the management of the treatment and accompaniment of a patient with bronchopulmonary diseases was developed. At the stage of information system design, system models in IDEF notation were developed for the realization of decision support in the management of the treatment and accompaniment of the patient, which serve as a basis for the software development of the proposed DSS. #CSOC1120.

Keywords: decision support system · treatment and accompaniment of the patient · artificial intelligence technologies · bronchopulmonary diseases
© The Author(s), under exclusive license to Springer Nature Switzerland AG 2023. R. Silhavy and P. Silhavy (Eds.): CSOC 2023, LNNS 722, pp. 78–89, 2023. https://doi.org/10.1007/978-3-031-35311-6_9

1 Introduction

Today, there are systems created using computer technology to help the doctor make a decision based on the available symptoms and analyses – decision support systems (DSS). They are based on methods of data analysis and artificial intelligence in the
diagnosis, treatment and maintenance of a patient with bronchopulmonary diseases and more. Such systems make it possible to facilitate the work of the employees of medical organizations, reduce the likelihood of an incorrect diagnosis and increase the efficiency of the process of providing medical services. The main purpose of a DSS is to increase the efficiency of processing and using medical data for the transition to personalized medicine and high-tech healthcare using advanced information technology methods. This will be presented on the example of respiratory diseases, which were chosen as the object of study due to their high socio-economic significance, the active development of the fundamental research base of pulmonology and the developed structure of international and national clinical guidelines. The emergence of new intelligent technologies necessitates the development of the DSS concept. This will make it possible to achieve more effective interaction between the doctor and the patient, to form a better trajectory for disease diagnosis and treatment, to interactively assess the quality of life of the patient, and also to generate recommendations and monitor the dynamics of the patient's condition using knowledge bases and semantic analysis technologies. This article proposes the concept of a decision support system for managing the treatment and accompaniment of the patient with bronchopulmonary diseases based on the use of special technologies for processing semi-structured and poorly formalized data. In the first chapter, there is a problem description and the problem statement. The second chapter discusses known approaches to the organization of DSS in medicine. The third chapter of the article outlines the proposed approach to the development of a DSS for the treatment and maintenance of patients. The fourth chapter provides a mathematical formulation of the problem of managing the process of treating and accompanying patients. The fifth chapter shows the development of system models of the DSS using the IDEF notation. The sixth chapter discusses the results of the implementation of the proposed approach.
2 Problem Description and Problem Statement

The development of a concept for building intelligent decision support systems for diagnosing, treating and accompanying a patient using data analysis technologies and artificial intelligence methods is necessary to improve the efficiency of medical data processing, reduce the risk of medical error and improve the patient's quality of life, in order to implement the transition to personalized medicine and high-tech healthcare. A feature of the subject area of bronchopulmonary diseases is the specificity of the analysis of the semi-structured data needed to support decision-making in diagnosis and treatment. X-ray images, case histories, disease symptoms and clinical recommendations presented in a descriptive form require the use of special technologies for processing semi-structured and poorly formalized data, such as ontological modeling and semantic data analysis. The main objective of the study is to develop a DSS concept based on data analysis methods and artificial intelligence for the diagnosis, treatment and support of a patient with bronchopulmonary diseases, which will make it possible to organize effective interaction between a doctor and a patient.
3 Known Approaches to the Organization of Decision Support in Medicine

The issues of developing, researching and implementing decision support systems in medicine, in its various aspects, are raised in the following works.

The authors of [1–4], analyzing Russian and foreign sources, argue that modern medicine is actively adopting decision support systems (DSS). They work on identifying the main obstacles to the creation of DSS for medicine, present possible approaches to overcoming conceptual barriers, and propose a comprehensive solution to the problem. One of these works considers a hybrid model of a wide-class DSS for medicine, whose results can be used by IT developers to build a DSS based on the scientific and empirical components of medical knowledge.

In [5], the author considers intellectual support of a doctor at the stage of making diagnostic and therapeutic decisions in order to reduce the likelihood of errors. The relevance of the topic is explained by the need to improve decision support procedures at the stage of making a preliminary diagnosis. As a method for solving this problem, a model based on association rules was chosen; it forms the basis of the mathematical software of the decision support system and allows a conclusion about a preliminary diagnosis to be drawn with a high degree of certainty.

In [6], the author substantiates the prospects of creating decision support systems in medicine, analyzes the feasibility of their mass implementation and the principle of their operation based on a medical-digital system for human capital management, and gives examples of clinical DSS in Russia and abroad based on 4-P medicine.

In [7], the author notes that a large number of intelligent systems have recently appeared that are used to support medical decision-making under the label of "artificial intelligence in medicine". He concludes that in some cases the declared characteristics of the software (sensitivity, specificity) are demonstrated only in the "good hands" of the developers, on the data that underlie the software; in real clinical situations the claimed performance is often not achieved, so the opinion of the clinical community that is supposed to use such AI-based solutions is not always favorable. The author considers various types of errors that can be fatal in medical clinical decision-making. Thus, when developing AI-based solutions, it is important for both developers and users to take these aspects into account.

The article [8] discusses the application of software and information solutions that form a comfortable working environment for the doctor. According to the author, due to the great complexity and insufficient knowledge of diseases, the large amount of constantly updated knowledge, and often limited resources, it is extremely important to support decision-making with modern computer technologies. The author believes that digital clinical decision support systems can improve the diagnosis and treatment of diseases, reduce the frequency of erroneous and suboptimal decisions, and help individualize therapeutic programs. The article also states that a clinical decision support system is most effective when implemented as a mobile application, which allows the doctor to use its tools anywhere and at any time.
In [9], the author considers topical tasks of healthcare informatization and reveals the role of information technologies in the development of preventive medicine and medical decision support systems. In his view, information technology combined with the latest hardware will play an important role in the development of preventive medicine, with preventive monitoring systems and decision support systems among the first priorities. He also argues that healthcare informatization projects of the Russian Federation should include priority areas related to the development and implementation of medical decision support systems as part of the automated workstations (AWS) of medical personnel, and that modern data mining technologies can form the basis of such developments.

In [10], the author, using a large-scale sample of American hospitals located in different medical regions, empirically investigates how the implementation of a CDSS by a hospital and by neighboring hospitals affects the quality of medical care. The author found that the introduction of a CDSS significantly reduces hospital readmission rates for patients with heart failure, acute myocardial infarction and pneumonia. A regional spillover effect was also identified: the implementation of CDSS by neighboring hospitals in the same medical region also reduces a hospital's readmission rates. Such effects become more significant with electronic data interchange, which facilitates smooth and error-free communication between hospitals. The results offer theoretical and managerial insights for health care researchers and practitioners.

In [11], the author presents a model of a decision support system for making a diagnosis in medicine. The knowledge base of the expert system is a table of values based on the "absence / presence" of symptoms. A method of searching for the most probable disease with a custom vector specified in the body of the function was implemented programmatically. The work also considers and tests an approach for the case of incomplete input data and a more complex knowledge base, introducing a symptom weight coefficient into the symptom complex of diseases.

In [12], the author notes that CDSS can be classified into information-reference and intelligent systems; the latter can be divided into systems that model reasoning and systems that simulate it. Modeling systems are based on formalized expert knowledge, while simulating systems are based on models built by various methods of multivariate data analysis, including machine learning. On this basis, a CDSS should be considered a medical technology. Therefore, its development should be followed by an assessment of its analytical (technical) validity and by clinical validation, during which evidence of the effectiveness of such systems in improving patient outcomes, and of their safety, should be obtained. Only after obtaining such evidence can a clinical and economic analysis be carried out to substantiate the economic feasibility of using the CDSS.

In [13], the author gives a brief review of existing CDSS and shows the relevance of their application to the complex problem of choosing optimal medical equipment.
The author showed that the choice of the optimal model of medical equipment will ensure rational technical equipment and re-equipment of
healthcare institutions, which, according to the author, will ultimately lead to significant cost savings and an increase in the efficiency of the use of medical equipment.

The article [14] proposes a multi-stage algorithmic process for creating a CDSS based on a systematic approach. The author built a model for the development of a medical CDSS in the form of an IDEF0 diagram, a structural model of a CDSS for identifying the severity of a patient's condition, and a model of the medical decision-making process in the form of a cycle of sequential procedures. The author proposes integrating statistical packages and databases to create the CDSS.

In [15], the author outlines the features, application issues and prospects for the development of decision support systems in clinical practice and in the educational process, and considers the specific characteristics of the clinical problem area that require attention. In particular, attention is drawn to the self-learning effect inherent in knowledge-based systems. The author presents the approaches and systems used in the educational process, including "critical" systems, intelligent simulators and learning environments, including those that operate remotely.

Work on applying information technologies (including intelligent ones) to the processing of medical data is actively carried out in various countries and in various areas. However, a methodological basis for intelligent decision support in the diagnosis, treatment and further support of the patient that combines a variety of intelligent technologies in a single methodology has not yet been sufficiently developed. Therefore, this problem is fundamental, and its solution is relevant, scientifically and practically significant.
4 The Proposed Approach to the Development of DSS in the Treatment and Maintenance of a Patient with Bronchopulmonary Diseases

A CDSS concept is proposed that is based on methods of data analysis and artificial intelligence in the diagnosis, treatment and follow-up of a patient with bronchopulmonary diseases. It makes it possible to organize effective interaction between the doctor and the patient, form the best trajectory for diagnosis and treatment, assess the patient's quality of life, generate recommendations and monitor the dynamics of the patient's condition in the future. Personalization of treatment and patient support provides for automated comprehensive accounting of the patient's condition, described by semi-structured and poorly formalized information. As part of the information management of a medical organization, the problem of decision support in managing the treatment and accompaniment of the patient is solved on the basis of a comprehensive consideration of the patient's condition, including the emotional state during the treatment process. The information management process includes [16]: definition of information needs; collection and creation of information; analysis and interpretation of information; organization and storage of information; access to information and its dissemination; use of information. Below we consider the model of the decision support process in more detail (Fig. 1).
Fig. 1. Scheme of the decision support process.
To increase the efficiency of the treatment and accompaniment process, the solutions selected by the decision maker are sent to the input of this process, and the results of the process are sent to the CDSS at the output. During treatment, the patient's condition changes, and information about it is taken into account in the DSS. The DSS includes a user interface, a module for generating recommendations, a module for calculating indicators, a module for working with data, a database, and a knowledge base. Through the user interface, the decision maker forms requests and receives recommendations for improving the treatment process. The user interface sends the decision maker's requests to the module for generating recommendations, the module for calculating indicators, and the module for working with data. Upon receiving a request, these modules find the necessary information in the database and in the knowledge base and record there the information obtained as a result of the treatment process. The knowledge base is based on the ontological model and is filled with expert information about the subject area and system states. It also receives information from the DSS about problematic situations in the course of treatment. This concept is distinguished by the use of artificial intelligence technologies: semantic analysis of semi-structured data from clinical recommendations and patient histories; methods for automated recognition of specific functional diagnostic data of patients; an ontological approach to creating the system knowledge base for a semantic description of the poorly formalized subject area of bronchopulmonary disease treatment; and multi-agent technologies for modeling the interaction between the doctor and the patient in forming the diagnosis and treatment trajectory for a personalized approach to treatment.
A feature of the system is the implementation of the following modules:
• The patient's personal profile, which includes objective and subjective indicators of the patient's condition and will later make it possible to analyze personal clinical risks, choose the best diagnostic and treatment trajectory, and develop lifestyle recommendations;
• The knowledge base of clinical recommendations based on the ontological approach, determined by dependency trees built between the words in a sentence, taking into account the order of rule application (which leads to the maximum convolution of the sentence) and the semantic relationships between the concepts of the sentence (which makes it possible to implement rule-based knowledge extraction from clinical guidelines);
• A dictionary of medical terms, supplemented with a section of verb predicates and their synonyms, to implement the method of extracting knowledge in the form of rules from clinical recommendations;
• A multi-agent system that simulates the interaction of medical personnel with a patient during the formation of a diagnostic and treatment trajectory. It implements formalized target functions of the agents and of the system as a whole, which make it possible to evaluate performance numerically and compare alternative diagnostic and treatment trajectories;
• A module for analyzing individual and group risks in relation to the stages of the treatment process and its participants, based on determining a priori risks, the risks of diagnostic and therapeutic measures, and a posteriori risks associated with research results and the patients' quality of life. This will make it possible to assess their significant impact on the results of a personalized approach to treatment;
• A mobile application to accompany the patient and monitor his dynamics.
5 Setting the Task in Managing the Process of Treatment and Accompanying a Patient with Bronchopulmonary Diseases

The decision support system for the treatment and accompaniment of a patient with bronchopulmonary diseases will make it possible to personalize the treatment process, organize effective interaction between the doctor and the patient, and reduce the risk of medical error. In the general case, the mathematical formulation of the problem of managing the treatment and accompaniment of a patient with bronchopulmonary diseases can be represented as follows.

Let $X(t) = (P_i(t), T_k, Y_z, H_y)$ be the patient state vector, where $P_i(t)$ are the patient parameters: clinical laboratory diagnostics, X-ray images, medical histories and symptoms of the disease presented in a descriptive form, which require special technologies for processing semi-structured and poorly formalized data; $T_k$ are the requirements for the treatment of bronchopulmonary diseases, described by the clinical guidelines regulating the considered treatment process;
$Y_z$ is the safety level of the treatment process, a vector that determines the acceptable values of the risks of the considered treatment process; $H_y$ is the patient's medical history. The patient's condition changes due to disturbing influences (external factors), so the disturbing effect in the treatment process is a vector function $F(t) = \{f_1(t), f_2(t), \dots, f_l(t)\}$. Let $U(t) = \{u_1(t), u_2(t), \dots, u_m(t)\}$ be the control action vector of the treatment process. It is required to find control actions $U$ for a given state $X$ that ensure the effectiveness of the treatment process in accordance with a given control criterion $Q$. The minimum risk of medical error can be used as the control criterion.
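This formulation can be mirrored in code. The following sketch is purely illustrative: all type, field and function names are our assumptions and not part of the software described in the article; it only shows how the state vector X(t) and the selection of a control U minimizing the criterion Q might be represented.

```python
from dataclasses import dataclass
from typing import Callable, Dict, List, Sequence

@dataclass
class PatientState:
    """State vector X(t) = (P_i(t), T_k, Y_z, H_y); names are illustrative."""
    parameters: Dict[str, float]   # P_i(t): clinical and laboratory indicators
    requirements: List[str]        # T_k: clinical-guideline requirements
    risk_limits: Dict[str, float]  # Y_z: acceptable risk levels of the process
    history: List[str]             # H_y: patient medical history

def choose_controls(state: PatientState,
                    candidates: Sequence[dict],
                    risk: Callable[[PatientState, dict], float]) -> dict:
    """Select the control action U minimizing the criterion Q
    (here Q is taken as the risk of medical error, as in the article)."""
    return min(candidates, key=lambda u: risk(state, u))
```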
6 Designing CDSS in the Management of Treatment and Accompaniment of the Patient with Bronchopulmonary Diseases

To implement the proposed models, it is necessary to carry out system modeling. The SADT methodology and IDEF notation were chosen as the modeling tools. At the stage of designing the decision support system for managing the treatment and accompaniment of the patient with bronchopulmonary diseases, a functional model of the decision support process was developed based on a comprehensive consideration of the patient's condition. The developed functional model is shown in Fig. 2.
Fig. 2. Functional model of the decision support process in the management of treatment and patient support.
Decision-making support in the management of treatment and patient accompaniment consists of the following stages: monitoring of indicators of the patient’s condition; analysis of the patient’s condition, assessment of the compliance of the treatment process with the requirements of the patient and evaluation of the effectiveness of treatment; development of an alternative to improve the efficiency of the treatment process based
on the analyzed indicators of the patient’s condition; evaluation of the developed alternatives and selection of the best of them; development of recommendations to improve the efficiency of the treatment process based on the chosen alternative. Figure 3 shows the proposed information model of the process of providing medical services in the treatment of bronchopulmonary diseases.
Fig. 3. Information model of the process of providing medical services.
The model highlights the main entities involved in the treatment of bronchopulmonary diseases: the doctor and the patient (both child entities of the entity Individual); the medical record and medical history (which characterize the patient's disease); the procedures prescribed by the doctor; the medical and technical restrictions and the methods determining the schedule for the provision of medical services; the criterion of schedule effectiveness; as well as the provision of medical services, treatment rooms, their work schedule and calendar. The given system models formed the basis for the developed software prototypes of the proposed decision support system for the treatment and accompaniment of patients with bronchopulmonary diseases.
7 Implementation of the Proposed Approach to Decision Support in the Treatment and Accompaniment of Patients

To test the proposed DSS concept in the management of treatment and follow-up of a patient with bronchopulmonary diseases, software was developed for a decision support system implementing the functions of registering patients and accompanying a doctor's appointment. Figure 4 shows the interfaces of the developed prototype for treatment and patient support in the proposed DSS. The analysis of the developed software confirmed the relevance and necessity of creating a DSS that makes it possible to achieve effective interaction between the doctor and the patient, to form the best trajectory for the diagnosis and treatment of diseases, to interactively assess the patient's quality of life, and, with the help of knowledge bases and semantic analysis technologies, to generate recommendations and monitor the dynamics of the patient's condition.
Fig. 4. Interfaces of the main stages of treatment and patient support in the developed DSS.
8 Conclusion

Decision support systems are actively used in many areas, in particular in medicine, since by correctly analyzing the available data about the patient, his disease and the methods of its treatment they can help save a person's life and reduce the risk of medical errors.
An analysis of the capabilities of artificial intelligence tools for decision support in the process of treating and accompanying a patient with bronchopulmonary diseases, with a comprehensive consideration of the patient's condition, showed their applicability for decision support systems in medicine. A mathematical statement of the problem of managing the process of treating and accompanying a patient with bronchopulmonary diseases was formulated. A landscape of medical organization processes (a process map) and a decision support scheme were developed. At the design stage of the information system, a functional decision support model was developed for managing the treatment and support of the patient, which serves as the basis for developing the software of the proposed DSS. The proposed concept of a DSS for the treatment and accompaniment of patients with bronchopulmonary diseases is distinguished by: the integration, in the implemented system modules, of semantic analysis of semi-structured data from clinical recommendations and patient histories; methods for automated recognition of specific functional diagnostic data of the patient; a system knowledge base based on the ontological approach for the semantic description of the poorly formalized subject area of bronchopulmonary disease treatment; and multi-agent technologies for modeling the interaction between a doctor and a patient to form a personalized approach to the diagnosis and treatment trajectory. To implement the proposed concept, at the next stages it is planned to develop the data analysis and artificial intelligence models and methods necessary for organizing effective interaction between a doctor and a patient, forming the best trajectory for diagnosis and treatment, assessing the patient's quality of life, and monitoring the dynamics of the patient's condition over time.

Acknowledgments. The study was supported by the Russian Science Foundation, grant 22-19-00471.
References
1. Malykh, V.: Decision support systems in medicine. Program Syst. Theor. Appl. 2(41), 155–184 (2019)
2. Kushmeleva, A.: Decision support systems in medicine. In: Berestneva, O.G., Spitsyna, V.V., Trufanov, A.I., Gladkova, T.A. (eds.) Proceedings of the VI International Scientific Conference Information Technologies in Science, Management, Social Sphere and Medicine, pp. 42–44. National Research Tomsk Polytechnic University, Tomsk, Russia (2019)
3. Efimenko, I., Khoroshevsky, V.: Intelligent decision support systems in medicine: a retrospective review of the state of research and development and prospects. In: Proceedings of the International Conference Open Semantic Technologies for Designing Intelligent Systems, pp. 251–260. Belarusian State University of Informatics and Radioelectronics, Minsk, Belarus (2017)
4. Gusev, A., Zarubina, T.: Support for making medical decisions in medical information systems of a medical organization. Doct. Inf. Technol. 2, 60–72 (2017)
5. Bulgak, D.: Decision support system for the primary diagnosis of a disease. Step Sci. 2, 119–123 (2018)
6. Kuznetsov, P., Kakorina, E., Almazov, A.: Medical decision support systems based on artificial intelligence - a strategy for the development of personalized medicine for the next stage. Therapist 1, 48–53 (2020)
7. Shaderkin, I.: Weaknesses of artificial intelligence in medicine. Russ. J. Telemedicine E-Health 2, 50–56 (2021)
8. Belyalov, F.: Opportunities and prospects of clinical decision support systems. Clin. Med. 11–12, 602–607 (2021)
9. Duke, V.: Information technologies in biomedical research and healthcare. Clin. Lab. Consultation 6(31), 22–27 (2009)
10. Yongjin, P., Youngsok, B., Juhee, K.: Decision support system and hospital readmission reduction: evidence from U.S. panel data. Decis. Support Syst. 159, 113816 (2022)
11. Goncharova, A., Sergeeva, E.: Decision support system in medicine for diagnosing diseases. JT SibAK Rubric Inf. Technol. 1(62), 21–25 (2017)
12. Rebrova, O.: Life cycle of medical decision support systems as medical technologies. Doct. Inf. Technol. 1, 27–37 (2020)
13. Frolova, M., Frolov, S., Tolstukhin, I.: Decision support systems for the problems of equipping medical institutions with medical equipment. Questions Mod. Sci. Pract. 52, 106–111 (2014)
14. Simakov, V., Khalafyan, A.: A systematic approach to the development of medical decision support systems. Izvestiya Vysshikh Uchebnykh Zavedenii, North Caucasian Region, Technical Science 1, 29–36 (2010)
15. Kobrinsky, B.: Decision support systems in healthcare and education. Phys. Inf. Technol. 2, 39–45 (2010)
16. Information Management. Key Concepts in Information and Knowledge Management. https://www.tlu.ee/~sirvir/Information%20and%20Knowledge%20Management/Key_Concepts_of_IKM/information_management.html. Accessed 03 Nov 2022
Geometric Models of Pursuer Avoidance

Dubanov Alexander(B)

Banzarov Buryat State University, 24A, Smolin Street, Ulan-Ude 670000, Russia
[email protected]
Abstract. This article proposes geometric models of target evasion from a pursuer moving uniformly in a straight line. In these geometric models, the pursuer is assumed to have a detection area; circular, sector and angular models of the detection area are considered. The target is initially on the line of sight and tends to leave the zone of probable detection along the optimal trajectory. The minimum distance from the pursuer at which it is still possible to leave the detection area is considered. Note that adaptive behavior on the part of the pursuer was not modeled. Based on the results of the article, animated images were made showing the process of the target evading the pursuer. #CSOC1120.

Keywords: trajectory · target · pursuer · evasion · detection area · pursuit
1 Introduction

The issues of target evasion from a pursuer have been considered in many works; here are some of them [1–5]. In [1], the optimization of target trajectories on the plane and in space was considered. In [2], the problem of planning the trajectory of an object moving at a constant speed on a plane is considered in the case when a risk-threat map is formed by a single sensor. In [3], a speed control law for a moving object (target) was found; the optimal motion trajectory is constructed, and the dependence of the accumulated signal on time is compared with other control laws. In [4], the problem of constructing optimal trajectories of a mobile observer performing angular observations of a target moving uniformly and rectilinearly on a plane is considered. In [5], the problem of planning the optimal trajectory of object evasion from a system of observers was investigated. Based on the analysis of these works, the evasion of a target from a pursuer with a circular, sector or angular detection area was simulated. The pursuer in the models of this article does not take retaliatory actions in response to the target's movement and moves uniformly and rectilinearly. At the beginning of the pursuit process, the speed of the pursuer is directed along the line of sight, that is, towards the target.
It is assumed that if the pursuer were to respond to the target's actions, he would adhere to the chase method. Based on the materials of the article, animated images were made which show the moments when the target evades the pursuer.
2 Materials and Methods

2.1 Circular Detection Area

Consider the pursuer P and the target T on the plane (Fig. 1).
Fig. 1. Target evasion model
We assume that the pursuer P, moving with the velocity vector $V_P$, has a detection area in the form of a circle of radius $R_P$, and that the target T is located at a distance $L_P$ from the pursuer P at the moment the pursuit begins, so that the line (PT) and the vector $V_P$ are collinear. It is required to find the trajectory of the target T along which the target leaves the detection area in the shortest time. To solve this problem, we switch to a local dynamic coordinate system centered at the location of the pursuer P, in which the abscissa axis $E_1$ is directed along the vector $V_P$ and the ordinate axis $E_2$ is perpendicular to $V_P$ (Fig. 2).
Fig. 2. The pursuer’s local dynamic basis
In this formulation, the task is similar to the problem of swimming across a river of width $R_P$ with a current velocity $V_P$ (Fig. 2). If the target speed is $V_T$ and its direction is perpendicular to the banks, then the minimum time to cross the river is

$$T_{\min} = \frac{R_P}{V_T}.$$

The resulting velocity is directed at an angle $\varphi$ to the direction of the target velocity $V_T$ (Fig. 2):

$$\tan(\varphi) = \frac{V_P}{V_T}.$$

The target T will cover a distance $l$ equal to

$$l = R_P \cdot \frac{\sqrt{V_T^2 + V_P^2}}{V_T}. \qquad (1)$$
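These quantities are straightforward to compute. The sketch below is our illustration (function and variable names are assumptions, not the author's code) of the crossing time, course tilt and path length for the circular area:

```python
import math

def circle_evasion(R_P, V_P, V_T):
    """Circular detection area of radius R_P (Sect. 2.1).
    Returns the minimum crossing time T_min, the tilt phi of the
    resulting velocity, and the path length l from Eq. (1)."""
    t_min = R_P / V_T                      # time to cross the band of width R_P
    phi = math.atan2(V_P, V_T)             # tan(phi) = V_P / V_T
    l = R_P * math.hypot(V_T, V_P) / V_T   # Eq. (1); also equals L_T_min below
    return t_min, phi, l

# Example: R_P = 50 and a pursuer twice as fast as the target gives
# l = 50 * sqrt(5) / 1 ≈ 111.8:
# t_min, phi, l = circle_evasion(50.0, 2.0, 1.0)
```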
Figure 3 shows the case when the target T is at the minimum distance $L_{T\min}$ from the pursuer P at which the target's trajectory does not pass through the detection area, provided the velocity vectors are perpendicular.
Fig. 3. Minimum distance to the pursuer.
The line (TQ) of the target's trajectory touches the detection circle at the point K, and the triangles (TQF) and (PKT) are congruent, so that |TQ| = |PT|. From this we conclude that if |TQ| is calculated by formula (1), then the value $L_{T\min}$ is given by

$$L_{T\min} = R_P \cdot \frac{\sqrt{V_T^2 + V_P^2}}{V_T}.$$
As mentioned earlier, the target velocity $V_T$ should be directed perpendicular to the line (PT); the line (TQ) then separates the region in which the trajectory will be noticed by the pursuer from the region in which it will not. Figure 4 shows how the pursuer P with a detection area in the form of a circle of radius $R_P$ moves towards the target T. The target T is located arbitrarily on the plane and is stationary at the beginning of the pursuit. The distance between the points P and T is expressed by formula (1). Figure 4 is supplemented with a link to an animated image [6], which shows the critical deviation with a touch of the detection zone.
Fig. 4. Results of simulation of pursuit with a detection area in the form of a circle
Figure 5 shows the case when the target's trajectory passes through the pursuer's detection area. Figure 5 is supplemented with a link to an animated image [7] showing the passage of the target through the detection zone of the pursuer.
Fig. 5. The target passes through the pursuer’s detection area.
2.2 Sector Detection Area

Consider a problem where the detection area is bounded not only by a circle but also by an angle of magnitude 2α (Fig. 6). In this case, the target T must cross a strip of the plane of width $R_T = R_P \cdot \sin(\alpha)$. The target T will then travel a distance $l$ equal to

$$l = R_T \cdot \frac{\sqrt{V_T^2 + V_P^2}}{V_T} = R_P \cdot \frac{\sqrt{V_T^2 + V_P^2}}{V_T} \cdot \sin(\alpha).$$
Fig. 6. Local dynamic basis of the pursuer with a sector detection area
Figure 7 shows that the angle α of the detection sector of the pursuer P is greater than the angle ϕ of movement of the target T .
Fig. 7. The angle of the pursuer’s detection sector is greater than the angle of the target’s movement.
This figure shows the situation when the trajectory TQ of the target touches the detection sector of the pursuer P. The minimum passage time is

$$T_{\min} = \frac{R_T}{V_T}.$$

The target T will pass a path TQ of length

$$l = R_T \cdot \frac{\sqrt{V_T^2 + V_P^2}}{V_T}.$$

The minimum distance |PT| between the pursuer and the target is that at which the trajectory TQ touches the detection sector of the pursuer, provided that the velocities are perpendicular. The point of contact K is calculated by the formula

$$K = R_P \cdot \begin{pmatrix} \cos(\varphi) \\ \sin(\varphi) \end{pmatrix} = R_P \cdot \begin{pmatrix} V_T / \sqrt{V_T^2 + V_P^2} \\ V_P / \sqrt{V_T^2 + V_P^2} \end{pmatrix}.$$
Figure 8 shows how the target T touches the sector detection area of the pursuer P.
Fig. 8. Sector detection area.
Figure 8 is supplemented with an animated image [8], which depicts the pursuit process when the angle α is greater than the angle ϕ.
Consider the case when the angle α of the pursuer's detection sector is less than the angle ϕ of the inclination of the target's movement; Figure 9 shows such a case. The angle ϕ is determined by the velocity moduli of the pursuer and the target.
Fig. 9. Sector area with a smaller solution angle
It is necessary to construct a tangent line that forms an angle ϕ with the ordinate axis $E_2$; in Fig. 9 this is the straight line (KM). The point Q bounding the detection sector has the coordinates

$$Q = R_P \cdot \begin{pmatrix} \cos(\alpha) \\ \sin(\alpha) \end{pmatrix}.$$

The width of the band to be crossed is

$$R_T = R_P \cdot \sin(\alpha).$$

The distance |QF| is equal to

$$|QF| = R_T \cdot \tan(\varphi) = R_P \cdot \sin(\alpha) \cdot \frac{V_P}{V_T}.$$

The minimum distance $L_{T\min} = |PT|$ at which the target T does not enter the detection zone of the pursuer, provided the velocities are perpendicular, is equal to

$$L_{T\min} = R_P \cdot \cos(\alpha) + R_P \cdot \sin(\alpha) \cdot \frac{V_P}{V_T}.$$
The segment [QT] of the target's trajectory is obtained by parallel translation of the tangent segment [KM] until it is aligned with the point Q.
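The two sector cases can be collected into small helpers. The sketch below is our illustration of the published formulas (names are assumptions); the α ≥ ϕ branch returns the contact point K from the previous subsection, the α < ϕ branch the minimum distance just derived:

```python
import math

def sector_contact_point(R_P, V_P, V_T):
    """Touch point K on the bounding circle when alpha >= phi (Sect. 2.2)."""
    s = math.hypot(V_T, V_P)
    return (R_P * V_T / s, R_P * V_P / s)

def sector_min_distance(R_P, alpha, V_P, V_T):
    """Minimum safe |PT| for the sector area when alpha < phi:
    L_T_min = R_P*cos(alpha) + R_P*sin(alpha)*V_P/V_T."""
    return R_P * math.cos(alpha) + R_P * math.sin(alpha) * V_P / V_T
```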
Figure 10 shows a computer model in a computer mathematics system; the figure depicts a case where the angle α of the detection sector is greater than the angle ϕ. Recall that the magnitude of the angle ϕ is determined by the ratio of the speeds of the pursuer and the target:

$$\tan(\varphi) = \frac{V_P}{V_T}.$$
Fig. 10. A model of the sector detection area in a computer mathematics system.
Figure 10 is supplemented with an animated image [9] showing how the target T avoids detection by the pursuer P with a detection area in the form of an angular sector 2α bounded by a circle of radius $R_P$. The video shows that the target leaves the band of probable detection at the last moment of time, passing through the point Q.

2.3 Angular Detection Area

Consider the pursuit model where the detection area of the pursuer P is an area in the form of an angle of magnitude 2α (Fig. 11). Let us form a local dynamic coordinate system $(E_1 P E_2)$ whose center is located at the position of the pursuer P, the axis $E_1$ is directed along the velocity vector, and the axis $E_2$ is perpendicular to the pursuer's velocity (Fig. 11). In the coordinate system $(E_1 P E_2)$, the resulting velocity $V_T$ of the target T is the sum of the velocity of the medium $u_P$ and the velocity of the target $u_T$ in the world coordinate system: $V_T = u_P + u_T$. Figure 12 shows that the velocity $u_T$ of the target is directed at an angle ϕ to the line perpendicular to the line (PT). If $u_T$ is the velocity modulus of the target, then its projection onto the line (QT) is equal to $u_T \cdot \cos(\alpha - \varphi)$, where (QT) is perpendicular to the line bounding the pursuer's detection area (Fig. 12).
Fig. 11. Angular detection area of the pursuer.
Fig. 12. Decomposition of the target velocity vector
The projection of the velocity $u_P$ onto the line (QT) is equal to $u_P \cdot \sin(\alpha)$. Then the projection onto the line (QT) of the resulting velocity $V_T$ is $u_T \cdot \cos(\alpha - \varphi) + u_P \cdot \sin(\alpha)$, and the time it takes the target T to reach the line bounding the angle is

$$t = \frac{|QT|}{u_T \cdot \cos(\alpha - \varphi) + u_P \cdot \sin(\alpha)}.$$

The exit time from the detection area reaches its minimum when $\varphi = \alpha$, that is, when the velocity $u_T$ of the target is directed along the line (QT). Figure 11 shows exactly such a case: the target exits the detection area at the point W, and the resulting velocity $V_T$ is directed along the straight line (WT).
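The exit time as a function of the course angle ϕ can be evaluated directly; maximizing the denominator confirms the choice ϕ = α. The helper below is our illustration (names are assumptions):

```python
import math

def exit_time(QT, u_T, u_P, alpha, phi):
    """Time for the target to reach the line bounding the angular area."""
    return QT / (u_T * math.cos(alpha - phi) + u_P * math.sin(alpha))

# The denominator is maximal at phi = alpha, so the minimum exit time is
# t_min = QT / (u_T + u_P * math.sin(alpha))
```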
Figure 13 shows the first frame of the output of the program in a computer mathematics system. The pursuer P pursues the target T in the world coordinate system; the velocity vector of the pursuer is directed at the target. The task is to construct the target's optimal trajectory in order to leave the detection area as quickly as possible.
Fig. 13. The model of the angular detection area of the pursuer.
Figure 13 is supplemented with a link to an animated image [10] showing the iterative process of calculating the target's trajectory.
3 Results and Discussion

In this work, geometric modeling of the process of a target evading a pursuer moving uniformly and rectilinearly was performed. The pursuer in the models of the article has a circular, sector or angular detection area. It is assumed that when the adaptive behavior of the participants in the pursuit task is modeled in the future, the pursuer will adhere to the chase method. A difficulty of the described methodology is that, in order to maneuver, the target must have data on the speed of the pursuer and on the characteristics of the detection area. It is assumed that this technique can be implemented in virtual reality systems, for example when modeling missile evasion from air defense systems.
4 Conclusion

In this article, evasion from pursuers with circular, sector and angular detection areas was simulated. The participants in the pursuit task move on a plane with constant speed, and the target evades the pursuer along the optimal trajectory. Based on the results of the research, animated images were made which show the process of the target evading the pursuer.
References
1. Abramyants, T.G., Galyaev, A.A., Maslov, E.P., Yakhno, V.P.: Evasion of a moving object from detection in a conflict environment. Management of Large Systems: A Collection of Works 79, 112–184 (2019)
2. Galyaev, A.A., Lysenko, P.V., Yakhno, V.P.: Evasion of a moving object from a single detector at a given speed. Problems of Management 1, 83–91 (2020)
3. Galyaev, A.A.: The problem of evasion from a moving single observer on a plane in a conflict environment. Automation and Telemechanics 75(6), 28–37 (2014)
4. Andreev, K.V., Rubinovich, E.Ya.: Trajectory control of the observer. Automation and Telemechanics 77(1), 134–162 (2016)
5. Galyaev, A.A., Maslov, E.P., Abramyants, T.G., Yakhno, V.P.: Evasion of a moving object from detection by a system of observers: sensor - maneuverable means. Automation and Telemechanics 8, 113–126 (2017)
6. Video, simulation results of the pursuit problem. https://www.youtube.com/watch?v=C7imOTjEgQQ. Accessed 11 Nov 2022
7. Video, simulation results of the pursuit problem. https://www.youtube.com/watch?v=0-qA8Q2fcwE. Accessed 11 Nov 2022
8. Video, simulation results of the pursuit problem. https://www.youtu.be/arCQYrLZGxk. Accessed 11 Nov 2022
9. Video, simulation results of the pursuit problem. https://www.youtu.be/dBGalSwRclg. Accessed 11 Nov 2022
10. Video, simulation results of the pursuit problem. https://www.youtube.com/watch?v=lvHvKzEjU4U. Accessed 11 Nov 2022
Human Posture Analysis in Working Capacity Monitoring of Critical Use Equipment Operators

Maxim Khisamutdinov1, Iakov Korovin2, and Donat Ivanov2(B)

1 LLC "Research Institute of Multiprocessor Computing and Control Systems", 150-g Socialisticheskaya st., Taganrog 347905, Russia
2 Southern Federal University, 2 Chehova st., Taganrog 347922, Russia
[email protected]
Abstract. Monitoring the health and working capacity of operators of critically important equipment, such as nuclear power plant operators, is an urgent task. Operators undergo regular medical examinations; nevertheless, it is necessary to continuously monitor their health during the work shift. Systems for non-contact monitoring of the physiological and psycho-emotional states of operators are being developed, and such systems need to track certain specific postures of the human body. This article proposes a method and an algorithm for analyzing the configuration of a human "skeleton" from a video image. The results of a practical implementation are given.

Keywords: Pose Estimation · Body Parts Recognition · Pose Recognition · Video Recognition
1 Introduction

The loss of the operator's working capacity while controlling equipment at facilities of increased responsibility (such as power plants, spacecraft, etc.) can lead to serious consequences and even technological disasters. In this regard, various measures are taken to prevent the possible loss of working capacity by operators of critically important equipment. Employees regularly undergo medical and psychological examination. However, there is always the possibility that the operator will become unable to perform his duties due to a sudden deterioration in health or another unforeseen reason. It is therefore relevant to develop systems that monitor the state of the operator in real time and signal when the operator loses his ability to work. Such systems are similar to driver sleep detection systems [1–8], except that for operators of important equipment it is necessary to monitor not only falling asleep but also a number of other possible health problems, as well as possible psychological deviations in the operator's behavior. This article presents some results of the development of a system for remote non-contact warning of a possible loss of working capacity by the operator of critical equipment,
based on the analysis of video from cameras. The system recognizes the posture and the nature of the operator's movement and determines his reaction speed and other features of his psycho-emotional and physiological state. One of the tasks to be solved is identifying certain specific poses of the operator from the video. Unlike approaches using depth cameras [9], this approach uses only color cameras. The proposed method and the algorithm implementing it are presented in this paper.
2 Analyzing the Individual's Skeleton Configuration

2.1 Proposed Method for Analyzing the Configuration of an Individual's Skeleton

The algorithm for analyzing the configuration of the "skeleton" of an individual is designed to select a "skeleton" suitable for further assessment of a person's configuration (posture), on the basis of which the human pose is detected. The possibility of adapting the algorithm to work with additional key features / desired poses is also implemented. The algorithm is based on the following mathematical relationships. At the first stage, the input image I is processed using the OpenPose library [10, 11] in order to obtain a tuple of key points M (each element is the pixel coordinates (X, Y) of a key point) describing the human skeleton:

$$OpenPose(I) \rightarrow M \qquad (1)$$

Further, the algorithm is based on heuristic ratios of body parts; Fig. 1 shows a skeleton with key points.
Fig. 1. Schematic representation of the key points of the human skeleton in image recognition
Let us introduce notation to describe the ratios of body parts used to highlight the key features of the desired poses. The distance between two key points A and B is denoted by $L_{A,B}$ (note that $L_{A,B} = L_{B,A}$); the visibility of a point A is denoted by $V_A$ (if the point A is not visible, $\neg V_A$); the fact that point A is above point B is denoted by $U_{A,B}$. Conjunction is written "&", disjunction "|", inversion "¬". Consider the logical formulas of the desired poses.

The following formula was proposed to determine the suitability of the skeleton for further posture analysis ($S_S$):

$$S_S = (V_4 | V_7) \,\&\, V_0 \,\&\, V_1 \,\&\, V_2 \,\&\, V_5 \qquad (2)$$

Intentional concealment of the face ($F_S$):

$$F_S = V_0 \,\&\, \big( (V_4 \,\&\, L_{4,0} < L_{1,0}) \,|\, (V_7 \,\&\, L_{7,0} < L_{1,0}) \big) \qquad (3)$$

Hands raised above the head ($H_S$):

$$H_S = V_0 \,\&\, V_4 \,\&\, V_7 \,\&\, U_{4,0} \,\&\, U_{7,0} \qquad (4)$$

Head down / shoulders up ($X_S$):

$$X_S = V_0 \,\&\, V_1 \,\&\, V_2 \,\&\, V_5 \,\&\, U_{4,0} \,\&\, U_{2,1} \,\&\, U_{5,1} \,\&\, (L_{1,0} < L_{2,0}) \,\&\, (L_{1,0} < L_{5,0}) \qquad (5)$$

Hands at chest level ($G_S$):

$$G_S = V_0 \,\&\, V_1 \,\&\, V_4 \,\&\, V_7 \,\&\, V_8 \,\&\, (L_{4,0} - L_{4,8} < L_{1,0}) \,\&\, (L_{7,0} - L_{7,8} < L_{1,0}) \qquad (6)$$

Torso tilt, the person is lying ($N_S$):

$$N_S = V_1 \,\&\, V_8 \,\&\, V_9 \,\&\, V_{12} \,\&\, (U_{8,1} \,|\, U_{9,1} \,|\, U_{12,1}) \qquad (7)$$

It should be noted that the algorithm can be supplemented with other key features / desired poses. The result of the algorithm is a tuple $O_S$ of six Boolean values of the functions (2)–(7):

$$O_S = (S_S, F_S, H_S, X_S, G_S, N_S) \qquad (8)$$
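Formulas (2)–(8) translate almost directly into Boolean code. The sketch below is our illustration, not the authors' implementation: the keypoint container and helper names are assumptions, it presumes coordinates are available for every referenced index, and it uses the image convention that the y axis points down, so "A above B" means a smaller y.

```python
import math

def analyze_skeleton(kp, visible):
    """kp: index -> (x, y) pixel coordinates; visible: index -> bool.
    Returns the tuple O_S = (S, F, H, X, G, N) of formulas (2)-(8)."""
    V = lambda a: visible.get(a, False)
    L = lambda a, b: math.dist(kp[a], kp[b])   # L_{A,B}
    U = lambda a, b: kp[a][1] < kp[b][1]       # U_{A,B}: A above B in image coords

    S = (V(4) or V(7)) and V(0) and V(1) and V(2) and V(5)            # (2)
    if not S:                       # skeleton unsuitable: all pose flags stay False
        return (False,) * 6
    F = V(0) and ((V(4) and L(4, 0) < L(1, 0)) or
                  (V(7) and L(7, 0) < L(1, 0)))                       # (3)
    H = V(0) and V(4) and V(7) and U(4, 0) and U(7, 0)                # (4)
    X = (V(0) and V(1) and V(2) and V(5) and U(4, 0)
         and U(2, 1) and U(5, 1)
         and L(1, 0) < L(2, 0) and L(1, 0) < L(5, 0))                 # (5)
    G = (V(0) and V(1) and V(4) and V(7) and V(8)
         and L(4, 0) - L(4, 8) < L(1, 0)
         and L(7, 0) - L(7, 8) < L(1, 0))                             # (6)
    N = V(1) and V(8) and V(9) and V(12) and (U(8, 1) or U(9, 1) or U(12, 1))  # (7)
    return (S, F, H, X, G, N)                                         # (8)
```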
2.2 Algorithm for Analyzing the Configuration of the Skeleton of an Individual

Figure 2 shows a block diagram of the algorithm for analyzing the configuration of an individual's skeleton. The algorithm begins with the input of a three-channel color image I, represented by the OpenCV container [12, 13] cv::Mat. At the next step, the human skeleton is detected using the OpenPose library [10, 11]; the result is a tuple of key points M (1). Next, the resulting output tuple OS is initialized with six logical zeros (one per sought key feature / sought pose).
Fig. 2. Block diagram of the algorithm for analyzing the configuration of the skeleton of an individual
At the next step, the skeleton suitability condition (2) is checked; if it fails, the algorithm proceeds directly to the output of the resulting tuple OS and terminates. If condition (2) is met, the first element of OS is set to logical one and the algorithm moves on. The remaining checks are then performed in sequence: intentional concealment of the face (3), hands raised above the head (4), head down / shoulders up (5), hands at chest level (6), and torso tilt / the person is lying (7). For each check, a positive result sets the corresponding element of OS (the second through sixth, respectively) to logical one, while a negative result simply moves the algorithm to the next check. After the last check, the resulting tuple OS is output and the algorithm for analyzing the configuration of the human skeleton is complete.
3 Practical Implementation of the Proposed Method Based on the proposed method and algorithm for analyzing the configuration of the skeleton of an individual, software was developed that allows tracking the specific postures of equipment operators. Figure 3 shows a fragment of a screenshot of the software during the detection of the “hands up” pose. Figure 4 shows graphs during online monitoring of the operator’s postures. The obtained results are transferred to the parameter analysis block. The results of the experiments and practical application of the developed software based on the proposed algorithm allow us to conclude that it is efficient and has low computational complexity.
Fig. 3. A fragment of a screenshot of the program at the moment when the operator raised his hands up
Fig. 4. Charts of online monitoring of operator’s postures
4 Conclusions and Future Work

The method and algorithm for analyzing the configuration of an individual's skeleton proposed in this article were used in the development of software for online monitoring of the state of equipment operators in areas of critical application. The results of experimental studies have shown its efficiency and low computational complexity.
Further work is aimed at developing algorithms for analyzing the physiological and psycho-emotional state of the operator in order to prevent possible equipment failures in a timely manner. Such systems are designed to improve the safety of equipment use and to provide timely medical assistance to operators in case of illness. The developed systems will find application in such areas as energy, transport, etc.

Acknowledgment. The study was carried out within the framework of the scientific program of the National Center for Physics and Mathematics (direction No. 9 "Artificial intelligence and big data in technical, industrial, natural and social systems").
References
1. Babu, T., Ashwin, S., Naidu, M., Muthukumaaran, C., Ravi Raghavan, C.: Sleep detection and alert system for automobiles. In: Hiremath, S.S., Shanmugam, N.S., Bapu, B.R.R. (eds.) Advances in Manufacturing Technology. LNME, pp. 113–118. Springer, Singapore (2019). https://doi.org/10.1007/978-981-13-6374-0_14
2. Huda, C., Tolle, H., Utaminingrum, F.: Mobile-based driver sleepiness detection using facial landmarks and analysis of EAR values. Interact. Mob. Technol., 16–30 (2020)
3. Chirra, V.R.R., Uyyala, S.R., Kolli, V.K.K.: Deep CNN: a machine learning approach for driver drowsiness detection based on eye state. Rev. d'Intelligence Artif. 33, 461–466 (2019)
4. Guede, F., Fernandez-Chimeno, M., Ramos-Castro, J., Garcia-Gonzalez, M.A.: Driver drowsiness detection based on respiratory signal analysis. IEEE Access 7, 81826–81838 (2019)
5. Yauri-Machaca, M., Meneses-Claudio, B., Vargas-Cuentas, N., Roman-Gonzalez, A.: Design of a vehicle driver drowsiness detection system through image processing using Matlab. In: 2018 IEEE 38th Central America and Panama Convention (CONCAPAN XXXVIII), pp. 1–6 (2018)
6. Watling, C.N., Hasan, M.M., Larue, G.S.: Sensitivity and specificity of the driver sleepiness detection methods using physiological signals: a systematic review. Accid. Anal. Prev. 150, 105900 (2021)
7. Sikander, G., Anwar, S.: Driver fatigue detection systems: a review. IEEE Trans. Intell. Transp. Syst. 20, 2339–2352 (2018)
8. Raju, J., Rakesh, P., Neelima, N.: Driver drowsiness monitoring system. In: Intelligent Manufacturing and Energy Sustainability, pp. 675–683. Springer (2020)
9. Korovin, I., Ivanov, D.: Human pose estimation applying ANN while RGB-D cameras video handling. In: Silhavy, R. (ed.) CSOC 2020. AISC, vol. 1225, pp. 573–585. Springer, Cham (2020). https://doi.org/10.1007/978-3-030-51971-1_47
10. Cao, Z., Hidalgo, G., Simon, T., Wei, S.E., Sheikh, Y.: OpenPose: realtime multi-person 2D pose estimation using part affinity fields (2018). https://doi.org/10.1109/tpami.2019.2929257
11. Qiao, S., Wang, Y., Li, J.: Real-time human gesture grading based on OpenPose. In: 2017 10th International Congress on Image and Signal Processing, BioMedical Engineering and Informatics (CISP-BMEI), pp. 1–6 (2017)
12. OpenCV. https://opencv.org
13. Kaehler, A., Bradski, G.: Learning OpenCV 3: Computer Vision in C++ with the OpenCV Library. O'Reilly Media, Inc. (2016)
Linear Complexity of New r-Ary Sequences of Length $p^n q^m$

Vladimir A. Edemskiy(B) and Sergey V. Garbar

Yaroslav-the-Wise Novgorod State University, 41, B. St.-Petersburgskaya Street, Veliky Novgorod 173003, Russia
[email protected]
Abstract. In this paper, we construct a family of non-binary sequences and show that these sequences have high linear complexity over the finite field of odd order. These sequences are constructed using new generalized cyclotomic classes modulo $p^n q^m$. We generalize the results obtained earlier on the linear complexity of new generalized cyclotomic binary and non-binary sequences. #CSOC1120.

Keywords: Linear complexity · Generalized cyclotomy · Non-binary sequences
1 Introduction

The linear complexity is one of the important characteristics of the unpredictability of a sequence. The linear complexity over a finite field of a sequence is the least order of a linear recurrence relation that the terms of the sequence satisfy. It is well known that families of sequences with high linear complexity can be obtained using cyclotomic classes and generalized cyclotomic classes; such sequences are called cyclotomic and generalized cyclotomic sequences, respectively. There are many works devoted to the study of the linear complexity of cyclotomic sequences and of generalized cyclotomic sequences based on Whiteman's and Ding–Helleseth's generalized cyclotomies, and to their use [1–5]. Recently, another generalized cyclotomy was presented by Zeng et al. in [6]. The linear complexity of binary sequences based on the new generalized cyclotomic classes from [6] is studied in [7–11]. These results were generalized in [12] to sequences with periods $p^n$ and $2p^n$ over the finite field; according to [12], these sequences have high linear complexity over the finite field. In this paper, we construct a family of non-binary sequences using the new generalized cyclotomic classes from [6] and show that these sequences have high linear complexity over the finite field of odd order. Thus, we continue the study of the linear complexity of new generalized cyclotomic sequences started in [7–12].
2 Preliminaries

First of all, we recall some basics of the linear complexity of a periodic sequence and the definition of the new generalized cyclotomic classes from [6], and we give the definition of the sequences.
2.1 Linear Complexity

Let $s^{\infty} = (s_0, s_1, s_2, \dots)$ be a sequence of period $N$ and $S(x) = s_0 + s_1 x + \cdots + s_{N-1} x^{N-1}$. For an $N$-periodic sequence $s^{\infty}$ over $\mathbb{F}_r$ (the finite field of $r$ elements), we recall that the linear complexity over $\mathbb{F}_r$, denoted by $L(s^{\infty})$, is the least integer $L$ such that $\{s_i\}$ satisfies

$$s_{i+L} = c_{L-1} s_{i+L-1} + \cdots + c_1 s_{i+1} + c_0 s_i \quad \text{for } i \ge 0,$$

where $c_0 \ne 0$ and $c_1, \dots, c_{L-1} \in \mathbb{F}_r$. It is well known (see, for instance, [3]) that the linear complexity of $s^{\infty}$ is given by $L(s^{\infty}) = N - \deg(\gcd(x^N - 1, S(x)))$. More specifically, by Blahut's theorem the linear complexity of $s^{\infty}$ can be given by

$$L(s^{\infty}) = N - \left|\{\, i \in \mathbb{Z}_N \mid S(\xi^i) = 0 \,\}\right|, \qquad (1)$$

where $\xi$ is a primitive $N$-th root of unity in an extension field of $\mathbb{F}_r$ and $\mathbb{Z}_N$ is the ring of integers modulo $N$ for a positive integer $N$. Thus, we can examine the roots of $S(x)$ in an extension of $\mathbb{F}_r$ to derive the linear complexity of $s^{\infty}$ by the above formula.
where ξ is a primitive N -th root of unity in an extension field of Fr and ZN is the ring of integers modulo N for a positive integer N . Thus, we can examine the roots of S(x) in an extension of Fr to derive the linear complexity of s∞ by the above formula. 2.2 Generalized Cyclotomic Classes In this subsection we recall the definition of the new generalized cyclotomic classes introduced in [6]. Let p be an odd prime and p = ef + 1, where e, f are positive integers. Let gp be a primitive root modulo pn . It is well known that the order of gp modulo pj is equal to ϕ(pj ) = pj−1 (p − 1), where ϕ(·) is the Euler’s totient function [13]. Let n be a positive integer. For j = 1, 2, · · · , n, we let fj denote pj−1 f and define t·f (pj ) D0 = gp j (mod pj )|0 ≤ t < e , and (pj ) (pj ) (pj ) Di = g i D0 = g i x(mod pj ) : x ∈ D0 , 1 ≤ i < fj . (pj )
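As a small illustration of the gcd formula above, the following sketch (the toy sequence and the use of sympy are our assumptions, not part of the paper) computes the linear complexity of a 6-periodic ternary sequence over F_3 from one period:

from sympy import symbols, Poly

x = symbols('x')
s = [0, 1, 2, 0, 1, 2]                   # one period of a toy ternary sequence, N = 6
N = len(s)
S = Poly(sum(c * x**i for i, c in enumerate(s)), x, modulus=3)
G = Poly(x**N - 1, x, modulus=3).gcd(S)  # gcd(x^N - 1, S(x)) over F_3
print(N - G.degree())                    # prints 2, the linear complexity

Indeed, this sequence satisfies s_{i+2} = 2s_{i+1} + 2s_i over F_3, a recurrence of order 2.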
2.2 Generalized Cyclotomic Classes

In this subsection we recall the definition of the new generalized cyclotomic classes introduced in [6]. Let p be an odd prime and p = ef + 1, where e, f are positive integers. Let g_p be a primitive root modulo p^n. It is well known that the order of g_p modulo p^j is equal to ϕ(p^j) = p^{j−1}(p − 1), where ϕ(·) is Euler's totient function [13]. Let n be a positive integer. For j = 1, 2, ..., n, we let f_j denote p^{j−1} f and define

D_0^{(p^j)} = {g_p^{t·f_j} (mod p^j) : 0 ≤ t < e}

and

D_i^{(p^j)} = g_p^i D_0^{(p^j)} = {g_p^i x (mod p^j) : x ∈ D_0^{(p^j)}}, 1 ≤ i < f_j.

The cosets D_i^{(p^j)}, i = 0, 1, ..., f_j − 1, are called generalized cyclotomic classes of order f_j with respect to p^j. It was shown in [6] that D_0^{(p^j)}, D_1^{(p^j)}, ..., D_{f_j−1}^{(p^j)} form a partition of the multiplicative group Z*_{p^j} for each integer j ≥ 1 and, for an integer k ≥ 1,

Z_{p^k} = ∪_{j=1}^{k} ∪_{i=0}^{f_j−1} p^{k−j} D_i^{(p^j)} ∪ {0}.
Let r > 2 be a prime such that r | f, and let b be an integer such that 0 ≤ b < p^{n−1} f. Denote f_j/r = p^{j−1} f/r by d_j and define r sets

H_0^{(p^j)} = ∪_{i=0}^{d_j−1} D_{(i+b) mod f_j}^{(p^j)},  H_1^{(p^j)} = ∪_{i=d_j}^{2d_j−1} D_{(i+b) mod f_j}^{(p^j)},  ...,  H_{r−1}^{(p^j)} = ∪_{i=(r−1)d_j}^{rd_j−1} D_{(i+b) mod f_j}^{(p^j)}

and

C_t^{(p^n)} = ∪_{j=1}^{n} p^{n−j} H_t^{(p^j)},  t = 0, 1, ..., r − 1.

It is clear that

Z_{p^n} = C_0^{(p^n)} ∪ C_1^{(p^n)} ∪ · · · ∪ C_{r−1}^{(p^n)} ∪ {0}

and |C_t^{(p^n)}| = (p^n − 1)/r. The linear complexity of a family of almost balanced r-ary sequences defined by these new generalized cyclotomic classes was studied in [12] for p such that r^{p−1} ≢ 1 (mod p^2); by [14], primes for which this condition fails are not frequent.

Let q be an odd prime different from p and q = eh + 1, where e, h are positive integers. Similarly, for a positive integer m and j = 1, 2, ..., m we can also consider the generalized cyclotomic classes D_i^{(q^j)}, i = 0, 1, ..., h_j − 1, of order h_j with respect to q^j, where h_j = q^{j−1} h. In this case, the cosets D_0^{(q^j)}, D_1^{(q^j)}, ..., D_{h_j−1}^{(q^j)} form a partition of the multiplicative group Z*_{q^j} for each integer j ≥ 1 and, for an integer l ≥ 1,

Z_{q^l} = ∪_{j=1}^{l} ∪_{i=0}^{h_j−1} q^{l−j} D_i^{(q^j)} ∪ {0}.
Further, let r | h and let c be an integer such that 0 ≤ c < q^{m−1} h. Denoting h_j/r = q^{j−1} h/r by e_j, we can also define r sets H_t^{(q^j)}, t = 0, 1, ..., r − 1, and

C_t^{(q^m)} = ∪_{j=1}^{m} q^{m−j} H_t^{(q^j)},  t = 0, 1, ..., r − 1.

In this case, we get

Z_{q^m} = C_0^{(q^m)} ∪ C_1^{(q^m)} ∪ · · · ∪ C_{r−1}^{(q^m)} ∪ {0}.
Now we consider the partition of Z_N, where N = p^n q^m. We have the equivalence Z_{p^k q^l} ≅ Z_{p^k} × Z_{q^l} relative to the isomorphism φ(a) = (a mod p^k, a mod q^l) [13]. For k = 1, ..., n; l = 1, ..., m we define

C_t^{(p^k q^l)} = φ^{−1}( C_t^{(p^k)} × Z*_{q^l} ),  t = 0, 1, ..., r − 1,

where C_t^{(p^k)} and C_t^{(q^l)} are defined in the same way as C_t^{(p^n)} and C_t^{(q^m)}, with n and m replaced by k and l. Let

E_t = ∪_{k=1}^{n} ∪_{l=1}^{m} p^{n−k} q^{m−l} C_t^{(p^k q^l)} ∪ ∪_{k=1}^{n} p^{n−k} q^m C_t^{(p^k)} ∪ ∪_{l=1}^{m} p^n q^{m−l} C_t^{(q^l)},  t = 0, 1, ..., r − 1.

Then

Z_N = ∪_{t=0}^{r−1} E_t ∪ {0}.

It is easy to show that E_t is a union of new generalized cyclotomic classes modulo N presented in [6]. We can define a balanced r-ary sequence u∞ with period N = p^n q^m as follows:

u_i = 0, if i (mod N) ∈ E_0 ∪ {0};  u_i = t, if i (mod N) ∈ E_t, t = 1, 2, ..., r − 1.   (2)

In this paper we study the linear complexity of these sequences over the finite field of odd order r and show that they have high linear complexity.
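To make the construction concrete, the following sketch builds the sequence (2) for the simplest case n = m = 1 (so N = pq); the parameters p = 7, q = 13, r = 3, e = 2, b = c = 0 and the brute-force primitive-root search are our choices for a toy example, not values from the paper:

from math import gcd

p, q, r, e = 7, 13, 3, 2
f, h = (p - 1) // e, (q - 1) // e          # p = e*f + 1, q = e*h + 1
assert f % r == 0 and h % r == 0           # r | f and r | h, as required

def primitive_root(n):
    # brute-force search, adequate for toy moduli
    for g in range(2, n):
        if len({pow(g, k, n) for k in range(1, n)}) == n - 1:
            return g

def classes_C(prime, order, shift):
    # C_t for the n = 1 case: union of d = order/r consecutive classes D_i
    g, size = primitive_root(prime), (prime - 1) // order
    D = [{pow(g, t * order + i, prime) for t in range(size)} for i in range(order)]
    d = order // r
    return [set().union(*(D[(i + shift) % order]
                          for i in range(t * d, (t + 1) * d))) for t in range(r)]

Cp, Cq = classes_C(p, f, 0), classes_C(q, h, 0)      # b = c = 0
N = p * q
u = [0] * N
for t in range(r):
    Et = {x for x in range(1, N) if gcd(x, q) == 1 and x % p in Cp[t]}  # C_t^(pq)
    Et |= {q * x for x in Cp[t]}                                        # q*C_t^(p)
    Et |= {p * x for x in Cq[t]}                                        # p*C_t^(q)
    for i in Et:
        u[i] = t

# each value t occupies (N - 1)/r positions (plus u_0 = 0), i.e. u is balanced
assert all(u.count(t) == (N - 1) // r + (t == 0) for t in range(r))

Such a sequence can then be fed to the Berlekamp-Massey sketch given at the end of Sect. 4 to check Theorem 1 empirically for this toy case.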
3 The Properties of the Polynomial of the Sequence

To begin with, we define the subsidiary polynomials. For 1 ≤ k ≤ n and 1 ≤ l ≤ m, let

U_b^{(p^k)}(x) = Σ_{j=1}^{k} Σ_{t=0}^{r−1} t Σ_{i ∈ C_t^{(p^j)}} x^{i p^{k−j}}  and  U_c^{(q^l)}(x) = Σ_{j=1}^{l} Σ_{t=0}^{r−1} t Σ_{i ∈ C_t^{(q^j)}} x^{i q^{l−j}}.

Let α be a primitive N-th root of unity in an extension of F_r. Since gcd(p^n, q^m) = 1, there exist integers v, s such that v p^n + s q^m = 1. Define β = α^{s q^m} and γ = α^{v p^n}. Then α = βγ, and β and γ are primitive p^n-th and q^m-th roots of unity, respectively. Let β_k = β^{p^{n−k}}, k = 1, 2, ..., n, and γ_l = γ^{q^{m−l}}, l = 1, 2, ..., m. As usual, we let F_r(β_1) denote a simple extension of F_r obtained by adjoining an algebraic element β_1 [15].

Further, it was shown in [12] that if r^{p−1} ≢ 1 (mod p^2) then U_b^{(p^n)}(β_n^i) ≠ 0 for i ≢ 0 (mod p^{n−1}). Using the same method as for the proof of Proposition 2 from [12], we can obtain a more general proposition characterizing some properties of U_b^{(p^n)}(x). We let ord_n(r) denote the order of r modulo n.

Lemma 1. Let the notations be as above and K = F_r(β_1).

(i) If U_b^{(p^n)}(β^j) ∈ K for n > 1, then j ≡ 0 (mod p^{n−1}).
(ii) Let j ∈ p^{n−k} q^{m−1} Z*_{p^k q^{m−1}}, 1 ≤ k ≤ n, and U_b^{(p^n)}(β^j) + U_b^{(p^n)}(β^{j q^m}) ∈ K for n > 1, r > 2. Then j ≡ 0 (mod p^{n−1}).
(iii) |{j : U_b^{(p)}(β_1^j) = 0, j = 1, 2, ..., p − 1}| ≤ δ_p · ord_p(r), where 0 ≤ δ_p ≤ (p − 1)/(r · ord_p(r)).

The second statement of this lemma does not hold when r = 2. The properties of U_c^{(q^m)}(x) are similar (of course, we need to use q^m instead of p^n).
3.1 The Values of the Subsidiary Polynomials

Let U_{b,c}^{(N)}(x) = Σ_{i=0}^{N−1} u_i x^i be the generating polynomial of u∞. In this subsection we show that the values of the subsidiary polynomials define the values of U_{b,c}^{(N)}(α^j).

Lemma 2. Let j = q^f j_0 and gcd(j_0, q) = 1, 0 ≤ f ≤ m. Then

(i) U_{b,c}^{(N)}(α^j) = U_b^{(p^n)}(β^{j q^{m−f−1}}) + U_b^{(p^n)}(β^{j q^m}) + U_c^{(q^m)}(γ^{j p^n}) for f < m;
(ii) U_{b,c}^{(N)}(α^j) = U_b^{(p^n)}(β^{j q^m}) for f = m.

Proof. By (2) we see that

U_{b,c}^{(N)}(α^j) = Σ_{k=1}^{n} Σ_{l=1}^{m} Σ_{t=0}^{r−1} t Σ_{i ∈ p^{n−k} q^{m−l} C_t^{(p^k q^l)}} α^{ij} + Σ_{k=1}^{n} Σ_{t=0}^{r−1} t Σ_{i ∈ p^{n−k} q^m C_t^{(p^k)}} α^{ij} + Σ_{l=1}^{m} Σ_{t=0}^{r−1} t Σ_{i ∈ p^n q^{m−l} C_t^{(q^l)}} α^{ij}.

To begin with, we study the first sum in the last relation. Since C_t^{(p^k q^l)} = φ^{−1}(C_t^{(p^k)} × Z*_{q^l}) and α = βγ, we get

Σ_{i ∈ p^{n−k} q^{m−l} C_t^{(p^k q^l)}} α^{ij} = Σ_{u ∈ C_t^{(p^k)}} β^{u j p^{n−k} q^{m−l}} · Σ_{u ∈ Z*_{q^l}} γ^{u j p^{n−k} q^{m−l}}.

It is clear that γ^{p^{n−k} q^{m−l}} is a primitive q^l-th root of unity. Hence

Σ_{k=1}^{n} Σ_{l=1}^{m} Σ_{t=0}^{r−1} t Σ_{i ∈ p^{n−k} q^{m−l} C_t^{(p^k q^l)}} α^{ij} = U_b^{(p^n)}(β^{j q^{m−f−1}}) if f < m, and 0 if f = m.

Further, using the definition of the subsidiary polynomials and the equalities α = βγ, β^{p^n} = 1 and γ^{q^m} = 1, we get that

Σ_{k=1}^{n} Σ_{t=0}^{r−1} t Σ_{i ∈ p^{n−k} q^m C_t^{(p^k)}} α^{ij} = Σ_{k=1}^{n} Σ_{t=0}^{r−1} t Σ_{i ∈ C_t^{(p^k)}} β^{i j p^{n−k} q^m} γ^{i j p^{n−k} q^m} = Σ_{k=1}^{n} Σ_{t=0}^{r−1} t Σ_{i ∈ C_t^{(p^k)}} β^{i j p^{n−k} q^m} = U_b^{(p^n)}(β^{j q^m})

and

Σ_{l=1}^{m} Σ_{t=0}^{r−1} t Σ_{i ∈ p^n q^{m−l} C_t^{(q^l)}} α^{ij} = Σ_{l=1}^{m} Σ_{t=0}^{r−1} t Σ_{i ∈ C_t^{(q^l)}} β^{i j p^n q^{m−l}} γ^{i j p^n q^{m−l}} = U_c^{(q^m)}(γ^{j p^n}).

If f = m then U_c^{(q^m)}(γ^{j p^n}) = U_c^{(q^m)}(1) = 0 over F_r for r > 2. Thus, this completes the proof of this lemma.
4 The Linear Complexity of the Sequence

Here and further we always suppose that r^{p−1} ≢ 1 (mod p^2), r^{q−1} ≢ 1 (mod q^2) and gcd(p, q − 1) = gcd(p − 1, q) = 1. First, we prove a subsidiary lemma.

Lemma 3. Let K = F_r(β_1) ∩ F_r(γ_1), β_k = β^{p^{n−k}}, k = 1, ..., n, and γ_l = γ^{q^{m−l}}, l = 1, ..., m. Then F_r(β_k) ∩ F_r(γ_l) = K for k = 1, ..., n; l = 1, ..., m.

Proof. Let F = F_r(β_k) ∩ F_r(γ_l), and let [F_r(β_k) : F_r] denote the dimension of the vector space F_r(β_k) over F_r [15]. Then [F : F_r] divides [F_r(β_k) : F_r] and [F_r(γ_l) : F_r]. According to [12], we know that [F_r(β_k) : F_r] = p^{k−1} r_1, where r_1 = [F_r(β_1) : F_r], and [F_r(γ_l) : F_r] = q^{l−1} t_1, where t_1 = [F_r(γ_1) : F_r], for p, q such that r^{p−1} ≢ 1 (mod p^2) and r^{q−1} ≢ 1 (mod q^2). Hence [F : F_r] divides gcd(p^{k−1} r_1, q^{l−1} t_1). By the condition gcd(p, q − 1) = gcd(p − 1, q) = 1, it follows that [F : F_r] divides gcd(r_1, t_1). It is clear that [K : F_r] = gcd(r_1, t_1); thus [F : F_r] divides [K : F_r]. Since K ⊂ F, this completes the proof of this lemma.

Our main result about the linear complexity of the non-binary sequences over F_r with period p^n q^m is the following statement.

Theorem 1. Let r^{p−1} ≢ 1 (mod p^2), r^{q−1} ≢ 1 (mod q^2), gcd(p, q − 1) = gcd(p − 1, q) = 1, N = p^n q^m, and let u∞ be defined by (2). Then

L(u∞) = N − δ_N · ord_{pq}(r) − δ_p · ord_p(r) − δ_q · ord_q(r),

where 0 ≤ δ_p ≤ (p − 1)/(r · ord_p(r)), 0 ≤ δ_q ≤ (q − 1)/(r · ord_q(r)), and 0 ≤ δ_N ≤ (p − 1)(q − 1)/(r · ord_{pq}(r)) for r > 3.

Proof. As noted before, by (1) we have

L(u∞) = N − |{j : U_{b,c}^{(N)}(α^j) = 0, j = 0, 1, ..., N − 1}|.

First of all, we note that by definition U_{b,c}^{(N)}(1) ≡ 0 (mod r).

Let U_{b,c}^{(N)}(α^j) = 0, 1 ≤ j ≤ N − 1. We consider a few cases.

(i) Suppose j ≡ 0 (mod q^m); then according to Lemma 2 we have U_{b,c}^{(N)}(α^j) = U_b^{(p^n)}(β^{j q^m}) for j = q^m j_0 and gcd(j_0, q) = 1. Hence U_b^{(p^n)}(β^{j q^m}) = 0. In this case, by Lemma 1 we get

|{j : U_{b,c}^{(N)}(α^j) = 0, j = q^m, 2q^m, ..., (p^n − 1) q^m}| = |{j : U_b^{(p)}(β_1^j) = 0, j = 1, 2, ..., p − 1}| = δ_p · ord_p(r).

(ii) Let j ≢ 0 (mod q^m). Then

U_{b,c}^{(N)}(α^j) = U_b^{(p^n)}(β^{j q^{m−f−1}}) + U_b^{(p^n)}(β^{j q^m}) + U_c^{(q^m)}(γ^{j p^n})

for j = q^f j_0, f < m, gcd(j_0, q) = 1. We see that

U_b^{(p^n)}(β^{j q^{m−f−1}}) + U_b^{(p^n)}(β^{j q^m}) = −U_c^{(q^m)}(γ^{j p^n}).

Hence U_c^{(q^m)}(γ^{j p^n}) ∈ F_r(β). Then by Lemma 3 we get U_c^{(q^m)}(γ^{j p^n}) ∈ K = F_r(β_1) ∩ F_r(γ_1) and U_b^{(p^n)}(β^{j q^{m−f−1}}) + U_b^{(p^n)}(β^{j q^m}) ∈ K. According to Lemma 1, we have j ≡ 0 (mod q^{m−1}), j ≢ 0 (mod q^m) and j ≡ 0 (mod p^{n−1}).

First, we consider the case when j ≡ 0 (mod p^n). In this case,

U_b^{(p^n)}(β^j) = U_b^{(p^n)}(β^{j q^m}) = 0

and we have

|{j : U_{b,c}^{(N)}(α^j) = 0, j = p^n q^{m−1}, 2 p^n q^{m−1}, ..., (q − 1) p^n q^{m−1}}| = |{i : U_c^{(q)}(γ_1^i) = 0, i = 1, 2, ..., q − 1}| = δ_q · ord_q(r).

Now suppose j ≢ 0 (mod p^n); then j = p^{n−1} q^{m−1} t with gcd(t, pq) = 1. Thus, according to [12] and Lemma 2,

U_{b,c}^{(N)}(α^j) = U_b^{(p)}(β_1^{t q^{m−1}}) + U_b^{(p)}(β_1^{t q^{2m−1}}) + U_c^{(q)}(γ_1^{t p^{2n−1}}).

It is clear that if U_{b,c}^{(N)}(α^j) = 0 then U_{b,c}^{(N)}(α^{j r^u}) = 0 for u = 0, 1, ..., ord_{pq}(r). Hence,

|{j : U_{b,c}^{(N)}(α^j) = 0, j = p^{n−1} q^{m−1} t, t ∈ Z*_{pq}}| = δ_N · ord_{pq}(r).

Let w = ζ_p^{f/r} ζ_q^{h/r}. Then w ≡ η^{f/r} (mod p) and w ≡ ξ^{h/r} (mod q). So, by [12] we obtain

U_{b,c}^{(N)}(α^{wj}) = U_{b+f/r}^{(p)}(β_1^{t q^{m−1}}) + U_{b+f/r}^{(p)}(β_1^{t q^{2m−1}}) + U_{c+h/r}^{(q)}(γ_1^{t p^{2n−1}}).

Hence, according to [12], we have U_{b,c}^{(N)}(α^{wj}) = U_{b,c}^{(N)}(α^j) + 3. Thus, 0 ≤ δ_N ≤ (p − 1)(q − 1)/(r · ord_{pq}(r)) for r ≠ 3. This completes the proof of this theorem.

To verify the theoretical results of this work, a simple Python program was developed. The program was used to check the statements above for different prime numbers p, q and parameters n, m, f, h.
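The verification program itself is not included in the paper; the sketch below shows one way such a check can be organized (the use of the Berlekamp-Massey algorithm over F_r and the helper below are our assumptions, not the authors' code). It returns the least order L of a linear recurrence satisfied by the input terms, i.e. the linear complexity:

def linear_complexity(s, r):
    # Berlekamp-Massey over the prime field F_r
    n = len(s)
    C, B = [1] + [0] * n, [1] + [0] * n    # current / previous connection polys
    L, m, b = 0, 1, 1                      # complexity, gap size, last discrepancy
    for i in range(n):
        d = s[i]
        for j in range(1, L + 1):          # discrepancy of the current recurrence
            d = (d + C[j] * s[i - j]) % r
        if d == 0:
            m += 1
            continue
        coef = d * pow(b, r - 2, r) % r    # d / b in F_r (r prime)
        T = C[:]
        for j in range(n + 1 - m):
            C[j + m] = (C[j + m] - coef * B[j]) % r
        if 2 * L <= i:
            L, B, b, m = i + 1 - L, T, d, 1
        else:
            m += 1
    return L

# for an N-periodic sequence it suffices to pass two periods, e.g. the toy
# sequence u constructed after definition (2):
# print(linear_complexity(u * 2, r))

Comparing the returned value with the bound of Theorem 1 for various parameters is the kind of check described above.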
5 Conclusion

We studied the linear complexity of r-ary sequences over the finite field of order r. These sequences are constructed using new generalized cyclotomic classes modulo p^n q^m. It is shown that the linear complexity of the new sequences is large enough to effectively resist attacks based on the Berlekamp-Massey algorithm. We generalized the results obtained earlier about the linear complexity of new generalized cyclotomic binary and non-binary sequences with periods p^n and 2p^n. It may be interesting to study the linear complexity of sequences based on the new generalized cyclotomy with other moduli.

Acknowledgements. The reported study was supported by the Russian Science Foundation, research project № 22-21-00516.
References

1. Ding, C., Helleseth, T.: New generalized cyclotomy and its applications. Finite Fields Appl. 4(2), 140–166 (1998)
2. Chen, X., Chen, Z., Liu, H.: A family of pseudorandom binary sequences derived from generalized cyclotomic classes modulo p^{m+1} q^{n+1}. Int. J. Netw. Secur. 22(4), 610–620 (2020)
3. Cusick, T., Ding, C., Renvall, A.: Stream Ciphers and Number Theory. North-Holland Publishing Co., Amsterdam (1998)
4. Wu, C., Chen, Z., Du, X.: The linear complexity of q-ary generalized cyclotomic sequences of period p^m. J. Wuhan Univ. 59(2), 129–136 (2013)
5. Hu, L., Yue, Q., Wang, M.: The linear complexity of Whiteman's generalized cyclotomic sequences of period p^{n+1} q^{m+1}. IEEE Trans. Inf. Theory 58(8), 5534–5543 (2012)
6. Zeng, X., Cai, H., Tang, X., Yang, Y.: Optimal frequency hopping sequences of odd length. IEEE Trans. Inf. Theory 59(5), 3237–3248 (2013)
7. Xiao, Z., Zeng, X., Li, C., Helleseth, T.: New generalized cyclotomic binary sequences of period p^2. Des. Codes Crypt. 86(7), 1483–1497 (2018)
8. Ye, Z., Ke, P., Wu, C.: A further study of the linear complexity of new binary cyclotomic sequence of length p^n. AAECC 30(3), 217–231 (2018)
9. Edemskiy, V., Li, C., Zeng, X., Helleseth, T.: The linear complexity of generalized cyclotomic binary sequences of period p^n. Des. Codes Crypt. 87(5), 1183–1197 (2019)
10. Ouyang, Y., Xie, X.: Linear complexity of generalized cyclotomic sequences of period 2p^m. Des. Codes Crypt. 87(5), 1–12 (2019)
11. Edemskiy, V., Wu, C.: On the k-error linear complexity of binary sequences of periods p^n from new cyclotomy. AIMS Math. 7(5), 7997–8011 (2022)
12. Edemskiy, V., Sokolovskii, N.: Linear complexity of new q-ary generalized cyclotomic sequences of period 2p^n. Lect. Notes Comput. Sci. 12020, 551–559 (2020)
13. Ireland, K., Rosen, M.: A Classical Introduction to Modern Number Theory. Springer, Berlin (1982)
14. Montgomery, P.L.: New solutions of a^{p−1} ≡ 1 (mod p^2). Math. Comput. 61, 361–363 (1993)
15. Lidl, R., Niederreiter, H.: Finite Fields. Addison-Wesley, Boston (1983)
The Problem of Interpretation of Electronic Documents for Long-Term Keeping Alexander V. Solovyev(B) Federal Research Center “Computer Science and Control” of Russian Academy of Sciences, 44/2 Vavilova Street, Moscow 119333, Russian Federation [email protected]
Abstract. The article presents the problem of the interpretability of electronic documents during long-term keeping and shows its relevance. A review of existing approaches to solving the problem under consideration is made, and the problem of ensuring the interpretability of electronic documents is stated in a general form. The main difficulties associated with the use of various file formats are identified, as are the difficulties of developing a file format for long-term keeping. The article also defines the directions of further research. These include: development of a mathematical model of the composition of information in an electronic document; development of a mathematical model for assessing alienability; development of a mathematical model for assessing interpretability; and development of methods for long-term keeping that ensure interpretability.

Keywords: Long-Term Keeping · Interpretability · Electronic Document · Metadata · File Format
1 Introduction

In the author's earlier studies [1-3], it was shown that the main problem of digital data in long-term keeping is to ensure their safety. Preservation here means the preservation of the entirety of information, that is, both the data itself and the metadata that describe it. When digital data is converted into an electronic document (hereinafter - ELD) [4], which replaces a paper document in office work and, later, in archival storage, the problem of the interpretability of electronic documents comes to the fore. In addition to the safety of digital data and metadata, the ELD should not lose information about its appearance, the composition of its information, or its location on a sheet of paper. The information contained in the document must be, first of all, readable: the ELD must be readable, and the format in which it is stored must be interpreted (decoded) by the hardware and software storage environment.

It is often stated in studies that all modern formats are already standardized (see, for example, [5-7]), so the problem of interpreting the ELD is rather far-fetched. However,
practically none of the developers of ELD formats guarantees that a given format will be supported over the next few decades. The choice of a format suitable for long-term keeping is, of course, an important factor in solving the problem of ensuring safety, as shown in [8]. However, the format itself is not a guarantee of long-term keeping.

Ideally, the ELD should be displayed to the user in the form in which it was created. That is, the geometric dimensions of the information blocks of the document, their mutual arrangement, fonts, tables, illustrations, etc. must be preserved. This in itself is a very non-trivial task, given that different interpretation tools can interpret the same document differently, even in the same format. For example, different tools for working with electronic documents, such as Open Office, Libre Office and MS Office, can interpret the same document in DOCX format in different ways. Moreover, even different versions of MS Word interpret a DOCX document differently, resulting in a different number of pages in multipage documents. Therefore, the preservation of the appearance of the original document, especially during long-term keeping, is not guaranteed.

The work [1] briefly describes the problem of the interpretability of digital data in new information conditions. The solution to the problem of interpretability in general terms comes down to providing the ability to decode the ELD format after decades and present the ELD in its original form on the screen or when printing. Thus, it is possible to formulate the task of ensuring the interpretability of the ELD as providing the ability to read document information in a user-friendly form and to visualize the document, using software and hardware, in the form in which the document was created.
2 Brief Overview of the Problem of Interpretability The largest repository of electronic documents in the world is the US National Archives and Records Administration (NARA, www.archives.gov). NARA oversees the management of federal government records and ensures the preservation of documents of scientific, historical, and practical value. In 2005, NARA decided to develop the ERA (Electronic Records Archives) archive of electronic records for long-term keeping. The project received state support. In order to imagine the size and significance of this electronic archive, I will cite just a few facts. Since 2005, the US Department of State has submitted millions of electronic diplomatic communications to NARA each year. The Pentagon annually transfers 50 million digitized official documents on personnel [9]. However, in 2010, due to the difficulties in creating ERA, the project commissioning date was postponed to 2015, then to 2017. In 2012, the project was suspended and limited possibilities for using ERA were announced [10]. The reason for this was a huge variety of computer data formats, the use of which creates the risk of non-interpretable ELD after decades. Storage in a single ASCII encoding, as was done in NARA before, became impossible. Faced with the challenges of ensuring the interpretability of the ELD, NARA has done a lot to systematize the negative experience.
In 2013, NARA developed a draft open standard for the encoded archival description of documents, EAD (Encoded Archival Description), based on XML (eXtensible Markup Language) formats [11]. The project defines classes of documents, and for each of them a set of recommended and acceptable storage formats is defined. In 2017, NARA released a paper [7]; the requirements reflected in this document can be used by employees of federal agencies when writing technical specifications for ELD management tools or services. In 2017, NARA also announced the creation of a new model for the storage and availability of presidential ELDs, the volume of which is estimated at 250 TB per year [12]. In addition, in order to partially solve the problem of interpretability, NARA decided that after December 31, 2022 it would accept for storage only ELDs in a special digital format with mandatory descriptive metadata [13].

The Australian National Archives, which has existed since 1983, is, like NARA, at the forefront in terms of storing electronic documents [14]. The National Archives of Australia issued the PROS99/007 Electronic Records Management standard, which contained requirements for the management and storage of electronic records in the public sector. A methodology for the design and implementation of systems for working with documentation, DIRKS, was developed, which served as the basis for the international standard ISO 15489-2001 [15]. This standard, in particular, is adopted in the Russian Federation as GOST R ISO 15489-1-2007. According to the standard, "a usable document is one that can be localized, found, reproduced and interpreted" (GOST R ISO 15489-1-2007, clause 7.2.5). Although the provisions of the document are recommendations in nature, the standard nevertheless defines the necessary minimum characteristics of a document, which must be observed when organizing long-term keeping.

In terms of addressing the problem of interpretability, the National Archives of Australia has taken the path of "document normalization", i.e. bringing all documents received for long periods of storage to a single format. The formats into which documents are converted (see [16]) are based on XML and RDF/XML (Resource Description Framework, a data model used to represent resources of the so-called semantic web) and represent the documents, the elements of the classification (as objects), and the relationships between them (as relations or predicates). It can be argued that the problem of data interpretability has been partially solved here, partly because the XML format does not imply an accurate visual display of the original document but is intended to preserve its information and, possibly, the ELD metadata.

In 2001, the specification of the Model Requirements for the Management of Electronic Documents (MoReq [17]) was prepared for the European Union by the British company Cornwell Affiliates plc. While not formally a standard, MoReq is currently a de facto standard. Already in the first version of the MoReq specification there was a section devoted to the long-term keeping of electronic documents, in which one of the key requirements was to prefer the use of open, documented formats as opposed to proprietary (commercial) ones, in order to reduce the risk of uninterpretable documents in the future, with the ability to restore a document using a published description
format.

Examples can also be given of the archives of Germany, Denmark, Great Britain and other countries. However, the above brief review of the best practices for organizing long-term keeping of electronic documents clearly identifies the problem of the interpretability and display of electronic documents during long-term keeping. From this review we can also conclude that there is no universal solution to the problem of interpretability yet, despite the active search for such a solution over the past ten years. Over time, the number of electronic documents will grow rapidly, and with it the complexity of solving the problem.
3 Statement of the Problem of Ensuring the Interpretability of Electronic Documents

Based on the review, it is possible to formulate the problem of ensuring the interpretability of the ELD and to determine the directions of further research necessary to solve it. The task of the study is formulated as follows: to ensure long-term keeping of business electronic documents, it is necessary to ensure their interpretability (readability) during the entire period of storage. At the same time, it is assumed that:

• the safety of the document at the time of its transfer for long-term keeping is confirmed;
• documents are not distorted;
• there are no restrictions on the file formats of documents transferred for long-term keeping;
• the hardware and software environment in which the interpretability of documents is to be ensured is subject to constant change (including the interpretation tools);
• there is no guarantee of accurate interpretation of the document (file format) after decades.

With long-term keeping of ELD, the problem of interpretability and display of data in new information conditions arises, i.e. the ability to decode the stored ELD format after decades and show the document in its original form, for example, to display it on the screen or print it. The lack of a strategy in this matter may, after decades, lead to a situation in which part of the information cannot be decoded due to the lack (obsolescence) of means for interpreting stored data formats, as well as due to the loss of descriptions of stored formats in the case of closed formats for presenting ELD.

A partial solution to the problem may be the creation of converters that convert old ELD formats to new ones, but it should be borne in mind that the later the task of converting data is set, the more difficult it will be to solve. In addition, this approach implies constant high overhead costs for maintaining the interpretability of the ELD.

Judging from the review above, it is convenient to have a single long-term keeping format (or a set of such formats); many developed countries (the USA, Australia, Great Britain) are following this path. We can briefly formulate the requirements for such a format: simple, open and documented. Even such a set of requirements would significantly reduce the likelihood
of "uninterpretable" documents submitted for long-term keeping in this format in the future. For more information about the choice of long-term keeping formats, their features and the requirements for the development of a long-term keeping format, see the author's work [8].

However, there are still a number of issues that need to be taken into account in the long-term keeping of ELDs, that may not depend on the format, and that can lead to document distortion and confusion for the user:

• any business ELD can contain information about the changes made, comments, invisible text, and information about the company and the authors of the changes;
• there may be fields in the ELD whose information can change, which leads to the distortion of the entire ELD - for example, a field with the current date, which can change when printing the ELD, or other macros that can change the ELD when it is opened or printed;
• the ELD may contain hyperlinks to web pages or other related objects (drawings, diagrams, other documents); in this case, the document may be distorted when opened or printed due to the absence of the related objects in the right place. In addition to the ELD itself, the related documents must also be saved for long-term keeping.

These problems allow us to formulate further research directions:

• development of a mathematical model of the composition of information for long-term keeping of electronic data;
• development of a mathematical model for assessing the alienability of the ELD from the hardware and software storage environment;
• development of a mathematical model for assessing the interpretability of the ELD;
• development of long-term keeping methods to ensure the interpretability of the ELD.

The author plans to devote a series of articles to these areas of research.
4 Conclusion

The problem of the interpretability of electronic documents during long-term keeping is certainly relevant, as shown in the article on the basis of a brief review of approaches to solving it. However, even the existing approaches turn out to be difficult to implement, despite the fact that they are proposed in countries with highly developed information technology. This is due, first of all, to the absence of a statement of the task of ensuring the interpretability of electronic documents, which is carried out in a general form in this article. In addition, the article identifies the main difficulties associated with the use of various file formats and the problems of developing a long-term keeping format. The article also defines the directions for further research, namely: the development of a mathematical model of the composition of the information of an electronic document for long-term keeping; the development of a mathematical model for assessing the alienability of an electronic document from a software and hardware storage environment; the development of a mathematical model for assessing the interpretability of electronic documents;
development of long-term keeping methods to ensure the interpretability of electronic documents. In the course of further research, it is planned to prepare a series of articles to describe the solution to the problem of interpretability of electronic documents for long-term keeping.
References

1. Solovyev, A.V.: Long-term digital documents storage technology. In: Radionov, A.A., Karandaev, A.S. (eds.) RusAutoCon 2019. LNEE, vol. 641, pp. 901–911. Springer, Cham (2020). https://doi.org/10.1007/978-3-030-39225-3_97
2. Solovyev, A.V.: Authentication control algorithm for long-term keeping of digital data. IOP Conf. Ser. Mater. Sci. Eng. (MSE) 862(5), 052080 (2020). https://doi.org/10.1088/1757-899X/862/5/052080
3. Solovyev, A.V.: Digital media inventory algorithm for long-term digital keeping problem. IOP Conf. Ser. Mater. Sci. Eng. (MSE) 919(5), 052003 (2020). https://doi.org/10.1088/1757-899X/919/5/052003
4. Solovyev, A.V.: The problem of defining the concept of "Electronic Document for Long-Term Storage". In: Silhavy, R., Silhavy, P., Prokopova, Z. (eds.) Data Science and Algorithms in Systems, CoMeSySo 2022. LNNS, vol. 597, pp. 326–333. Springer, Cham (2022). https://doi.org/10.1007/978-3-031-21438-7_26
5. Open Government Partnership UK National Action Plan. London. SW1A 2AS (2013)
6. Pitman, N., Shipman, A.: A manager's guide to the long-term preservation of electronic documents, London. BIP 0089 BSI (2008)
7. Universal Electronic Records Management (ERM) Requirements. U.S. National Archives and Records Administration (2017). https://www.archives.gov/records-mgmt/policy/universalermrequirements. Accessed 24 Jan 2022
8. Solovyev, A.V.: Solving the problem of interpreting digital data for long-term keeping. Proc. ISA RAS 71(2), 43–49 (2021). https://doi.org/10.14357/20790279210206
9. Afanasyeva, L.P.: Automated archival technologies. Federal Agency for Education. State Educational Institution of Higher Professional Education Russian State University for the Humanities, vol. 114 (2005)
10. Miller, J.: NARA to suspend development of ERA starting in 2012. FederalNewsRadio.com (2012). http://www.federalnewsradio.com/?sid=2204570&nid=35. Accessed 25 Oct 2022
11. US National Archives Blog (2013). http://blogs.archives.gov/records-express/2013/11/01/opportunity-for-comment-transfer-guidance-bulletin/. Accessed 26 Oct 2022
12. National Archives Announces a New Model for the Preservation and Accessibility of Presidential Records. U.S. National Archives and Records Administration. https://www.archives.gov/press/press-releases/2017/nr17-54. Accessed 27 Oct 2022
13. Draft National Archives Strategic Plan. U.S. National Archives and Records Administration (2017). https://www.archives.gov/about/plans-reports/strategic-plan/draft-strategic-plan. Accessed 28 Oct 2022
14. Ryskov, O.I.: Records management in Australia. Domestic Arch. 2, 82 (2005)
15. Mitchenko, O.Yu.: Application of the international standard ISO 15489-2001 when creating a document management system in an organization. Secretarial Bus. 6 (2004)
16. Blog of the Dutch National Electronic Preservation Coalition. http://www.ncdd.nl/blog/?p=2860. Accessed 29 Oct 2022
17. Typical requirements for automated electronic document management systems. Specification MoReq. Office for Official Publications of the European Communities as INSAR Supplement VI, ISBN 92-894-1290-9
Prediction of Natural Processes Using a Deep Neural Network Model E. O. Yamashkina1 , S. A. Yamashkin2 , Olga V. Platonova1(B) , and S. M. Kovalenko1 1 MIREA – Russian Technological University, 78, Vernadskogo Prospect, Moscow 119454,
Russia [email protected] 2 National Research Mordovia State University, 68, Bolshevistskaya Street, 430005 Saransk, Russia
Abstract. The purpose of this article is to solve the problem of predicting the susceptibility of a territory to floods based on spatio-temporal data, including meteorological observations, digital elevation models, and Earth remote sensing data. The article proposes a neural network model for geosystem analysis of the territory, which makes it possible to approach the solution of the designated problem and is characterized by a large number of degrees of freedom, allowing the tool to be flexibly configured for the problem being solved and the data being analyzed. The proposed development is part of the knowledge base of the repository of deep machine learning models, which includes a dynamic visualization subsystem based on adaptive web interfaces with an interactive ability to directly edit the architecture and topology of neural network models. The model makes it possible to achieve an accuracy of 92% when solving the problem described above, and the integrated use of all layers made it possible to increase the accuracy by 8% relative to the indicators achieved using only traditional satellite imagery materials. Flexible tuning of the hyperparameters of the model provided an initial increase in accuracy of 6%. #CSOC1120.

Keywords: Neural Network · Natural Processes · Machine Learning
1 Introduction

At present, one of the necessary conditions for making effective decisions in the management of complex objects is the solution of the problem of timely forecasting of the state of the object. The main difficulties in solving it are related to the complexity of collecting the initial data, the insufficient availability and study of the mechanisms that determine the dynamics of changes in spatio-temporal processes in different parts of the area, and a number of other reasons.

The development of machine learning technologies, including those based on the use of deep neural network models, makes it possible to carry out high-precision automated monitoring of environmental management systems and to analyze the patterns of manifestation of natural processes and phenomena. Within the framework of this scientific problem, the solution of the problem of classifying the types of land use systems
and vegetation cover based on high-resolution remote sensing data, using methods and algorithms of deep machine learning, is of current importance.

The purpose of this article is to solve the problem of predicting the susceptibility of a territory to floods based on spatio-temporal data, including meteorological observations, digital elevation models, and Earth remote sensing data. The article proposes a neural network model for geosystem analysis of the territory, which makes it possible to approach the solution of the designated problem and is characterized by a large number of degrees of freedom, allowing the tool to be flexibly configured for the problem being solved and the data being analyzed. The proposed development is part of the knowledge base of the repository of deep machine learning models, which includes a dynamic visualization subsystem based on adaptive web interfaces with an interactive ability to directly edit the architecture and topology of neural network models.

The deployment of a repository of neural network models makes it possible not only to form a knowledge base for the analysis of spatial data, but also to solve the problem of selecting effective algorithms for solving problems in the digital economy. The ontological model is decomposed into domains of deep machine learning models, tasks to be solved, and data, which makes it possible to give a comprehensive definition of a formalized knowledge area: each stored neural network model is associated with a set of specific tasks and datasets.
2 Methodology and Methods of Research

The development and experimental substantiation of new geoinformation methods and algorithms for the automated analysis of spatial data (space images, digital models and maps, attributive spatial and temporal information) in order to analyze the state of lands and predict natural and man-made emergencies is an urgent challenge of our time. In the last two decades, the practical role of deep machine learning - methods and principles that use many levels of non-linear data processing to extract and transform features, analyze and classify patterns - has strengthened. The use of deep machine learning makes it possible to reduce the cost of ongoing research in the field of spatial data analysis due to the possibility of accurate interpolation and extrapolation of measurements. The key to solving these problems should be sought not only in improving the architectures of deep machine learning models, but also in developing methods and algorithms for the optimal enrichment of training datasets [1-3].

Based on this approach, we formulate the assumption that the accuracy of land classification based on remote sensing can be increased if the neural network analyzes not only the features of a particular system, but also the features of those areas with which it interacts. In order to test this hypothesis, it is necessary to prepare several data sets for training models: a basic one (consisting of marked samples of territories fixed using satellite imagery) and an extended one (supplemented with data containing information about neighboring and enclosing geosystems).

Let us move on to describing the methodology for analyzing spatial data using deep machine learning and the formation of a deep neural network model, which is part of the repository, capable of effectively analyzing these data. Justified from the point of view of a system analysis of user experience, the graphical web interfaces of the repository of
deep neural network models make it possible to select a relevant machine learning model for solving specific problems of spatial data analysis and to obtain systematized information about the required deep neural network model. The ontological model of the repository is decomposed into domains of deep machine learning models, design problems to be solved, and spatial data, which makes it possible to give a comprehensive definition of a formalized knowledge area: each stored neural network model is associated with a set of specific tasks and data sets. The system organized in this way enables a relevant search for an effective architectural solution and its fine tuning for solving design problems through the web-based graphical interface of the neural network repository.

To effectively solve the problem of predicting water levels during the spring flood, the following tasks were set and successively solved: revision and collection of data, preliminary preparation, development of a deep neural network model, analysis and assessment of the accuracy of the generated model, and creation of a hierarchical model of the resulting raster in order to visualize the results within adaptive web interfaces [3]. The study was carried out on the basis of the hypothesis that the problem of forecasting the area of flooding during the spring flood can be effectively solved by working out three reference points: 1) collection and preliminary preparation of heterogeneous spatial data on the territory (remote sensing data, meteorological observations, information about solar activity and permafrost, etc.), a minimal sketch of which is given after this list; 2) development of a deep neural network model for analyzing data on the geosystem model of the territory; 3) the introduction of geoportal systems for visualizing the results of machine analysis in order to provide information support for making informed management decisions in the field of organizing the sustainable development of territories and predicting natural processes.
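The following sketch (numpy only; the raster stack shape, the padding policy and the function name are assumptions for illustration) shows one way to build such an enriched sample, pairing each labeled pixel with the patch of its enclosing neighborhood:

import numpy as np

def make_sample(layers, row, col, half=15):
    # layers: (H, W, C) stack of co-registered rasters (spectral bands,
    # elevation, ground ice data, ...). Returns the point feature vector
    # and its (2*half+1) x (2*half+1) x C neighborhood (31 x 31 for half=15).
    point = layers[row, col]                                   # "basic" sample
    padded = np.pad(layers, ((half, half), (half, half), (0, 0)), mode="edge")
    hood = padded[row:row + 2 * half + 1, col:col + 2 * half + 1]
    return point, hood                                         # "extended" sample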
3 Results and Discussion

An analysis of the problem area showed that, in order to effectively predict the susceptibility of a territory to flooding in northern latitudes, it is advisable to form a geoinformation model based on the following data layers: a digital elevation model, data on permafrost and ground ice conditions, Earth remote sensing data, and meteorological and climatic observations [5, 6]. Data were collected on the test site under study for three years of observation. Because the test site is located in northern latitudes, data from the National Snow and Ice Data Center on permafrost and ground ice conditions were collected. ASTER Global Digital Elevation Map data was chosen as the digital elevation model.

Thus, it was decided to design a deep learning model that has three inputs. The first input receives data on the analyzed point of the territory during low water periods - its spectral characteristics and data on height and ground ice content. The second input receives a three-dimensional matrix that characterizes, under the same conditions, the neighborhood with a side of 31 pixels at a spatial resolution of 20 m per pixel. The geoinformation model of the enclosing geosystem is built on the basis of the layers presented above. Finally, time series of weather observations are fed to the third input of the model. All data were normalized before training the deep neural network model; in total, 20,000 training samples were prepared. The general structure of the model with decomposition into blocks of the upper level of the hierarchy is shown in Fig. 1.

Fig. 1. Structure of a deep neural network model for forecasting natural processes.

Next, we describe the model in detail. The first module solves the problem of analyzing data about the point under study, and the second one makes the resulting decision after combining the outputs of the previous modules. The number of densely coupled layers of the multilayer perceptron and their power are chosen according to the principle of minimizing these parameters while maintaining sufficient classification accuracy. To extract features from data about the neighborhood, the Conv Unit block is introduced, which performs the extraction of hierarchical features of various levels from the analyzed matrix. The batch normalization layer made it possible to achieve regularization and stability of the model. A rectified linear unit was chosen to perform the activation. The feature extraction block is completed by the subsampling layer, which reduces the size of the resulting representations by taking the maximum. When training the model, the Root Mean Square Propagation algorithm based on the stochastic gradient descent method is used as an optimizer, and cross entropy is used as a loss function.

The convolution operation has important properties: it preserves the structure and geometry of the input and is characterized by sparseness and repeated use of the same weights. The operation of depthwise separable convolution works not only with spatial dimensions but also with depth dimensions, for example, image channels, and, unlike classical convolution, involves the use of separate convolution kernels: two convolutions are successively applied to the original tensor, a depthwise one and a pointwise one. Note that when solving the classification test problems described below, split-testing of models with classical convolution layers and with depthwise separable convolution was carried out, which confirmed the effectiveness of the second approach. The next layer of the feature extraction block, the effectiveness of which was tested experimentally, is the batch normalization layer, which makes it possible to achieve regularization and stability of the model [7]. The ReLU function was chosen to perform the activation operation [8]. The feature extraction block is completed by a subsampling layer that uses the maximum operation to reduce the size of the resulting representations and has external outputs. Experiments have shown that the use of the maximum operation gave the best result. Note that the number of output filters in the convolution and the size of the convolution kernel are proposed to be chosen according to the principle of minimizing these values while maintaining an acceptable classification accuracy. With each new step, which consists in extracting features of the next level, it is recommended to increase the number of output filters of the depthwise separable convolution.

The next component block of the model is the feature fusion module. It accepts as input the features of level N, extracted from the image of the classified area and the images of the geosystems associated with it. The merge modules of the second and subsequent levels also take the output of the previous merge module as input. All input data are concatenated into a single tensor and processed using the feature extraction pipeline.
It consists of layers of depthwise separable convolution, batch normalization, activation, and subsampling. The output of the last feature fusion module is converted into a vector and fed to the input of the multilayer perceptron. The number of densely coupled layers of the multilayer perceptron and their power are chosen according to the principle of minimizing these parameters while maintaining sufficient classification accuracy. In addition, to solve the problem of overfitting, it is recommended to apply batch normalization and dropout to the outputs of the densely coupled layer. The ReLU function was chosen to activate the outputs of the input and hidden layers; for the output layer, Sigmoid is used for binary classification and Softmax for multiclass classification. When training a classifier, the RMSProp algorithm based on the stochastic gradient descent method is used as an optimizer, and cross entropy is used as a loss function. The fine tuning of the model is influenced by the features of the specific classification problem being solved. The software implementation of the model is written in Python; the initial data were prepared using the SNAP 7.0 software package.
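The exact configuration of the network is not published in the article, so the following Keras sketch is only an assumption-laden illustration of the described architecture: three inputs, a feature-extraction pipeline of depthwise separable convolution, batch normalization, ReLU and max-pooling subsampling, concatenation of the branches, a small multilayer perceptron with batch normalization and dropout, a sigmoid output for the binary flood-susceptibility decision, and RMSProp with cross entropy for training. All layer sizes, channel counts, the weather-series length and the LSTM encoder for the time series are our assumptions.

import tensorflow as tf
from tensorflow.keras import layers, models

def conv_unit(x, filters):
    # feature extraction block: depthwise separable convolution,
    # batch normalization, ReLU activation and max-pooling subsampling
    x = layers.SeparableConv2D(filters, kernel_size=3, padding="same")(x)
    x = layers.BatchNormalization()(x)
    x = layers.Activation("relu")(x)
    return layers.MaxPooling2D(pool_size=2)(x)

point_in = layers.Input(shape=(8,), name="point_features")      # spectral bands,
                                                                # height, ground ice
hood_in = layers.Input(shape=(31, 31, 8), name="neighborhood")  # 31 x 31 matrix
meteo_in = layers.Input(shape=(365, 4), name="weather_series")  # meteo time series

x = conv_unit(hood_in, 32)        # the number of filters grows with feature level
x = conv_unit(x, 64)
x = layers.Flatten()(x)

p = layers.Dense(16, activation="relu")(point_in)
m = layers.LSTM(32)(meteo_in)     # one plausible encoder for the weather series

h = layers.Concatenate()([p, x, m])
h = layers.Dense(64, activation="relu")(h)
h = layers.BatchNormalization()(h)
h = layers.Dropout(0.3)(h)
out = layers.Dense(1, activation="sigmoid", name="flood_susceptibility")(h)

model = models.Model([point_in, hood_in, meteo_in], out)
model.compile(optimizer=tf.keras.optimizers.RMSprop(),
              loss="binary_crossentropy", metrics=["accuracy"])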
4 Conclusions

The designed deep model based on the geosystem approach is a functional element that accepts data on the territory and its surroundings during the low water period and predicts the susceptibility of the territory to flooding under specific historical meteorological observations. The model makes it possible to achieve an accuracy of 92% when solving the problem described above, and the integrated use of all layers made it possible to increase the accuracy by 8% relative to the indicators achieved using only traditional satellite imagery materials. Flexible tuning of the hyperparameters of the model provided an initial increase in accuracy of 6%.

Based on the foregoing, we conclude that the accuracy of land classification based on remote sensing data can be increased if the classifying model takes into account and analyzes not only the properties of a particular territory, but also the characteristic features of the geosystems with which it interacts and, in particular, in which it is included. From the standpoint of the geosystem approach, the properties of the territory are significantly influenced by the enclosing geosystem (neighborhood). It is expedient to analyze geospatial data using deep machine learning to predict the susceptibility of a territory to flooding.

The developed neural network model, which is part of the knowledge base of the repository of deep machine learning models, makes it possible to improve the accuracy of the analysis and classification of spatial data by using a geosystem approach that involves analyzing the genetic homogeneity of territorially adjacent formations of various scales and hierarchical levels. Its introduction into the repository will make it possible not only to form a knowledge base of models for analyzing spatial data, but also to solve the problem of selecting effective models for solving problems in the digital economy. The advantage of the proposed model is its high number of degrees of freedom, which allows the model to be flexibly configured for the problem being solved.
Acknowledgments. This work was supported by Grant of the President of the Russian Federation under Project no. MK-199.2021.1.6.
References

1. Yamashkina, E.O., Kovalenko, S.M., Platonova, O.V.: Development of repository of deep neural networks for the analysis of geospatial data. IOP Conf. Ser. Mater. Sci. Eng. 1047(1), 012124 (2021)
2. Weiss, M., Jacob, F., Duveiller, G.: Remote sensing for agricultural applications: a meta-review. Remote Sens. Environ. 236, 111402 (2020)
3. Sigov, A.S., Tsvetkov, V.Ya., Rogov, I.E.: Methods for assessing the complexity of testing in education. Russ. Technol. J. 9(6), 64–72 (2021)
4. Yamashkin, S., Radovanović, M., Yamashkin, A., Vuković, D.: Improving the efficiency of the ERS data analysis techniques by taking into account the neighborhood descriptors. Data 3(2), 1–16 (2018)
5. Yao, Z., Cao, Y., Zheng, S., Huang, G.: Cross-iteration batch normalization. In: Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pp. 12326–12335. IEEE, Nashville, TN, USA (2021). https://doi.org/10.1109/CVPR46437.2021.01215
6. Jung, W., Jung, D., Kim, B., Lee, S., Rhee, W., Ahn, J.H.: Restructuring batch normalization to accelerate CNN training. In: Proceedings of Machine Learning and Systems, pp. 14–26, Palo Alto, CA, USA (2019)
7. Yamashkin, S.A., Radovanovic, M.M., Yamashkin, A.A., Barmin, A.N., Zanozin, V.V., Petrovic, M.D.: Problems of designing geoportal interfaces. GeoJournal Tourism Geosites 24(1), 88–101 (2019)
8. Chen, Y., Dai, X., Liu, M., Chen, D., Yuan, L., Liu, Z.: Dynamic ReLU. In: Vedaldi, A., Bischof, H., Brox, T., Frahm, J.-M. (eds.) ECCV 2020. LNCS, vol. 12364, pp. 351–367. Springer, Cham (2020). https://doi.org/10.1007/978-3-030-58529-7_21
Technological Approaches to the Increased Accuracy Photoplethysmographic Measuring System Design Tatiana I. Murashkina1 , Sergei I. Gerashchenko1(B) , Elena A. Badeeva1 , Mikhail S. Gerashchenko1 , Ekaterina Y. Plotnikova2 , Yuri A. Vasilev3 , Ivan I. Pavluchenco3 , and Armenac V. Arutynov3 1 Penza State University, Penza 440000, Russian Federation
[email protected]
2 Maternity Hospital in Krasnodar, Krasnodar, Russian Federation 3 Kuban State Medical University, Krasnodar 350063, Russian Federation
Abstract. A promising technology for determining the parameters of the pulse wave characterizing the state of the human cardiovascular system is photoplethysmography. The photoplethysmographic measuring system makes it possible to estimate the pulse wave propagation velocity and, accordingly, vascular stiffness by converting the optical signals of LEDs that have passed through certain skin areas. In practice, the technological spread of the radiation wavelength in the production of LEDs can reach ±15 nm. Therefore, there is a need to sort LEDs by radiation wavelength, which increases the cost of the sensor. The low accuracy of measuring blood flow parameters is due to the high methodological error of the known photoplethysmographic measurement methods, which do not take into account the exact path of the light flux through the layers of the area on which the sensor is installed. At the same time, the mutual arrangement of the LEDs and photodiodes is irrational, which reduces the accuracy of measurements. The authors adapted technologically proven optoelectronic solutions for measuring displacement and liquid level to the determination of the pulse wave propagation velocity. In the developed measuring system, two LEDs are installed (one in the red, the second in the near-infrared wavelength range) at the calculated angle and at the calculated distance relative to the radiation receiver. The basic mathematical relations linking the structural and technological parameters of the optoelectronic system are determined. A method of linearizing the conversion function has been developed, providing an increase in the sensitivity of the conversion and based on a constructive method of reducing the luminous flux intensity at the beginning of the measurement range, when the vessel is not expanded, and increasing it at the end of the measurement range, when the vessel is expanded. The proposed photoplethysmographic measuring system, which includes a new optoelectronic system, provides non-invasive measurement of the pulse wave propagation velocity with high accuracy (the measurement error is reduced to 5%); it is simple and technologically advanced in manufacturing, does not require complex technological, adjustment and measurement operations in the manufacture of the optoelectronic part of the measuring system, and has a low-cost component base: red and infrared LEDs and photodiodes. #CSOC1120.
Keywords: Photoplethysmographic Measuring System · Pulse Wave Propagation Velocity · Plethysmogram · Blood Vessel Filling · Optoelectronic Photodetector System · Radiation Source · Non-Invasive Assessment
1 Introduction

Noninvasive assessment of the functional state of the cardiovascular system (CVS) is relevant due to the large number of cardiovascular diseases and the frequency and severity of their complications. It is known that the CVS functional state is determined not only by the work of the heart and blood pressure, but also by the condition of the vessels - the stiffness (elasticity) of their walls, vascular resistance, etc. [1]. The pulse wave propagation velocity (PWV) also characterizes the CVS state [2]. This parameter is recognized as an important prognostic parameter of cardiovascular complications [3]. The higher the stiffness of the vessels, the faster the pulse wave spreads through them. In accordance with the American Heart Association recommendations, PWV is recommended as the main parameter for assessing the stiffness of arterial walls [4, 5]. PWV should be evaluated both in large main vessels (for example, the aorta) and in smaller peripheral vessels along the blood flow over the patient's entire body, since PWV differs between them (about 4…6 m/s in the aorta, 8…12 m/s in the radial artery) [6].

The release of blood from the ventricle during the contraction of the heart muscle (systole) creates a wave of increased pressure that spreads throughout the body from the aorta to the smallest vessels (capillaries). Spreading through elastic arteries and arterioles, the pulse wave causes a short-term expansion of the vascular wall [7], which is recorded as the first peak of the plethysmogram (Fig. 1) [8].

Fig. 1. The pulse wave shape.

The second peak of the pulse wave occurs due to the reflection of the wave from the places where large vessels branch. The intensity of this reflection depends on the vascular tone at the branching points, and the reflection time directly depends on the PWV. Knowing the pulse wave shape, it is possible to assess the functional state and structural changes of the peripheral vascular bed.

The "single-point" and "two-point" methods for determining PWV, and measuring instruments that implement them, are known [9]. The lowest-cost and simplest in terms of hardware implementation are the "single-point" methods and measuring instruments. They are based on the morphological analysis of the shape of the pressure pulse wave (PW) according to the plethysmogram recorded at a selected "point" of the human body [10] and determine the PWV by the propagation time in the vessels of the reflected PW, which is displayed on the plethysmogram as a second peak following the first peak of the direct wave, immediately after the dip (incisure) characterizing the moment of aortic valve closure [11].

To calculate the pulse wave indexes it is necessary to set markers indicating the characteristic points of the pulse curve (Fig. 2): B1 - the beginning of the cardiac cycle, B2 - the moment of maximum expansion of the vessel during the expulsion phase, B3 - the point that corresponds to the protodiastolic period, B4 - the beginning of the diastole, B5 - the end of the cardiac cycle. The contour analysis of the pulse curve is carried out taking into account the data in Table 1.

The most common "single-point" method is used to determine the pulse wave velocity in the aorta - the aortic PWV. The PW of blood pressure spreading in the aorta after cardiac ejection undergoes reflection from the branches of the aorta, mainly from the lower branch to the iliac arteries. As a result, two characteristic peaks (ridges) appear on the plethysmogram: the peak of the direct wave 1 and the peak of the reflected wave 2, separated by the incisure 3 (see Fig. 1). The delay time T of the peak of the reflected wave relative to the arrival time of the peak of the direct wave characterizes the time of propagation of the PW along the aorta from the heart to the point of branching of the aorta and back to the heart, to the place of registration of the plethysmogram on the aorta. This "pattern" spreads further along the smaller arteries, up to the smallest arterioles. Based on the plethysmogram registered by the sensor, the PWV in the aorta is determined by formula (1):
2Lk , T
(1)
where k is the scale factor for normalizing the obtained value, L is the length of the aortic trunk, and T is the delay time of the reflected-wave peak relative to the arrival time of the direct-wave peak on the plethysmogram [9]. Blood viscosity, the stiffness of the vessel walls, the presence of pathologies and other factors have a "blurring" effect on the PW envelope shape: its maxima are shifted, and the delay time T of the reflected-wave peak changes [12]. "Two-point" methods are more accurate.
Fig. 2. Application of two LEDs (red and near-infrared) and a photodiode in photoplethysmographic sensors.
In these methods, a photoplethysmogram is taken at two different "points" placed at a known distance from each other on the body surface; the delay T of the PW propagation between these two points is determined, and, taking into account the known distance between the "points", the PWV is computed as the distance divided by T [13]. In one variant, the time delay T is determined between the ECG pulse and the arrival of the PW peak recorded in the artery by a pressure sensor. This is the most accurate of all the considered ways to determine PWV, but it is expensive and invasive. In addition, it does not allow a complete evaluation of the integral PWV for assessing the stiffness of all vessels, since it does not involve the small arteries at the periphery of the limb, and it does not measure blood pressure simultaneously with PWV.

The most accurate "two-point" method is the one in which PWV is determined by the time delay between pulse registrations on the carotid and femoral arteries [9, 14]. But this method and the devices implementing it have serious drawbacks. First, the use of an ECG module makes such devices complex, bulky and expensive. Second, it requires special skills, for instance to position the ultrasound sensor correctly on the carotid and femoral arteries.

The most promising methods are those using optical photoplethysmographic (PPG) sensors for pulse wave registration; these sensors register changes in the light transmission/reflection of tissues depending on the blood filling of the large vessels (arteries and arterioles) during the cardiac cycle. During the ejection of blood by the heart, the blood flow in the tissues increases, causing vasodilation; as a result, the tissues change their absorbing, reflecting and refractive properties, which changes the pulse wave shape.
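As an illustration of the "two-point" principle, the following minimal Python sketch estimates PWV from two synchronously recorded, uniformly sampled PPG channels; the function name and the cross-correlation estimate of the transit time are illustrative assumptions, not part of any device described here.

```python
import numpy as np

def pwv_two_point(ppg_proximal, ppg_distal, fs, distance_m):
    """Estimate PWV from two PPG channels: PWV = distance / T.

    The transit time T is taken as the lag (in samples) that maximizes
    the cross-correlation of the two detrended waveforms.
    """
    a = ppg_proximal - np.mean(ppg_proximal)
    b = ppg_distal - np.mean(ppg_distal)
    xcorr = np.correlate(b, a, mode="full")
    lag = int(np.argmax(xcorr)) - (len(a) - 1)  # samples by which the distal signal trails
    if lag <= 0:
        raise ValueError("distal waveform must lag the proximal one")
    transit_time = lag / fs                      # seconds
    return distance_m / transit_time             # m/s
```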
Table 1. Plethysmogram indicators.
| Indicator / Description / Dimension | Norm |
|---|---|
| Stiffness index (SI): a parameter that clearly correlates with the pulse wave propagation velocity, a marker of arterial stiffness/rigidity, m/s | 5–9 |
| Reflection index (RI): reflects mainly the tone of arterioles and small vessels; indirectly indicates the presence of atherosclerotic deposits (increased reflections), % | 40–70 |
| Pulse wave amplitude (anacrotic phase amplitude) PWA | – |
| Dicrotic wave amplitude DWA | 0.5 × PWA |
| Incisure height (IH) | (2/3) × PWA |
| Dicrotic wave index DWI, % | 50–70 |
| Pulse wave anacrotic phase duration APD, s | – |
| Pulse wave dicrotic phase duration DPD, s | – |
| Expulsion phase duration (EPD): a parameter reflecting diastolic activity, s | – |
| Pulse wave duration PWD, s | 0.7–1.1 |
| Ascending wave index (AWI): reflects the filling phase in the systolic period of the cardiac cycle; the ratio of the duration of the ascending segment of the anacrotic wave to the total pulse wave duration, % | 15–30 |
| Filling time (FT): the interval from the beginning of the pulse wave to the top of the anacrotic wave, s | 0.06–0.2 |
| Cardiac cycle systolic phase duration, s | 0.35–0.55 |
| Cardiac cycle diastolic phase duration, s | 0.4–0.6 |
| Pulse wave reflection time WRT: corresponds to the time of myocardial relaxation in the protodiastolic phase, s | 0.2–0.4 |
| Heart rate, beats/min | 55–85 |
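To make the "single-point" contour analysis concrete, here is a minimal Python sketch that detects the direct- and reflected-wave peaks on a single cardiac cycle and applies formula (1); the peak-prominence threshold and the assumption of exactly one cycle per input are illustrative choices, not prescriptions from the paper.

```python
import numpy as np
from scipy.signal import find_peaks

def aortic_pwv(cycle, fs, aorta_length_m, k=1.0):
    """Single-point PWV estimate via formula (1): PWV_a = 2*L*k / T.

    `cycle` is one cardiac cycle of a plethysmogram sampled at `fs` Hz;
    T is the delay between the direct-wave and reflected-wave peaks.
    """
    peaks, _ = find_peaks(cycle, prominence=0.1 * np.ptp(cycle))
    if len(peaks) < 2:
        raise ValueError("direct and reflected peaks were not both detected")
    T = (peaks[1] - peaks[0]) / fs          # reflected-wave delay, s
    return 2.0 * aorta_length_m * k / T
```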
To register the change in absorption, two LEDs are usually installed in PPG sensors (one in the red and one in the near-infrared wavelength range) in order to record not only the change in light absorption by the tissues, but also the difference between the absorption of light from the red and the IR source (Fig. 2) [15–17].

Problem 1. The steep slope of the spectral absorption characteristic in the red and infrared region requires a small spread of the central wavelength of the LED radiation: for the red range, the wavelength should be within (660 ± 5) nm, and for the infrared, (940 ± 10) nm. In practice, the technological spread of the radiation wavelength in LED production can reach ±15 nm. Therefore, LEDs have to be rejected by radiation wavelength, which increases the sensor cost [17]. In [18], this problem is solved by introducing a calibration channel, which significantly complicates the sensor circuit design and the technological process of its manufacture.
Problem 2. The low accuracy of measuring blood flow parameters is due to the high methodological error of the known photoplethysmographic measurement methods, which do not take into account the exact path of the light flux through the layers of the area on which the sensor is installed. At the same time, the mutual arrangement of the LEDs and photodiodes is irrational, which reduces the measurement accuracy. Attempts to adjust the relative positions experimentally lead to a significant decrease in manufacturability, and sometimes to the complete inoperability of such sensors.

The aim of the work is the development of technological approaches to the design of a photoplethysmographic measuring system of increased accuracy, based on structurally and technologically proven technical solutions of optoelectronic measuring instruments.
2 Methods

To improve the accuracy and reliability of measurements, the basic laws of geometric optics are taken into account: reflection and refraction at the boundaries of the media that form the separate layers of the measuring area on the patient's body, as well as scattering on inhomogeneities (Fig. 3) [19]. To select the radiation source (its spectral characteristics), the transparency of the individual layers is taken into account; for example, the greatest transparency of the skin lies in the range of 650–1200 nm [20]. When measuring blood filling, it is necessary to measure the distance to the boundary between the media formed by the surface layers of the human body (Fig. 4); therefore, it is proposed to adapt the technologically proven solutions of optoelectronic displacement-measuring instruments described in [21, 22]. The optical displacement sensor described in [21] uses a photodetector located on the optical axis of the sensor and four LEDs, each installed in the focus of its collecting lens. To increase the accuracy of measuring the displacement of the reflecting surface, the optical axes of the LEDs are located at a calculated angle α to the optical axis of the photodetector. The drawback of this sensor is that its design parameters are calculated for measuring the displacement of a reflecting surface in a homogeneous medium and, accordingly, are not suited to measuring the blood filling of vessels. In this case, the sensor would have a low sensitivity of optical signal conversion and, accordingly, low measurement accuracy. The closest in terms of circuit design and technological realization to the proposed photoplethysmographic measuring system is an optoelectronic system for measuring liquid level, whose principle of operation is described in [22–24].
Fig. 3. Skin scheme 1.
Fig. 4. Skin scheme 2.
3 Theoretical Provisions

The authors adapted this system to achieve the stated aim, since in this system, as in a blood vessel, the distance to the boundary between the media, one of which is a liquid, changes (Fig. 5).
The optoelectronic system for measuring the blood filling of a vessel contains a photodetector and radiation sources (LEDs) located symmetrically relative to the optical axis of the photodetector at an angle α. Each LED stands in the focus of a collecting lens to form a parallel light beam. The photodetector is located at a distance from its collecting lens smaller than the focal length of the lens, so that the light spot formed is commensurate in size with the receiving area of the photodetector. As in the well-known PPG sensors, red and IR LEDs are used.

The optoelectronic system for measuring the blood filling of a vessel works as follows. Beams of parallel rays pass at an angle α through the first medium (skin) with refractive index $n_1$ along the path $l_1 = h/\cos\alpha$, where h is the distance from the radiating surface of the lens to the boundary between the two media ("skin - blood vessel"). Refracted at this boundary, the rays pass at an angle $\beta = \arcsin\left(\frac{n_1}{n_2}\sin\alpha\right)$ through the second medium (blood) with refractive index $n_2$ along the path $l_2 = X/\cos\beta$ to the lower border of the blood vessel, where X is the distance between the upper and lower boundaries of the blood vessel at the time of measurement. The reflected light flux passes back to the photodetector. In this case, the power of the optical radiation is attenuated in accordance with Bouguer's law [25]:

$$P = \rho_0 \rho_1 \rho_2 \tau_0^2 \tau_1^2 P_0, \qquad (2)$$
where $P_0$, $P$ are the optical radiation powers at the beginning and at the end of the path, respectively; $\rho_0$ is the reflection coefficient of the outer surface of the skin; $\rho_1$ is the reflection coefficient of the upper surface of the blood vessel; $\rho_2$ is the reflection coefficient of the lower surface of the blood vessel; $\tau_0$, $\tau_1$ are the transparency coefficients of the first medium (skin) and the second medium (blood), respectively:

$$\tau_0 = \exp(-\chi_1 l_1), \qquad (3)$$

$$\tau_1 = \exp(-\chi_2 l_2), \qquad (4)$$
where $\chi_1$, $\chi_2$ are the extinction coefficients (optical radiation loss coefficients due to light absorption and scattering) of the first and second media, respectively; $l_1$, $l_2$ are the paths traversed by the light flux in the first and second media, respectively. Taking into account dependencies (3) and (4), expression (2) is rewritten as:

$$P = \rho \exp\left(-2(\chi_1 l_1 + \chi_2 l_2)\right) P_0, \qquad (5)$$
where $\rho = \rho_0 \rho_1 \rho_2$. Thus, the dependence $P = f(X)$ is nonlinear (exponential). In addition, if the losses in the media are significant, the sensitivity of the conversion will be small.
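A minimal Python sketch of formulas (2)–(5) follows; the function name and the sample parameter set are illustrative assumptions (any realistic coefficients would have to come from measurements or the cited literature).

```python
import math

def received_power(P0, alpha_deg, h, X, n1, n2, rho, chi1, chi2):
    """Received optical power per formula (5):
    P = rho * exp(-2*(chi1*l1 + chi2*l2)) * P0, with
    l1 = h / cos(alpha), l2 = X / cos(beta),
    beta = arcsin((n1/n2) * sin(alpha))  (Snell refraction).
    """
    alpha = math.radians(alpha_deg)
    beta = math.asin((n1 / n2) * math.sin(alpha))  # requires (n1/n2)*sin(alpha) <= 1
    l1 = h / math.cos(alpha)
    l2 = X / math.cos(beta)
    return rho * math.exp(-2.0 * (chi1 * l1 + chi2 * l2)) * P0
```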
Fig. 5. Graphical constructions explaining the method of measuring the change in the size of a blood vessel.
4 Results

To linearize the dependence $P = f(X)$ and to increase the sensitivity of the conversion, a coefficient $K(X)$ is introduced into expression (5):

$$P = K(X)\,\rho \exp\left(-2(\chi_1 l_1 + \chi_2 l_2)\right) P_0. \qquad (6)$$
In this case, it is necessary that the nonlinearity of the first multiplier $\exp(-2(\chi_1 l_1 + \chi_2 l_2))$ be compensated by the nonlinearity of the second multiplier $K(X)$. This can be achieved by changing the steepness of the dependence $K = f(X)$ through the choice of the parameters α and d at the design stage, where d is the distance between the optical axes of the radiation sources and the receiver (Fig. 6). A method of linearizing the conversion output function has been developed that increases the sensitivity of the conversion; it is based on a constructive technique of reducing the luminous flux intensity at the beginning of the measurement range (when the vessel is not expanded) and increasing it at the end of the measurement range (when the vessel is expanded). This is achieved by shifting the light spot reflected from the boundary between the media with different refractive indexes relative to the photosensitive surface of the radiation receiver (RR) (see Fig. 6).
To do this, the light rays from the radiation source with radius $r_{II}$ are directed toward the blood vessel in such a way that, when the vessel is narrowed, they converge at a point lying on the optical axis at a calculated distance L from the radiation receiver. With such a course of rays, the light spot in the receiving plane (at the end of the path), with area $S_{II}$ equal to the area $S_{OTR}$ of the reflected spot, moves relative to the receiving area of the photodetector in the Z direction. At the same time, the area of the radiation receiver illuminated by the reflected light flux changes, that is, $S_{OSV} = f(X)$. Then

$$K(X) = \frac{S_{OSV}}{S_{PI}}.$$

Accordingly:

$$\frac{P}{P_0} = \rho \exp\left(-2(\chi_1 l_1 + \chi_2 l_2)\right) \frac{S_{OSV}}{S_{PI}}. \qquad (7)$$
The photosensitive area of the receiving optical system can be rectangular (square) or round. If it is rectangular (square), then $S_{PI} = b \times h$, where b and h are the length and width of the rectangle (see Fig. 6). If the area is round, then $S_{PI} = \pi (R_{PI})^2$, where $R_{PI}$ is the photodetector radius. In the first case, in accordance with Fig. 6, $S_{OSV}$ is the area formed by the mutual intersection of a circle of radius $r_{II}$ and a chord AB of length a; when $Z < r_{II}$:

$$S_{OSV} = \pi r_{II}^2 - \frac{r_{II}^2}{2}\left[\frac{\pi}{90}\arcsin\frac{a}{2 r_{II}} - \sin\left(2\arcsin\frac{a}{2 r_{II}}\right)\right], \qquad (8)$$
when $Z > r_{II}$:

$$S_{OSV} = \frac{r_{II}^2}{2}\left[\frac{\pi}{90}\arcsin\frac{a}{2 r_{II}} - \sin\left(2\arcsin\frac{a}{2 r_{II}}\right)\right], \qquad (9)$$
where

$$a = AB = 2\sqrt{r_{II}^2 - (Z - r_{II})^2} = 2\sqrt{2 r_{II} Z - Z^2}, \qquad (10)$$
and $Z = 0 \ldots 2 r_{II}$. Taking into account dependencies (8) and (9), expression (7) is rewritten: when $Z < r_{II}$,

$$\frac{P}{P_0} = \rho \exp\left(-2(\chi_1 l_1 + \chi_2 l_2)\right) \frac{1}{bh}\left[\pi r_{II}^2 - \frac{r_{II}^2}{2}\left(\frac{\pi}{90}\arcsin\frac{\sqrt{2 r_{II} Z - Z^2}}{r_{II}} - \sin\left(2\arcsin\frac{\sqrt{2 r_{II} Z - Z^2}}{r_{II}}\right)\right)\right], \qquad (11)$$
Fig. 6. Geometric constructions for determining the structural and technological parameters of an optoelectronic blood filling measurement system.
when $Z > r_{II}$,

$$\frac{P}{P_0} = \rho \exp\left(-2(\chi_1 l_1 + \chi_2 l_2)\right) \frac{1}{bh}\cdot\frac{r_{II}^2}{2}\left(\frac{\pi}{90}\arcsin\frac{\sqrt{2 r_{II} Z - Z^2}}{r_{II}} - \sin\left(2\arcsin\frac{\sqrt{2 r_{II} Z - Z^2}}{r_{II}}\right)\right). \qquad (12)$$
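A minimal Python sketch of the geometry in formulas (8)–(12) follows. Note that the factor $(\pi/90)\arcsin(\cdot)$ in the text assumes arcsine in degrees and equals $2\arcsin(\cdot)$ in radians, which is what the code uses; the function names are illustrative assumptions.

```python
import math

def illuminated_area(Z, r):
    """Illuminated receiver area S_OSV(Z), formulas (8)-(10).

    phi = arcsin(a / (2r)) with chord a = 2*sqrt(2*r*Z - Z**2);
    the circular-segment area is (r**2 / 2) * (2*phi - sin(2*phi)).
    """
    if not 0.0 <= Z <= 2.0 * r:
        raise ValueError("Z must lie in [0, 2*r]")
    a = 2.0 * math.sqrt(2.0 * r * Z - Z * Z)       # chord AB, formula (10)
    phi = math.asin(min(a / (2.0 * r), 1.0))
    segment = (r * r / 2.0) * (2.0 * phi - math.sin(2.0 * phi))
    return math.pi * r * r - segment if Z < r else segment   # (8) / (9)

def conversion_gain(Z, r, b, h):
    """K(X) = S_OSV / S_PI for a rectangular receiver of size b x h."""
    return illuminated_area(Z, r) / (b * h)
```

As a quick sanity check, at Z = r the two branches coincide at half the spot area, $\pi r_{II}^2/2$, so the gain varies continuously over the range.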
If the optical signal intensity at the output of the radiation source is taken as 100%, the radiation receiver will receive a luminous flux with an intensity of 10…15%. For instance, if the power of an IR LED is 100 mW, a light flux of 10…15 mW will reach the photosensitive area, which is quite sufficient for proper functioning of the entire optoelectronic measuring system.
5 Conclusion

The proposed photoplethysmographic measuring system, including a new optoelectronic system, provides noninvasive measurement of PWV with high accuracy (the measurement error is reduced to 5%); it is simple and technologically advanced in manufacture; it does not require complex technological, adjustment or measuring operations in the manufacture of the optoelectronic part of the measuring system; and it uses a low-cost component base: red and infrared LEDs and photodiodes.

Acknowledgements. The research is carried out with the financial support of the Russian Science Foundation and the Kuban Scientific Foundation within the framework of scientific project № 22-15-20069.
References

1. Karo, K., et al.: Mechanics of Blood Circulation. Mir, Moscow (1981)
2. Herman, I.: Physics of the Human Body. Publishing House "Intellect", Dolgoprudny (2011)
3. Boutouyrie, P., et al.: Aortic stiffness is an independent predictor of primary coronary events in hypertensive patients: a longitudinal study. Hypertension 39(1), 10–15 (2002). https://doi.org/10.1161/hy0102.099031
4. Townsend, R.R., et al.: American Heart Association Council on Hypertension. Recommendations for improving and standardizing vascular research on arterial stiffness. A scientific statement from the American Heart Association. J. Hypertension 66(3), 698–722 (2015)
5. Tkachenko, Y., et al.: Adaptation of pulse wave velocity measurement technique for outpatient screening. Clin. Pract. 10(1), 48–56 (2019)
6. Sokolov, A.A., Soldatenkov, M.V.: Increase in arterial resistance as a determining factor in the development of isolated systolic arterial hypertension. Arterial Hypertension 13(1), 7–10 (2007)
7. Usanov, D.A., Skripal, A.V., Vagarin, A., Rytik, A.P.: Methods and Equipment for Diagnosing the State of the Cardiovascular System According to the Characteristics of the Pulse Wave. Saratov University Publishing House, Saratov (2009)
8. Gerashchenko, M.S., Markuleva, M.V., Semenov, A.D., Gerashchenko, S.I.: The brachial artery oscillometry process modeling with the pneumo and hydro-cuff technology applying. In: Proceedings of 2021 IEEE Ural-Siberian Conference on Computational Technologies in Cognitive Science, Genomics and Biomedicine (CSGB), pp. 200–204. IEEE, Novosibirsk-Yekaterinburg, Russia (2021). https://doi.org/10.1109/CSGB53040.2021.9496041
9. Tkachenko, Y., et al.: Adaptation of the pulse wave measurement technique for screening examinations in outpatient practice. Clin. Pract. 10(1), 48–56 (2019). https://doi.org/10.17816/clinpract10148-56
10. Fedotov, A.A.: Methods of morphological analysis of the pulse wave. Med. Technics 4, 32–35 (2019)
11. Parfenov, A.S.: Express diagnostics of cardiovascular diseases. World Meas. 6, 74–82 (2008)
12. Vasyuk, Y., Baranov, A.A.: Agreed opinion of Russian experts on the assessment of arterial stiffness in clinical practice. Cardiovasc. Ther. Prev. 15(2), 4–19 (2016)
13. Rogatkin, D.A., Lapitan, D.G.: Method and device for measuring pulse wave velocity when measuring blood pressure by oscillometric method. Pat. of the Russian Federation No 2750745 (2020)
14. Van Bortel, L.M., Laurent, S., Boutouyrie, P., et al.: Expert consensus document on the measurement of aortic stiffness in daily practice using carotid-femoral pulse wave velocity. J. Hypertens. 30(3), 445–448 (2012)
15. Allen, J.: Photoplethysmography and its application in clinical physiological measurement. Physiol. Meas. 28(3), R1–R39 (2007)
16. Rogatkin, D.A., Lapitan, D.G.: Device for non-invasive measurement of blood microcirculation flow. Pat. of the Russian Federation No 2636880 (2017)
17. Ivlev, S.V., Tarasov, A.A.: Device for measuring the level of oxygenation and pulse rate. Pat. of the Russian Federation No 2294141 (2005)
18. Michael, F., et al.: Oximeter sensor with digital memory recording sensor data. Pat. of the Russian Federation No 20030195402 (2006)
19. Demidov, A.V., Papshev, D.V., Krivonogov, L.Y.: Principles of construction, structure and features of the ECG and blood pressure monitoring system. In: Proceedings of 2020 Moscow Workshop on Electronic and Networking Technologies (MWENT), pp. 1–5. IEEE, Moscow, Russia (2020). https://doi.org/10.1109/MWENT47943.2020.9067390
20. Korobov, A.M., Korobov, V.A., Lesnaya, T.A.: Korobov's phototherapy devices of the Barva series. IPP "Contrast", Kharkov, Ukraine (2010)
21. Murashkina, T.I., Presnyakov, O.V.: Optical displacement sensor. Pat. of the Russian Federation No 2044264 (1992)
22. Graevsky, O.S., Serebryakov, D.I., Murashkina, T.I.: Fiber-optical liquid level measurement system. In: Proceedings of the International Symposium "Reliability and Quality", vol. 1, pp. 394–395. Penza State University Publishing House, Penza (2009)
23. Sola, J., Bertschi, M., Krauss, J.: Measuring pressure: introducing oBPM, the optical revolution for blood pressure monitoring. IEEE Pulse 9(5), 31–33 (2018). https://doi.org/10.1109/MPUL.2018.2856960. PMID: 30273141
24. Kachuee, M., Kiani, M.M., Mohammadzade, H., Shabany, M.: Cuffless blood pressure estimation algorithms for continuous health-care monitoring. IEEE Trans. Biomed. Eng. 64(4), 859–869 (2017). https://doi.org/10.1109/TBME.2016.2580904
25. Yakushenkov, Y.: Theory and Calculation of Optoelectronic Devices. Soviet Radio, Moscow (1980)
A Multi-criteria Method for the Synthesis of Regional and Interregional Tourism Routes

Leyla Gamidullaeva(B) and Alexey Finogeev

Penza State University, Penza, Russia
[email protected]
Abstract. The tourism industry has undergone dramatic changes since various forms of information and communication technologies began to penetrate society, industry, and markets. Currently, the development of promising methods and tools for constructing tourism routes that are optimal in terms of financial and time costs is an urgent scientific and practical issue. In this article, the problem of synthesizing the spectrum of possible optimal tourism products is solved using graph routing synthesis methods together with multi-criteria optimization and decision-making. The optimization criteria are selected primarily depending on the mode of tourism transportation. The authors have developed a methodology for clustering tourist profiles (avatars) - cyber-physical systems that specify the key parameters (preferences), selected routes, and the actual scheme of movement along a tourist route. To solve the problem of clustering digital tourist avatars, the authors have implemented a combined approach based on fuzzy logic methods and a neural network. The obtained results contribute to a promising research area related to the design of an intersectoral digital tourism ecosystem based on a public-private partnership, taking into account digitalization and integration of particular information systems of federal departments, regional authorities, and public services.

Keywords: Digitalization · Internal Tourism Product · Multi-Criteria Method · Digital Avatars · Management Optimization · Tourism Route · Fuzzy Clustering Algorithm
1 Introduction

Currently, the issues of managing the tourism industry in the Russian Federation, determining priority areas for its long-term development, and identifying ongoing challenges and ways to solve them are becoming increasingly relevant. In the modern theory and practice of tourism industry management, the issues of developing tourism routes using the latest information technologies and systems require deeper study for the following reasons.

Firstly, in the Russian Federation, active work is being carried out both at the state level and at the level of individual regions aimed at developing and forming an assortment of international, interregional and regional tourism products, including national and branded ones. Besides, projects are being
developed to promote tourism products and territories. Thus, one of the priority areas of the Strategy for the Development of Tourism in the Russian Federation for the period up to 2035 is the all-round development of domestic and inbound tourism by creating conditions for the formation and promotion of a quality tourism product that is competitive on the domestic and global markets, taking into account the cultural, natural and ethnic diversity of the regions of Russia [1]. In general, the quality of the developed tourism products determines the demand for travel in a particular tourist area and its image in the tourism market. The Tourism and Hospitality Industry National Project also provides for the creation of high-quality tourism products [2].

Secondly, in order to form a range of competitive tourism products, it is vital to create algorithms for the development of innovative tourism products at the macro-, meso- and micro-levels, taking into account the roles and interests of all participants in the tourism industry (tour organizers and providers of tourism services).

Thirdly, the ever-changing conditions for the functioning of the tourism industry affect the transformation of consumer preferences and complicate the process of managing tourism products. The current geopolitical situation and the COVID-19 pandemic that followed have led to an increase in demand for domestic tourism products among Russian tourists. To maintain this demand, it is important to offer products that are competitive in the global market in order to meet the needs of consumers and form their satisfaction and loyalty.

Finally, there is an active introduction of digital technologies into the operation of tourism industry enterprises, as well as a digital transformation of the customer experience. The use of digital technologies, including Industry 4.0 technologies (blockchain, Internet of Things, artificial intelligence, robotization of processes and large data array processing, virtual and augmented reality, etc.), can simplify the process of managing the tourism product life cycle and consider the interests of all participants. One of the directions of the Strategy for the Development of Tourism in the Russian Federation for the period up to 2035 is the introduction of digital technologies [1].

The above conditions determine the relevance and practical significance of this study for the development of territorial tourism. Particular attention was paid to the Strategy for the Development of Tourism in the Russian Federation for the period up to 2035 and the Tourism and Hospitality Industry National Project, which consider the process of creating a competitive tourism product as a tool for tourism development [1, 2]. The authors highlighted the strategies, state programs and roadmaps for tourism development in different regions of the Russian Federation that consider the key aspects of and different approaches to the development of regional tourism products: the Strategy for the Development of Domestic and Inbound Tourism in the Sverdlovsk Region for the period up to 2030 [3]; the Roadmap for Development - Heritage Conservation and Tourism: Promoting Sustainable Growth along the Silk Roads Heritage Corridors [4]; the Tourism Development Strategy in the Penza Region for the period up to 2035 [5]; and the State Program for Tourism Development in Primorsky Krai for 2020–2027 [6].
Besides, the works of Russian and foreign authors devoted to the concept of a tourism product and to the creation of tourism product models were analyzed from the perspectives of tour organizers, tourist territories and consumers. Some authors define a tourism product as a
service or a set of services (at least two) that are interconnected and differ in functional features [7, pp. 486–493]. Such a definition correlates with the one given in the Federal Law on the Fundamentals of Tourism Activities in the Russian Federation [8]. It is relevant when considering the activity of tour organizers in the market as a result of travel agency and tour operator business, and it emphasizes that the essence of developing a tourism product is to combine various tourism services. Other researchers consider a tourism product as a complex and abstract product consisting of various tangible and intangible elements, such as goods, works and services, intangible benefits, information, intellectual property, etc., necessary to meet the needs of tourists during the trip [9–12]. This definition is relevant in the context of territorial tourism development. In this case, a tourism product is understood as a detailed tourism route, which is formed by an intelligent system in an interactive mode based on data mining, a traveler preference matrix, data for the synthesis of new tourism products, and data on the recently formed products of other tourists. In this article, the authors have focused on the development of a method for constructing regional and interregional tourism routes.
2 Theoretical Background

The problem of optimizing vehicle routes in order to reduce energy consumption and environmental pollution, considered in [13], is an example of applying industrial information integration methods to transport problems. A multi-criteria optimization model for a set of routes using the elitist non-dominated sorting genetic algorithm is proposed to minimize the cost and time expenses of vehicle movement and to reduce carbon emissions; the final route is selected in the decision-making process according to the gray relational analysis method and information entropy estimation. A two-phase hybrid algorithm for solving the vehicle routing problem in order to optimize logistics and planning is proposed in [14]. A multi-objective model for optimizing movement between multiple points so as to minimize the total number of routes and travel costs and to reduce the number of long routes is suggested in [15]. A multi-purpose optimal model for vehicle routes with minimum total cost, carbon emissions and accident-risk probability, taking into account environmental factors, is presented in [16].
3 Methods

3.1 A Multi-criteria Method for the Synthesis of Tourism Routes

A synthesized detailed tourism product incorporates the start/end time and place of the trip; a mapped route track indicating numerous key points (attractions, places of stops, parking, overnight stays, meals, excursions, etc.) with geospatial tags; time lags between the key points of the movement track; cost parameters of movement (transport tickets, fuel) and of visits to key points (entry, excursion service); and tourist information (personal, identification, financial, etc.). An algorithm for developing an individual tourism product is as follows:

1. Development of a tourist preference questionnaire-based survey required for the development of a tourist offer.
2. System proposal of optimal alternatives for the implementation of a tourist route based on the traveling salesman problem (images of potential tourist routes are mapped).
3. Formation of a blockchain containing information about personal tourist preferences and the chosen routes. Linking the created blockchain to the tourist's phone number for consistent formation of the digital avatar.
4. Construction of a digital avatar for a tourist profile - a cyber-physical system that reflects its key parameters (preferences), selected routes, and the actual scheme of movement along a tourist route.
5. Clustering digital avatars. Each cluster brings together tourists with similar preferences, combining them based on averaged key parameters. This allows the system to offer recommendations for synchronizing tourist routes with those of digital avatars with which the user has previously established communication (calls, messages, etc.).
6. Clustering digital avatars for each type of tourism takes place in two stages: 1) clustering based on the preferences of other digital avatars of tourists; 2) clustering according to the accumulated individual tourist experience for subsequent individualization of the tourism product offer.
7. Building a multilayer fuzzy neural network of two types: 1) a neural network trained on the basis of clustering digital avatars of tourists; 2) a neural network trained on the basis of the tourist's digital avatar, optimally adapting to the individual tourist preferences and past experience in implementing tourist routes with this recommender system.

The problem of synthesizing the spectrum of possible optimal tourism products is solved using graph routing synthesis methods together with multi-criteria optimization and decision-making. The optimization criteria are selected primarily depending on the mode of tourism transportation (by transport, on foot, etc.). Any traveler faces the need to build the fastest or cheapest route, to visit a certain number of attractions within a definite period of time, to have the shortest distance between the given points, etc. As can be seen, the conditions of the problem are broad, and it is difficult to find a single solution; with numerous options, finding the best one according to some criterion is a hard and resource-intensive computation.

The traveling salesman problem is one way to pose the route construction: the initial conditions are written in the form of a matrix whose rows correspond to the points of departure and whose columns correspond to the places of arrival. The distance between two points in the matrix (an edge) corresponds to a vector with values determined by the following parameters: time, distance, and cost. It is necessary to use an asymmetric variant of the traveling salesman problem, with varied edge lengths, and a closed variant, finding the shortest path through all vertices with a return to the starting point. Since travel within cities is complicated (intersecting streets, house numbering, a variety of transportation, etc.), there is a great number of options, which requires extensive calculations. The simplest solution to this problem is the brute-force method, when all possible route options are considered and, subsequently, an optimal one is chosen; a minimal sketch is given below. It is also feasible to use random enumeration, dynamic programming, genetic algorithms, neural networks, etc.
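The following Python sketch illustrates the brute-force variant for the closed, asymmetric problem; the function name and the folding of the time/distance/cost criteria into a single scalar edge weight are illustrative assumptions, not part of the authors' system.

```python
from itertools import permutations

def best_closed_route(cost):
    """Brute-force closed asymmetric TSP: enumerate every tour that
    starts and ends at vertex 0 and keep the one with the lowest total.

    cost[i][j] is the (possibly asymmetric) edge weight from i to j,
    e.g. a scalar folded from the time, distance and price criteria.
    """
    n = len(cost)
    best_tour, best_cost = None, float("inf")
    for perm in permutations(range(1, n)):
        tour = (0,) + perm + (0,)
        total = sum(cost[a][b] for a, b in zip(tour, tour[1:]))
        if total < best_cost:
            best_tour, best_cost = tour, total
    return best_tour, best_cost
```

This enumeration grows factorially with the number of points, which is exactly why the text also mentions dynamic programming, genetic algorithms and neural networks for larger instances.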
3.2 A Methodology for Clustering Tourist Profiles (Avatars)

Once the traveler preference matrix is filled, the problem of fuzzification of qualitative and fuzzy variables is solved in order to proceed to a set of digital tourist avatar characteristics. Each avatar is represented as a point in a multidimensional space, which makes it possible to proceed to the clustering of tourists according to their preference matrices. In general, the preferences are set by fuzzy variables or membership functions, because the preferences of an individual tourist (or of several tourists) change depending on the options for a new or updated route. This reduces the problem of grouping digital avatars to a problem of fuzzy clustering. A combined approach based on fuzzy logic and a neural network is implemented for such clustering. To train the neural network, the previously filled preference matrices of digital tourist avatars and the results of tourism clustering are used. The parameters of the membership function of an avatar to the existing clusters are adjusted by the neural network training algorithm, and the conclusion about the membership and the degree of similarity of digital avatars according to their preference matrices is formed using the fuzzy logic apparatus.

The fuzzy clustering method based on the fuzzy c-means algorithm [17, 18] consists of the following steps:

1. The number of avatar clusters M is set (it is further corrected in the learning process), and the degree of fuzziness of the objective function m > 1 is selected.
2. The input values of the quantitative characteristics of the tourist profile (digital avatar) form feature vectors $X_j$ (j = 1,…, N). Each vector defines a point in space that can belong to clusters with centroids $C^{(k)}$ (k = 1,…, M) with a probabilistic membership function $\mu_{X_j}^{(k)}$, where $0 < \mu_{X_j}^{(k)} < 1$ and $\sum_{k=1}^{M} \mu_{X_j}^{(k)} = 1$; the membership acts as the degree of proximity to the centroid and is determined by the distance $D_{X_j}^{(k)}$.
3. At the first step, the points are randomly distributed among the clusters, the distribution of points being determined by the proximity matrix to the centroids in the feature space of avatar characteristics $(x_i,\ldots, x_n)$.
4. The coordinates of the cluster centroids $C_k$ (k = 1,…, M) are determined by calculating the membership-weighted average of the cluster points (the average coordinates are the characteristics of the digital avatar tourist profile of the whole cluster):

$$C_k = \frac{\sum_{j=1}^{N} \mu_{X_j}^{(k)} X_j}{\sum_{j=1}^{N} \mu_{X_j}^{(k)}}. \qquad (1)$$

5. The distances between the points and the cluster centroids are determined:

$$D_{X_j}^{(k)} = \left\| X_j - C^{(k)} \right\|^2. \qquad (2)$$
6. The membership grades of the points in the clusters are recalculated, and the point distribution matrix is updated:

$$\mu_{X_j}^{(k)} = \frac{1}{\sum_{i=1}^{M} \left( D_{X_j}^{(k)} / D_{X_j}^{(i)} \right)^{2/(m-1)}}, \qquad (3)$$

where m > 1 is the clustering fuzziness coefficient.

7. To stop the iterative process, a parameter ε > 0 is set. If the condition $\left|\mu_{X_j}^{(k)}(n) - \mu_{X_j}^{(k)}(n-1)\right| < \varepsilon$ (where n is the iteration number) is not satisfied, then go to Step 5.
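A minimal Python sketch of steps 3–7 follows. Note that it implements formula (1) literally, with plain memberships as weights; the classical fuzzy c-means of Bezdek [18] weights the centroid update by $\mu^m$ instead, so this is a sketch of the variant written above, not of the canonical algorithm.

```python
import numpy as np

def fuzzy_c_means(X, M, m=2.0, eps=1e-4, max_iter=100, seed=0):
    """Fuzzy c-means over avatar feature vectors X of shape (N, n).

    Random initial memberships (step 3), centroids by (1), squared
    distances by (2), membership update by (3); stops when the largest
    membership change drops below eps (step 7).
    """
    rng = np.random.default_rng(seed)
    U = rng.random((M, X.shape[0]))
    U /= U.sum(axis=0)                                   # each point's memberships sum to 1
    C = None
    for _ in range(max_iter):
        C = (U @ X) / U.sum(axis=1, keepdims=True)       # centroids, formula (1)
        D = ((X[None, :, :] - C[:, None, :]) ** 2).sum(axis=2)  # squared distances, (2)
        D = np.maximum(D, 1e-12)                         # guard against division by zero
        U_new = 1.0 / ((D[:, None, :] / D[None, :, :]) ** (2.0 / (m - 1.0))).sum(axis=1)
        if np.abs(U_new - U).max() < eps:
            U = U_new
            break
        U = U_new
    return U, C
```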
4 Results

The fuzzy clustering algorithm makes it possible to determine the probabilistic attribution of an avatar to specific clusters. Due to the possible vagueness of tourist preferences, or their multiplicity, expressed by wishes like "I want to go somewhere to the sea, it doesn't matter where", the attribution can be determined to several clusters at once. The probabilistic avatar attribution determines the degree of similarity with the centroids of specific clusters at a given time and allows the user to make a decision in the future when choosing tourism products related to a cluster. If the proposed products with fuzzy or multiple preferences are not suitable for a tourist, then, at the next step, the latter is offered the products of the nearest cluster most similar to the avatar.

The clustering may result in a minimal degree of similarity of a tourist digital avatar with all existing cluster centroids, e.g. equal to 0.3. In this case, this tourist profile becomes the centroid of a new tourism cluster, for which ready-made tourism products are not suitable, and it is necessary to synthesize new routes and link them to this cluster. The number of clusters is increased by one, and both the clustering algorithm and the redistribution of feature vectors are performed again. Such an operation is performed during the initial filling of the database of tourist preferences and products, when there are no training episodes yet. Here, a new tourism product is formed for each tourist, and the latter immediately becomes a centroid of a cluster. If a new tourist correlates with the existing clusters and centroids by the degree of similarity of preferences, then the procedure for training the clustering algorithm using a fuzzy neural network is implemented.

The network is a five-layer feedforward structure with weight coefficients and activation functions. An adaptive Takagi-Sugeno-Kang (TSK) model [19] was chosen as the basis. The output signal is determined by the aggregation function for M rules and N variables:

$$y(x) = \frac{\sum_{k=1}^{M} w_k \, y_k(x)}{\sum_{k=1}^{M} w_k}, \qquad (4)$$

where $y_k(x) = z_{k0} + \sum_{j=1}^{N} z_{kj} x_j$ is the k-th component of the polynomial approximation, and the weights $w_k$ represent the degree of fulfillment of the rule conditions, $w_k = \mu_A^{(k)}(x_j)$ (see (5)).
The membership (fuzzification) function $\mu_A^{(k)}$ for the variable $x_j$ is represented by the Gaussian function:

$$w_k = \mu_A^{(k)}(x_j) = \prod_{j=1}^{N} \frac{1}{1 + \left( \dfrac{x_j - c_j^{(k)}}{\sigma_j^{(k)}} \right)^{2 b_j^{(k)}}}, \qquad (5)$$
where k is the number of the membership function (k = 1…M) and j is the number of the variable (j = 1…N); $c_j^{(k)}$, $\sigma_j^{(k)}$, $b_j^{(k)}$ are the parameters of the Gaussian function, defining the center, width and shape of the k-th membership function for the j-th variable. The inference rules for the output variables $Y = (y_1, y_2,\ldots, y_M)$ for the set of variables $X = (x_1, x_2,\ldots, x_N)$, taking values $A_j^{(k)}$, form the N × M matrix of membership function values:

$$\begin{aligned}
R_1:\ & \text{if } x_1 \in A_1^{(1)} \text{ and } x_2 \in A_2^{(1)} \text{ and } \ldots \text{ and } x_N \in A_N^{(1)},\ \text{then } y_1(x) = z_{10} + \sum_{j=1}^{N} z_{1j} x_j,\\
& \ldots\\
R_M:\ & \text{if } x_1 \in A_1^{(M)} \text{ and } x_2 \in A_2^{(M)} \text{ and } \ldots \text{ and } x_N \in A_N^{(M)},\ \text{then } y_M(x) = z_{M0} + \sum_{j=1}^{N} z_{Mj} x_j.
\end{aligned} \qquad (6)$$
To reduce the computational complexity within the research framework, we assume that the number of rules corresponds to the number of membership functions, although they may differ. The fuzzy neural network has five layers (Fig. 1). In the first layer, fuzzification is carried out according to formula (5) for each variable $x_i$; the values of the membership function $\mu_A^{(k)}(x_i)$ are determined for each rule:

$$\mu_A^{(k)}(x_i) = \frac{1}{1 + \left( \dfrac{x_i - c_i^{(k)}}{\sigma_i^{(k)}} \right)^{2 b_i^{(k)}}}. \qquad (7)$$
In the second layer, the coefficients $w_k = \mu_A^{(k)}(x)$ are determined by aggregating the values of the variables $x_i$. The $w_k$ parameters are passed to the third layer, where they are multiplied by the $y_k(x)$ values, and also to the fourth layer to calculate the sum of the weights. In the third layer, the values $y_k(x) = z_{k0} + \sum_{j=1}^{N} z_{kj} x_j$ are calculated and multiplied by the weight coefficients $w_k$. The linear parameters $z_{k0}$ and $z_{kj}$ are the coefficients of the rule consequents, with $z_{k0}$ considered as the center of the membership function.
Fig. 1. Fuzzy neural network for digital avatar clustering.
The fourth layer is represented by the neurons $f_1$ and $f_2$ that aggregate the results:

$$f_1 = \sum_{k=1}^{M} w_k \, y_k(x) = \sum_{k=1}^{M} \left( \prod_{j=1}^{N} \mu_A^{(k)}(x_j) \right) c_k, \qquad (8)$$

$$f_2 = \sum_{k=1}^{M} w_k = \sum_{k=1}^{M} \prod_{j=1}^{N} \mu_A^{(k)}(x_j). \qquad (9)$$
The fifth, normalizing layer is represented by a single neuron, where the weights are normalized and the output function is calculated:

$$y(x) = \frac{f_1}{f_2} = \frac{\sum_{k=1}^{M} w_k \, y_k(x)}{\sum_{k=1}^{M} w_k} = \frac{\sum_{k=1}^{M} \left( \prod_{j=1}^{N} \mu_A^{(k)}(x_j) \right) c_k}{\sum_{k=1}^{M} \prod_{j=1}^{N} \mu_A^{(k)}(x_j)}. \qquad (10)$$
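A minimal Python sketch of the five-layer forward pass (formulas (5)–(10)) follows; the function name and array layout are illustrative assumptions.

```python
import numpy as np

def tsk_forward(x, c, sigma, b, z):
    """Forward pass through the five TSK layers.

    x: input vector of shape (N,); c, sigma, b: premise parameters of
    shape (M, N); z: consequent coefficients of shape (M, N + 1),
    with z[:, 0] holding the free terms z_k0.
    """
    mu = 1.0 / (1.0 + (((x - c) / sigma) ** 2) ** b)   # layer 1: formula (7)
    w = mu.prod(axis=1)                                # layer 2: firing strengths, formula (5)
    y_rules = z[:, 0] + z[:, 1:] @ x                   # layer 3: linear consequents y_k(x)
    f1 = (w * y_rules).sum()                           # layer 4: formula (8)
    f2 = w.sum()                                       # layer 4: formula (9)
    return f1 / f2                                     # layer 5: normalized output, formula (10)
```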
The neural network contains two parametric layers, the first and the third, whose parameter values are selected at the training stage. The first-layer parameters $c_j^{(k)}$, $\sigma_j^{(k)}$, $b_j^{(k)}$ are nonlinear, and the third-layer parameters $z_{kj}$ are linear. The training is performed in two steps. At the first step, the third-layer parameters are selected by fixing the nonlinear parameter values and solving the system of linear equations:

$$y(x) = \sum_{k=1}^{M} w_k \left( z_{k0} + \sum_{j=1}^{N} z_{kj} x_j \right). \qquad (11)$$
The output variables are replaced by the reference values $D_P$ (P is the number of training samples). The system of equations can be written in matrix form: $D_P = W Z$. Its solution is found by means of the pseudoinverse matrix $W^{+}$: $Z = W^{+} D_P$. Further, after fixing the values of the linear parameters $z_{kj}$, the vector Y of the actual output variables is calculated, and the error vector $E = Y - D_P$ is determined. At the second step, the errors are propagated back to the first layer, where the components of the gradient of the objective function with respect to the parameters $c_j^{(k)}$, $\sigma_j^{(k)}$, $b_j^{(k)}$ are calculated. Then, the membership function parameters are adjusted by the fast gradient descent method:

$$c_j^{(k)}(n+1) = c_j^{(k)}(n) - \eta \frac{\partial E(n)}{\partial c_j^{(k)}}, \qquad (12)$$

$$\sigma_j^{(k)}(n+1) = \sigma_j^{(k)}(n) - \eta \frac{\partial E(n)}{\partial \sigma_j^{(k)}}, \qquad (13)$$

$$b_j^{(k)}(n+1) = b_j^{(k)}(n) - \eta \frac{\partial E(n)}{\partial b_j^{(k)}}, \qquad (14)$$
where n is the iteration number; η is the learning rate parameter. The nonlinear parameters having been refined, the process of adaptation of linear and nonlinear parameters is started again. The iterative process is repeated until all process parameters are stabilized.
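The two-step hybrid iteration can be sketched in Python as follows; the function name is an illustrative assumption, and the gradient arrays are assumed to be computed elsewhere (e.g. by backpropagating the error $E = Y - D_P$ through the network).

```python
import numpy as np

def hybrid_training_step(W, DP, params, grads, eta):
    """One hybrid-learning iteration for the TSK network.

    Step 1: linear consequents from the least-squares system D_P = W Z
    via the pseudoinverse, Z = W+ D_P. Step 2: gradient updates
    (12)-(14) for the nonlinear premise parameters c, sigma, b.
    """
    Z = np.linalg.pinv(W) @ DP
    updated = {name: params[name] - eta * grads[name] for name in ("c", "sigma", "b")}
    return Z, updated
```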
5 Conclusion

Recently, the use of neural network and deep neural network methods [19–22] has become a key trend. Typically, deep learning algorithms use a sequence of non-linear processing layers to represent complex features [20]. The advantage of using a fuzzy neural network for learning and clustering is a high processing rate and the consideration of the parameters of tourist digital avatars. The approach allows discovering typical avatars, for which ready-made tourism products are the solutions to select or edit. It is also feasible to recognize atypical avatars and offer the design of a new tourism product. Further, a cluster with an atypical avatar and the related synthesized tourism route becomes a typical one and can be offered to other users.

Future research is bound up with promising theoretical and methodological developments and system research in the field of designing a digital tourism cross-sectoral ecosystem based on public-private partnership, taking into account digitalization and the integration of individual information systems of federal departments, regional authorities, and government services.

Acknowledgment. This research was funded by the Grant from the Russian Science Foundation and the Penza region (Russia) (project No. 22-28-20524), https://rscf.ru/en/project/22-28-20524/.
References

1. Strategy for the development of tourism in the Russian Federation until 2035 (approved by the order of the Government of the Russian Federation of September 20, No 2129-r) (2019). https://www.garant.ru/products/ipo/prime/doc/72661648/. Accessed 17 Nov 2022
2. National project "Tourism and hospitality industry" (2021). https://tourism.gov.ru/deyatelnost/natsionalnyy-proekt-turizm-i-industriya-gostepriimstva/. Accessed 18 Nov 2022
3. Strategy for the development of domestic and inbound tourism in the Sverdlovsk region for the period up to 2030 (2014). https://mir.midural.ru/sites/default/files/documents/ppso_strategiya.docx. Accessed 17 Nov 2022
4. Roadmap for Development "Heritage and Tourism: Promoting Sustainable Development in the Heritage Corridors of the Silk Road" (2013). https://unesdoc.unesco.org/ark:/48223/pf0000226408_eng. Accessed 17 Nov 2022
5. Strategy for the development of tourism in the Penza region for the period up to 2035 (2020). https://minkult.pnzreg.ru/tail/turizm-. Accessed 17 Nov 2022
6. The State Program of Primorsky Krai "Tourism Development in Primorsky Krai" for 2020–2027 (2019). https://invest.primorsky.ru/uploads/attachments/10-gp-razvitie-turizma903-pa-primorskogo-kraia.5e3a8cf045471.docx. Accessed 17 Nov 2022
7. Dracheva, E.L., Zabaeva, Yu.V., Ismaev, D.K., et al.: Economics and Organization of Tourism: International Tourism: Textbook; Ed. by I.A. Ryabova, Yu.V. Zabaeva, E.L. Dracheva, 4th edn. KNORUS, Moscow (2010)
8. Federal Law "On the basics of tourism activities in the Russian Federation" (1996). https://base.garant.ru/136248/. Accessed 17 Nov 2022
9. Dzhandzhugazova, E.A.: Tourist and recreational design: a textbook for students. Publishing Center "Academy", Moscow (2014)
10. Kuskov, A.S., Golubeva, A.L.: Tour operating: textbook. FORUM, Moscow (2009)
11. Ilyina, E.N.: Tour operating: organization of activities: textbook. Finance and Statistics, Moscow (2008)
12. Kuskov, A.S., Jaladyan, Yu.A.: Fundamentals of tourism: textbook. KNORUS, Moscow (2010)
13. Guo, K., Hu, S., Zhu, H., Tan, W.: Industrial information integration method to vehicle routing optimization using grey target decision. J. Ind. Inf. Integr. 27, 100336 (2022). https://doi.org/10.1016/j.jii.2022.100336. Accessed 17 Nov 2022
14. Abdirad, M., Krishnan, K., Gupta, D.: A two-stage metaheuristic algorithm for the dynamic vehicle routing problem in industry 4.0 approach. J. Manag. Anal. 8(1), 69–83 (2021)
15. Sánchez-Oro, J., López-Sánchez, A.D., Colmenar, J.M.: A general variable neighborhood search for solving the multi-objective open vehicle routing problem. J. Heuristics 26(3), 423–452 (2017). https://doi.org/10.1007/s10732-017-9363-8
16. Abdullahi, H., Reyes-Rubiano, L., Ouelhadj, D., et al.: Modelling and multi-criteria analysis of the sustainability dimensions for the green vehicle routing problem. Eur. J. Oper. Res. 292(1), 143–154 (2021)
17. Khang, T.D., Vuong, N.D., Tran, M.-K., Fowler, M.: Fuzzy C-means clustering algorithm with multiple fuzzification coefficients. Algorithms 13(7), 158 (2021). https://doi.org/10.3390/a13070158. Accessed 19 Nov 2022
18. Bezdek, J., Ehrlich, R., Full, W.: FCM: the Fuzzy C-Means clustering algorithm. Comput. Geosci. 10, 191–203 (1984). https://doi.org/10.1016/0098-3004(84)90020-7. Accessed 18 Nov 2022
19. Olej, V.: Design of the models of neural networks and the Takagi-Sugeno fuzzy inference system for prediction of the gross domestic product development. WSEAS Trans. Syst. 4(4), 314–319 (2005)
20. Claveria, O., Monte, E., Torra, S.: Tourism demand forecasting with different neural networks models. IREA Working Paper 201321, University of Barcelona, Research Institute of Applied Economics (2013). https://doi.org/10.2139/ssrn.2507362. Accessed 18 Nov 2022
21. Kozlov, D.A.: Neuroagents in hospitality industry and tourism. ITportal 3(11) (2016). http://itportal.ru/science/economy/neyroagentnye-tekhnologii-v-industr/. Accessed 18 Nov 2022
22. Karatzoglou, A., Hidasi, B.: Deep learning for recommender systems. In: Cremonesi, P., Ricci, F., Berkovsky, S., Tuzhilin, A. (eds.) Proceedings of the 11th ACM Conference on Recommender Systems 2017, RecSys, pp. 396–397, Como, Italy (2017)
Influence of Physical and Mechanical Properties of Optical Fibers on the Design Features of a Fiber-Optic Bending-Type Tongue Pressure Sensor on the Palate

T. I. Murashkina1, E. A. Badeeva1, A. N. Kukushkin1, Yurii A. Vasilev2(B), E. Yu. Plotnikova2, A. V. Olenskaya2, and A. V. Arutynov2

1 Penza State University, Penza, Russia
2 Kuban State Medical University, Krasnodar, Russia
[email protected]
Abstract. The article determines the patterns that connect the heterogeneity of the physical and mechanical properties of the "quartz-quartz" optical fibers used in bending-type fiber-optic sensors of tongue pressure on the palate with the strength and metrological characteristics of the sensors under operating conditions. #CSOC1120.

Keywords: Fiber-Optic Sensor · Pressure · Tongue · Palate · Maxillofacial Anomaly · Optical Fiber · Data Acquisition and Conversion Module
1 Introduction

In congenital anomalies of the upper lip and palate, patients have an altered position of the tongue in the mouth [1]. It shifts to the depth of the oral cavity, its back is elevated toward the edges of the cleft palate, and the tip does not participate in the formation of sounds. In the oral cavity, the tongue, being the strongest muscular organ of the oral cavity, exerts pressure on the surrounding tissues. In a patient with a cleft, the tongue presses on the palate with its back, pushing the non-fixed segments apart, which is unfavorable for the surgical stage of rehabilitation, since more soft tissue is needed to close the defect, given the tissue deficit and limited space.

At the initial stage of diagnosing diseases associated with congenital cleft lip and palate and other anomalies of the oral cavity, appropriate medical technical diagnostic tools are needed. Currently, the force (pressure) of the tongue muscles is measured by diagnostic methods realized with electrical measuring instruments [2, 3]. For example, a way of measuring the pressure force of the tongue muscles is known in which two or more force transducers of the electric type are inserted into the patient's oral cavity [4]. In addition, the electromagnetic influence of the measuring instrument on the patient's body is not excluded, which reduces the reliability of the measurement results in the process of diagnosis. To improve the accuracy, reliability and safety of the diagnostic
system, the use of fiber-optic diagnostic tools for maxillofacial pathologies was proposed in [5, 6]. In particular, the bending-type fiber-optic sensor (FOS) of tongue pressure on the palate contains plates 2 and 3, between which there is an optical fiber (OF) 4, connected at one end to the radiation source and at the other end to the radiation receiver (Fig. 1a) [5, 6]. The bottom 8 of the upper plate, made in the form of an inverted cup, deflects upward under the action of the tongue pressure 9. The sensor is placed in the patient's oral cavity so that the bottom 8 of the upper plate 2 is in contact with the patient's palate 12 (Fig. 1b).
Fig. 1. Fiber-optic sensor (FOS) of tongue pressure: (a) a fragment of the developed sensor design; (b) variants of the arrangement of the OF in the bending-type FOS of the tongue pressure on the palate [5].
The simplest scheme of arrangement of the OF 4 between plates 2 and 3 is when the coils of the OF 4 are arranged in the form of the letter "O", whose beginning and end exit into the slot 13 on the side surface of the upper plate 2. However, such an arrangement of the optical fiber may not provide the required sensitivity of optical signal conversion. To increase the sensitivity of the optical signal conversion, several layouts of the optical fibers are possible: a spiral, the letter "B", the number 6, and two mutually perpendicular letters "P" (see Fig. 1b). The particular layout of the optical fiber is determined by the peculiarities of the patient's oral cavity.
The bending-type FOS of tongue pressure on the palate works as follows. The patient's tongue 9 presses forcefully on the lower plate 3, which deflects toward the palate 12. At the same time, the optical fibers 4 are deformed. The light flux passing from the radiation source to the measurement zone changes its intensity under the deformation, and, accordingly, the signal level at the radiation receiver changes. On the one hand, the deformation of the optical fibers is the source of measuring information about the measured pressure; on the other hand, it may cause failure of the whole measuring device due to possible breakage of the optical fibers.

It is known that the impact of loads negatively affects the technical and operational characteristics of a FOS, as it can lead to mechanical destruction of its main and most "unreliable" functional element - the fiber-optic cable (FOC). The main element of a FOC is the optical fiber (OF). The material from which the OF is made - glass - is brittle and unable to withstand torsion, bending and stretching during operation. The main cause of brittleness and of further failures of optical fibers is the presence of microcracks inside and on the surface of the fiber [7, 8]. The occurrence of static fatigue and the reduction of the mechanical strength of optical fibers are affected by temperature changes, mechanical and chemical effects, and normal aging. Mechanical stresses on the fiber result in micro- and macrobends, causing losses of the luminous flux of up to 100 dB/km (Fig. 2).
Fig. 2. Sources of loss in optical fiber.
Microbends are caused by imperfections in the design of the fiber-optic cable and of the OF itself. They can arise from manufacturing defects, if the fiber core deviates from the axis, from mechanical deformations occurring during cable installation, and, during operation of the fiber-optic cable, from external factors. Macrobending implies bending the fiber with a radius greater than 2 mm. As a result of loads acting on the cable, its geometrical parameters change, which changes its physical properties, first of all the refractive indexes of the shell and the core. This changes the boundary conditions between the core and the shell and generates radiative modes, which is equivalent to modulation of the optical radiation in intensity [9, 10].
The reliability of optical fibers is largely determined by the strength parameters of the fiber (mechanical reliability) and by the accuracy of the transmitted light signal (metrological reliability). One of the significant advantages of fiber-optic cables is their potential durability: glass products can last for hundreds of years. However, to ensure the long-term operation of a FOS, it is necessary to eliminate the appearance of mechanical stresses, since the service life of a FOS is determined by the growth of microscopic cracks. The sources of the appearance and growth of such cracks are always present on the glass surface, but they do not always develop; a formed crack will grow if the fiber is exposed to a tensile force. The dependence of the lifetime of an optical fiber on its tension, expressed in units of elongation, is shown in Fig. 3.
Fig. 3. Dependence of the lifetime of the OF on its elongation.
The graph is based on typical data of the Japanese company FUJIKURA for a standard single-mode telecommunication optical cable [11]. Although the evaluation did not take into account a number of factors, for example the influence of moisture, which also reduces the service life, the graph shows that even an insignificant fiber tension shortens it. Therefore, the reliability of a FOS can be assessed only with information about the fiber tension force. In this regard, there are several ranges of arising tensions: up to 0.3% - safe; more than 0.6% - unacceptable; and intermediate ranges that require additional analysis. It can be seen that the service life of 15 years set by the consumer of the FOS is ensured when the elongation is less than 0.37%, which corresponds to an allowable local mechanical tensile load of about 3 N. Since optical fibers experience macrobends, it is necessary to determine the conditions under which the optical fibers will not lose their metrological and operational reliability after a given period of operation in medical institutions.
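A simple check of a macrobend against the 0.37% elongation bound quoted above can be sketched in Python; note that the surface-strain estimate d/(2R) is a standard textbook approximation introduced here for illustration, not a formula from this paper.

```python
def macrobend_strain_check(fiber_diameter_mm, bend_radius_mm, limit_percent=0.37):
    """Estimate the outer-surface tensile strain of a fiber bent to
    radius R as d / (2R) and compare it with the elongation limit."""
    strain_percent = 100.0 * fiber_diameter_mm / (2.0 * bend_radius_mm)
    return strain_percent, strain_percent <= limit_percent

# Example: a 125 um fiber bent to R = 20 mm gives ~0.31 % strain, within the limit.
print(macrobend_strain_check(0.125, 20.0))
```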
Identifying the regularities that link the heterogeneity of the physical and mechanical properties of optical fibers with their strength and metrological characteristics under operational conditions will make it possible to improve the reliability of measuring tongue pressure on the palate. The purpose of the work is to determine the conditions under which the bending-type palate pressure FOS will work reliably for at least 15 years. To this end, it is necessary to determine the minimum macrobending radii of the optical fibers below which the metrological and operational reliability of a bending-type tongue pressure palate FOS degrades.
2 Materials and Methods

The methods of geometric optics, logarithmic differentiation, and relations for the photoelasticity effect were used in the studies.
3 Research Results

The power of the light flux P_out at the output of the OF is determined by the expression

P_out = τ_L τ_R P_in,   (1)

where τ_L, τ_R are the coefficients of light flux loss along the length of the fiber and due to the bending of the fiber with radius R, respectively, and P_in is the power of the light flux at the input to the fiber. In accordance with [8], τ_L = e^(-χL) and τ_R = e^(-α_R R), where χ is the extinction coefficient of the core material of the OF and α_R is the attenuation coefficient due to the bending of the OF. For bends with large radius R >> r_c and a fiber with a stepped refractive index profile [8], α_R = NA²/r_c, where r_c is the core radius of the OF and NA is the numerical aperture of the OF.

Then P_out = e^(-(χL + NA²R/r_c)) P_in. Using the method of logarithmic differentiation, let us determine the change ΔP of the luminous flux power caused by changing the bending radius of the fiber by ΔR, taking into account that when the fiber is bent its length changes by ΔL and its numerical aperture changes by ΔNA:

dP_out/P_out = dP_in/P_in − χ ΔL − (2NA·R/r_c) ΔNA − (NA²/r_c) ΔR.   (2)
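Written out, the logarithmic differentiation behind (2) is as follows (a compact restatement, assuming the exponential loss model reconstructed above):

\[
\ln P_{\text{out}} = \ln P_{\text{in}} - \chi L - \frac{NA^{2} R}{r_{c}},
\qquad
\frac{dP_{\text{out}}}{P_{\text{out}}}
 = \frac{dP_{\text{in}}}{P_{\text{in}}}
 - \chi\, dL
 - \frac{2\, NA\, R}{r_{c}}\, dNA
 - \frac{NA^{2}}{r_{c}}\, dR .
\]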
The bending external force F acts on the fiber with a three-layer structure (Fig. 4). When the OF is deformed, the photoelasticity effect is manifested, which consists in the change of the refractive indices n_i by the values Δn_i. In the polar coordinate system, the change in the refractive index of the i-th layer, Δn_i, contributed by the photoelasticity effect is expressed by the well-known formula [9]

Δn_i = −(n_i²/2) [(p_11 + p_12) ε_ri + p_12 ε_zi],   (3)
Fig. 4. Action of bending external forces F on optical fiber with three-layer structure.
where p_11, p_12 are the photoelasticity (Pockels) coefficients of the material subjected to deformation; ε_ri is the relative deformation in the cross section of the i-th layer; ε_zi is the relative deformation of the i-th layer along the fiber axis. According to the generalized Hooke's law for the spatially stressed state of the OF, the relative deformations in the cross section, ε_x and ε_y, and along the Z axis, ε_z, are

ε_x = (1/E) [σ_x − μ(σ_y + σ_z)],
ε_y = (1/E) [σ_y − μ(σ_z + σ_x)],   (4)
ε_z = (1/E) [σ_z − μ(σ_x + σ_y)],

where E is the modulus of elasticity and μ is the Poisson's ratio. The values of the mechanical stresses σ_r, σ_φ, σ_z, expressed in the polar coordinate system, are related to the values of the OF deformations ε_r, ε_φ, ε_z by the following relations:

| σ_ri |   | λ_i + 2ν_i   λ_i          λ_i        |   | ε_ri |
| σ_φi | = | λ_i          λ_i + 2ν_i   λ_i        | · | ε_φi |   (5)
| σ_zi |   | λ_i          λ_i          λ_i + 2ν_i |   | ε_zi |

where i is the layer index (i = 1 for the core, i = 2 for the shell, i = 3 for the outer coating); λ_i, ν_i are the Lamé constants, related to the elastic moduli E_i and Poisson's coefficients μ_i by the expressions

λ_i = μ_i E_i / ((1 + μ_i)(1 − 2μ_i)),   ν_i = E_i / (2(1 + μ_i)).   (6)

Inside the coaxial cylinders, which are the OF layers, the stress distribution is as follows:

σ_ri = A_i + B_i / r_i²,
σ_φi = A_i − B_i / r_i²,   (7)
σ_zi = C_i,
where A_i, B_i, C_i are constants whose values are to be determined, and r_i is the radius of the cross section of the i-th layer. The stress inside the core of the OF is finite and, therefore, is defined at B_1 = 0. In addition, for the OF having three layers, the constants A, B, C are determined from the following boundary conditions:

σ_ri|_{i=1, r=a} = σ_ri|_{i=2, r=a},
u_ri|_{i=1, r=a} = u_ri|_{i=2, r=a},
σ_ri|_{i=2, r=b} = σ_ri|_{i=3, r=b},
u_ri|_{i=2, r=b} = u_ri|_{i=3, r=b},   (8)
σ_ri|_{i=3, r=c} = −F,
∫_0^a ∫_0^{2π} σ_zi|_{i=1} r dr dφ + ∫_a^b ∫_0^{2π} σ_zi|_{i=2} r dr dφ + ∫_b^c ∫_0^{2π} σ_zi|_{i=3} r dr dφ = 0,
u_zi|_{i=1} = u_zi|_{i=2} = u_zi|_{i=3},

where u_ri = ∫ ε_ri dr is the radial displacement of the i-th layer. The first four equations in system (8) mean that the radial stresses and displacements at the layer boundaries are continuous. The fifth equation shows that the force F acts uniformly on the OF. The sixth equation shows that both ends are free. The seventh equation shows that the deformation at the ends is not taken into account, because it is negligibly small compared to the surface deformation (the body is stretched along the x-axis and its dimensions along the x-axis are much larger than the transverse dimensions).

As an example, we performed a mathematical simulation to determine the maximum possible bending of the OF in the FOS of tongue pressure on the palate [5] at which the power of the light flux changes by no more than 20%. The change ΔL is neglected because it is negligibly small. The sensor uses a "quartz-quartz" OF ZLWF200/240HT//360/newH with the parameters NA = 0.22, core radius r_c = 100 μm, shell radius 120 μm, outer coating radius r_ov = 180 μm, Young's modulus of the fiber material E = 0.56·10^5 MPa, and μ = 0.25. In the modeling it was taken into account that, from the point of view of mechanical strength, the minimum bending radius should satisfy the condition R_min ≥ (10...20)r_c [9], that is, in our case R_min ≥ 2 mm. At the same time, the bending stresses arising in the OF should be 2-3 times less than the acceptable value [σ_acc], whence the calculated minimum value of the bending radius R_min equals 5.3 mm. To increase reliability, we accept a safety factor of 1.5. Finally, in the developed FOSs of tongue pressure on the palate, the minimum bending radius R_min must be 8 mm for any variant of the location of the OF in the FOS.
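The final choice of R_min reduces to simple arithmetic over the conditions above. The sketch below only reproduces the numbers stated in the text; the 5.3 mm metrological limit itself comes from the authors' full simulation, which is not reproduced here:

r_c = 100e-6                    # core radius, m
r_strength = 20 * r_c           # strength condition (10...20)*r_c, upper bound 2 mm
r_metrological = 5.3e-3         # radius at which the flux change reaches 20%, m
safety_factor = 1.5

r_min = max(r_strength, r_metrological) * safety_factor
print(round(r_min * 1e3, 2))    # 7.95 mm, rounded up to R_min = 8 mm in the design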
4 Conclusion

The calculated value of the minimum bending radius of the OF, obtained by taking into account the physical and mechanical properties of the OF, makes it possible to design a sensing element of the bending-type FOS of tongue pressure on the palate that ensures its reliable functioning in the conditions of a medical institution for a period of 15 years.
Acknowledgement. The study is carried out within the framework of a scientific project, with the financial support of the Russian Science Foundation No. 22-15-20069 and the Administration of the Krasnodar Territory, Kuban Science Foundation No. ONG-211-4/22.
References

1. Vinogradov, D.: Setting the Voice. Prosveshcheniye, Moscow (1997)
2. Urakova, E.V.: Morphofunctional assessment of language and its clinical significance. Kazan, Russia (1998)
3. Yamashev, I.G., Matveev, R.S.: Language: clinico-functional methods of diagnostics of pathological conditions. Chuvashia Institution of Higher Learning, Cheboksary, Russia (2012)
4. Patent RU 2623309: Method for measuring the force of tongue muscle pressure on segments of adjacent anatomical formations of the oral cavity. Published 23 June 2017. http://www.findpatent.ru/patent/262/2623309.html. Accessed 20 Nov 2022
5. Badeeva, E.A., et al.: Patent for invention of Russian Federation 2741274: Fiber-optic sensor of tongue muscles force - tongue pressure on the palate and method of its assembly, 22 January 2021
6. Badeeva, E.A., Murashkina, T.I., Vasiliev, Yu.A., Khasanshina, N.A., Slavkin, I.E.: Fiber-optic sensors of bending-type tongue pressure on the palate. In: Proceedings of the XXXIII All-Russian Scientific-Technical Conference of Students, Young Scientists and Specialists "Biotechnical, Medical and Environmental Systems, Measuring Devices and Robotic Systems" (Biomedsystems-2020). Ryazan, Russia (2020)
7. GOST 52266-2006: Cable products. Optical cables. General specifications. Moscow, Russia (2006)
8. Sterling, D.J.: Technical Guide to Fiber Optics. Lori Publishing House, Moscow (1998)
9. Okosi, T., Okamoto, K., Otsu, M., Nishihara, H., Kyuma, K., Hatate, K.: Fiber Optic Sensors. Energoatomizdat, Leningrad Branch, Leningrad, Russia (1991)
10. Shumkova, D.B., Levchenko, A.E.: Special Fiber Optics. Publishing House of Perm National Research Polytechnic University, Perm (2011)
11. Laferriere, J., Lietaert, G., Taws, R., Wolszczak, S.: Reference Guide to Fiber Optic Testing. JDS Uniphase Corporation (2011)
A Detailed Study on the 8 Queens Problem Based on Algorithmic Approaches Implemented in PASCAL Programming Language

Serafeim A. Triantafyllou(B)

Greek Ministry of Education and Religious Affairs, Athens, Greece
[email protected]
Abstract. The eight queens problem was a topic of concern for many great mathematicians, such as Gauss, Nauck, Guenther and Glaisher, among others. It is a problem that explores how to place eight chess queens on an 8 × 8 chessboard under the constraint that no two queens can attack one another. Therefore, a solution to the problem is achieved when no two queens occupy the same column, row, or diagonal. The eight queens problem was first posed in the mid-19th century and is nowadays used in numerous computer programming techniques such as secure data hiding. This study tries to contribute to a better understanding of the 8-Queens problem by describing and implementing various algorithmic approaches in the PASCAL programming language, with the main purpose of finding optimal solutions to the problem. In the last section, a discussion is conducted on further investigation of the A* algorithm to offer better solutions to the Queens problem in different dimensions (n × n). Keywords: 8-Queens Problem · Algorithms · PASCAL
1 Introduction

The eight queens problem was first proposed by Max Bezzel in the Berliner Schachzeitung (1848), and the first full solution was given by Franz Nauck in the Leipziger Illustrierte Zeitung (1850) [6]. The eight queens problem was a topic of concern for many great mathematicians, such as Gauss, Nauck, Guenther and Glaisher, among others. It is a problem that explores how to place eight chess queens on an 8 × 8 chessboard under the constraint that no two queens can attack one another. Therefore, a solution is achieved when no two queens occupy the same column, row, or diagonal. The eight queens problem was first posed in the mid-19th century and is nowadays used in numerous computer programming techniques such as secure data hiding [1, 3, 7–9]. The problem is proved to have 92 solutions, but many of them are equivalent, because solutions can be obtained from one another by reflection or by 90-degree rotations [4, 5]. Therefore, from the 92 solutions we come down to 12 discrete solutions (see Fig. 1). A first solution to the problem is presented in Fig. 2.
Fig. 1. The 12 discrete solutions to the 8-Queens problem
Fig. 2. A first solution of the 8 Queens Problem
2 Algorithmic Approaches to Solve the Queens Problem

The decision to implement the algorithmic approaches in the Pascal programming language was taken after careful consideration, because Pascal is an imperative and procedural programming language intended to encourage good programming practices through structured programming and data structuring.

2.1 Backtracking Algorithm

Our first step in examining the Backtracking Algorithm [13] is to solve the 4 × 4 Queens problem. It is observable that for n = 1 the problem has a trivial solution, and there is no solution for n = 2 and n = 3. We decided to place the 4 queens, named q1, q2, q3 and q4, onto a 4 × 4 chessboard so that no two queens attack each other. First, we placed queen q1 in the first acceptable position (1,1). A possible next position for q2 is (2,3), but in that case there is no position left for placing queen q3. Therefore, we backtrack one step and place queen q2 at position (2,4), the next best option. Then the position to place queen q3 is (3,2). However, continuing from here we see that this strategy leads to a deadlock, and no position is left to place queen q4 safely. Therefore, we have to backtrack until queen q1 is placed at position (1,2); then queen q2 goes to position (2,4), queen q3 to position (3,1) and queen q4 to position (4,3). Finally, we arrive at the solution (2, 4, 1, 3). This is a first possible solution to the 4-queens problem. In the same way, the presented method is repeated for all the other available solutions. The implicit tree for the 4-queens problem for the solution (2, 4, 1, 3) is as follows (see Fig. 3):
Fig. 3. The implicit tree for 4-queen problem for a solution (2, 4, 1, 3)
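To make the walk-through above concrete, here is a small cross-check in Python (the paper itself uses PASCAL; this sketch is not from the source). It performs the same sequential backtracking and its first solution for n = 4 is indeed (2, 4, 1, 3):

def first_solution(n):
    X = [0] * (n + 1)  # X[k] = column of the queen in row k (1-indexed; X[0] unused)

    def acceptable(k):
        # queen k must not share a column or a diagonal with queens 1..k-1
        return all(X[i] != X[k] and abs(X[i] - X[k]) != k - i
                   for i in range(1, k))

    k = 1
    while k > 0:
        X[k] += 1
        while X[k] <= n and not acceptable(k):
            X[k] += 1              # try the next column in row k
        if X[k] <= n:
            if k == n:
                return X[1:]       # all n queens placed
            k += 1
            X[k] = 0               # start placing the next queen
        else:
            k -= 1                 # dead end: backtrack
    return None

print(first_solution(4))  # [2, 4, 1, 3]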
Next, we try to solve the 8-Queens problem with the backtracking method. According to this method, after placing a queen at a specific position, the algorithm checks whether there is an acceptable position for the next queen. If an acceptable position is found, the algorithm proceeds with the placement of the next queen. Otherwise, the algorithm backtracks to the last accepted position where a queen was placed. This procedure is implemented by the following code:
procedure Queens3(n: integer);
var
  i, k: integer;
begin
  X[1] := 0;
  k := 1;
  while k > 0 do
  begin
    X[k] := X[k] + 1;
    while (X[k] <= n) and not Place(k) do
      X[k] := X[k] + 1;   { advance to the next acceptable column }
    { the listing breaks off here in the source; the rest is the standard
      backtracking continuation described in the text }
    if X[k] <= n then
      if k = n then
        PrintSolution     { all n queens are placed }
      else
      begin
        k := k + 1;
        X[k] := 0         { go on to the next queen }
      end
    else
      k := k - 1          { backtrack }
  end
end;

Then ambi is used elsewhere in the program.

4.4 The Function for Calculating the Ion-Free Pseudodistance CalcIONO_L1L2()

Finally, let us describe the function for calculating the ion-free pseudodistance. This function uses the entire algorithm described in Sect. 2.

Method description. The input parameters of the function are:
SattChan &chan - the data structure for the specific satellite;
TSatType SatType - the type of the navigation system;
int nSat0 - the number of the satellite;
bool haveL1, bool haveL2 - flags of the presence of L1 and L2 measurements.

Let us describe the algorithm of this function. If both L1 and L2 measurements are present, the ion-free pseudodistance is found as

chan.C_Iono_Free = (chan.C2 - gamma * chan.C) / (1 - gamma);

i.e. by the formula that takes both L1 and L2 into account. If L1 is present but L2 is not, and enough time has passed for the accumulation of a valid correction value, then

chan.C_Iono_Free = chan.C - AccIonL1[SatType].korIono[nSat0];

If L2 is present but L1 is not, then

chan.C_Iono_Free = chan.C2 - AccIonL2[SatType].korIono[nSat0];

where C_Iono_Free is the resulting pseudorange value corrected for the influence of the ionosphere; korIono are the arrays of correction values for each satellite for L1 and L2, respectively; AccIonL1 (AccIonL2) is the L1 (L2) array including korIono for all navigation systems; gamma is the coefficient of two-frequency measurements (3). In addition, the function analizeAndCorrectSleeps() participates in the general, two-frequency part of the algorithm. It analyzes phase cycle losses (slips). This algorithm uses the tracked pseudoranges and phases of both frequencies, finds their linear combinations, and decides on the presence of a slip by analyzing the ratio of these combinations. The phase values are then corrected by the number of found slips.
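For illustration only, the two-frequency branch can be sketched in Python (this is not part of the source program). Taking gamma as the squared ratio of the carrier frequencies is an assumption based on the usual definition, since formula (3) itself lies outside this fragment; the GPS L1/L2 frequencies are used as example values:

# Sketch of the ion-free pseudorange combination described above.
def iono_free(c_l1, c_l2, f1=1575.42e6, f2=1227.60e6):
    gamma = (f1 / f2) ** 2          # assumed form of coefficient (3)
    # mirrors: chan.C_Iono_Free = (chan.C2 - gamma * chan.C) / (1 - gamma)
    return (c_l2 - gamma * c_l1) / (1 - gamma)

# Example: two raw pseudoranges (metres) differing by an ionospheric delay.
print(iono_free(22_000_005.0, 22_000_008.0))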
5 Conclusion

This paper presented a method for calculating ionospheric corrections for GNSS radio navigation measurements using pre-calculated corrections obtained by two-frequency methods, and its application to single-frequency measurements. The practical implementation of the method and the structure of the program were given. Experimental data were presented and analyzed. The method has shown good accuracy and can be applied in situations when measurements on one of the frequencies L1 or L2 of the GPS or Galileo navigation satellite systems are missing.
Information Management of Real Estate

Roman V. Volkov(B)

Federal State Autonomous Educational Institution of Higher Education Russian University of Transport, Obraztsova Street 9, Building 9, GSP-4, Moscow 127994, Russia
[email protected]
Abstract. The article substantiates the introduction of a new concept, "information management of real estate". The aim of the work is to study information management of real estate as a specific type of management. Within the framework of organizational management of real estate, information models and information technology are also used. Under certain conditions, such management becomes informational, as an alternative to organizational management. The particularity of real estate as an object of management is that it is connected with land plots. Information management of real estate is therefore connected with information management of land plots. What distinguishes real estate information management from other types of management is the use of geoinformatics and geodata methods. The levels of information management of real estate are described, as are the information models used in real estate management. The necessity of using geoinformatics and geodata in real estate management is shown. Information management of real estate applies information units and information constructs, and it needs an information field and an information space at different scales. The connection between the information field and the information space is shown. #CSOC1120. Keywords: Information · Management · Real Estate Management · Information Management · Real Estate Objects · Information Space · Information Units · Information Constructions
1 Introduction

Real estate as an object of management has peculiarities [1]. The main one is that real estate objects are associated with land plots. By this criterion, management can be complete, if the system "real estate object and land plot" is considered as the management object, or incomplete, if the real estate object is considered detached from the land plot on which it is located. Management can be organizational [2] or informational. Information models and information technologies are used for information management of real estate. Under certain conditions, such management becomes informational, as an alternative to organizational management. Informational management of real estate is connected with informational management of land. The difference between informational management of real estate and other types of management is the use of geoinformatics and geodata methods.
Land is the fourth major factor of production, along with natural resources, labor, and capital. The availability of the land surface is a prerequisite for life. Production is located on a particular piece of land. Fishing, hunting, farming and trade, as well as obtaining energy and clean water, all involve the use of land and a particular area. The growth of the Earth's population has always necessitated the imposition of various land-use regulations. It has been necessary to determine who owns what piece of land and where its boundaries lie. This is one of the motives for the management of land and the real estate on it. Real estate management is understood as a management activity in the field of real estate, which includes setting goals, planning to achieve them, the organization, execution and control of the solution of production tasks, and the subsequent analysis of the goals and of the chosen path to achieve them. This management activity is multifaceted and complex. In the course of real estate management, the goals and objectives of owners, users, service companies and other persons connected with real estate are coordinated. Real estate management in Russia is an urgent task and applies different management technologies. One of these technologies is information management, which is used in many areas: in the management of industrial organizations [3] and higher educational institutions [4], in the transport sector [5], and elsewhere. That is why there is a reason to introduce the concept of "information management of real estate".
2 Some Particularities of Real Estate Management

In a market economy, real estate management is subject to the law of supply and demand. The market determines the level of rental rates and the prices of the services offered. Demand is limited by the consumers' ability to pay, and supply by the level of development of the national economy. In an economic boom, the demand for both residential and industrial space rises sharply. Since new construction can increase the existing building stock by a maximum of 3% per year, prices and rents begin to rise. In turn, as the economy recedes, real estate prices decrease dramatically. In addition to market conditions, an important factor affecting real estate management is legislation and the activities of official bodies in the real estate field. In the Soviet Union the economy was based on planned production and distribution. Residential, educational, and work spaces were determined by normative values of space. The prices of consumer goods did not correspond to the costs of their production; for example, residents were hardly charged for energy or capital costs. Economic laws did not govern the real estate sector, and this old tradition is still a nuisance in Russia, although market pricing of consumer goods has improved the efficiency of the real estate sector to some extent. Real estate management is carried out by specialists who use information and geoinformation systems for support. Real estate management specialists include users who make decisions concerning real estate, their representatives who have decision-making authority, decision-making property owners and their representatives, and managers of businesses that offer real estate services. In real estate management, modern technologies, organizational models, and the legislative norms regulating it must be applied. The correspondence of the real estate fund with
the main activity taking place in it is clarified and confirmed by the POE (Post Occupancy Evaluation) methodology. In doing so, the interaction between the property manager and the managers responsible for the core business becomes particularly important. Property managers must constantly monitor the models and forms of work applied by industry leaders. As a strategic activity, property managers should keep track of other areas of the economy and anticipate both their future developments and the resulting changes in their own work - development in the real estate sector in different countries, changes in the quantity, quality and location of real estate, changes in ownership and rental rates, changes in prices for related services, etc. The economic management of real estate assumes a good organization of accounting and estimating. Accurate information is needed about the costs caused by the object as a whole, as well as by its individual rooms, industries, and types of products. Thus, the need for major repairs will not be unexpected, if the state of the object is constantly monitored, and preventive maintenance and current repairs, the need for which is reflected in the financial planning, among other things. This task is solved with the use of special information systems and software. The legal management of real estate is based on regulations and legislation. On the basis of these documents, contractual relationships are established, the terms of which need to be legally developed and monitored. It is difficult in conditions of changing normative documentation, so in this part the support with the help of reference information systems is also necessary. Thus, support for real estate management requires the use of two support systems. The first support system is formed by information systems and technologies. The second support system is formed by geographic information systems and geographic information technologies. When these systems become dominant, information management emerges.
3 The Content of Information Management

Information management emerged on the basis of the integration of information technologies, management technologies [5] and information models of management objects. Information management uses the phenomenon of the information field [6, 7] as a content factor and source of information. Information management uses the information space [8] as a coordinating factor. The fundamental difference between information management and organizational management is the use of models of information units [9]. The fundamental difference between real estate information management and other types of management is the use of geoinformatics and geodata methods [10]. According to this feature, information management of real estate belongs to the field of spatial economics [11, 12]. The information field integrates and unifies information models and simplifies their use and management. The prototype of the information field is the digital space. Digital assets are models of the information field. The objects of the information field are also metamodels [13] as a tool for generalizing experience and creating theories. In the aspect of the development of management schools, information management refers to situational or contingency management. The situation in which the object of management is located can be considered as an information situation [14]. The aggregate of the
connected real estate object and land plot forms an information situation. The model of the information situation is used in and characterizes information management. Information management technologies are used in different fields, but there are distinctive principles and models that characterize information management. The first principle is the principle of the information field and information space. The information field is the content of the information space. The information space [8] is a shell, more often a coordinate space, in which the information field is embedded. An example is near-Earth space, which contains the electric, magnetic and gravitational fields of the Earth [15]. There are also discrete fields in near-Earth space: three rings of space debris and a radiation belt. Near-Earth space cannot be equated with the near-Earth field. A field, as a rule, has a form and is described by a function or several functions. A coordinate space is defined by a system of coordinates and has no form. The elements of the information field and of information management are information units. The objects of the information field are information models, information situations, and information constructions. Information relations exist between objects and models. The use of information relations is a distinctive feature of information management. In real space there are spatial relations, which are transformed into information relations during information management. A distinguishing feature between organizational and informational management is the informational basis used in management. Organizational management uses information. In information management, not information as such, but information models and information resources are used.
4 Particularities of Information Management of Real Estate

The implementation of real estate information management is connected with the study of the information situation in which the real estate object and land plot are located. Such management is associated with the construction of scenarios for the development of the information situation. The scenario is the information construction of the dynamics of the information management situation. The totality of states of the real estate object is called a trajectory of states. Management consists in setting a target trajectory of states and correcting the actual trajectory of states if it differs from the target trajectory. There are different models and components in information management of real estate. These components are complementary [16] and reflect complementary aspects of management. Information technologies reflect the technological aspect of management. Information systems reflect the system and technological aspects of management. Information concepts reflect the conceptual, theoretical and methodological aspects of management. Methods set the basis for the application of technology. The paradigm of information management of real estate can be given by analogy with the technology of spatial monitoring [17]: "observation; analysis; prediction; implementation". Information management contains groups of technologies that include: collecting information and maintaining databases; communications and facilities management; personnel management, accumulation of experience and formation of knowledge. In information management of real estate one can speak of stereotypical management. The basis of this management is the patterns of informational spatial situations
and the corresponding stereotypical solutions for these situations. An important element of information management of real estate is the allocation of key management indicators. The method of key indicators can be considered a management method based on a model whose parameters are only the key indicators. For informational management of real estate, such indicators should be informationally determined indicators; other indicators are obtained by calculation or modeling. Informationally determined indicators are indicators whose values are explicitly determined on the basis of measurements or information collection. In the information management of real estate there is also a type of oriented management, aimed at the target management of the real estate object. Such management is problem-oriented. Informational management of real estate makes it possible to integrate different models and methods into a single environment. This is its advantage. This advantage creates the property of integrativity of the real estate object model. The property of integrativity, along with the property of emergence, is a system property of modern complex management systems and is their qualitative difference from simple management systems. When integrating data technologies or systems, such management is called "integrated information management".
5 Levels of Information Management of Real Estate

Every management is multi-level. The most striking example is hierarchical management [18]. Levels of management exist even for network management; this defines network-centric management. When analyzing levels of management, it is necessary to distinguish between paradigmatic and syntagmatic relations. Paradigmatic relations exist between the levels. For information management of real estate there are conceptual, strategic, tactical and operational levels. The conceptual level of information management includes the use of concepts, the use of conceptual models, and the theoretical foundations of the organization of management. This level is characterized by a cognitive visual approach, which is expressed in the use of terms such as "view of the management situation", "point of view", "snapshot of the situation", "image of the management object", etc. This stage of information management is infological and precedes the construction of management models. It serves as the basis for qualitative and comparative analysis and the construction of qualitative models. At this stage a conceptual description of objects, relations and connections is formed. This level can be called the level of conceptual construction. It is common to many real estate management systems. The strategic level of management includes the formation of the strategic goal and objectives of real estate management. The further implementation of management and the trajectory of the states of the management object depend on the content of this strategy. At this level, the management objectives are formulated in generalized or abstract indicators. The tactical level of management includes refinement of the strategic objective and tasks of real estate management, taking into account the peculiarities of the economic situation and the spatial situation in which the real estate object is located. The strategic task
is converted into a tactical one. At this level, simulation modeling and forecasting are carried out. All abstract parameters in models are replaced by actual parameters. Simulation modeling and forecasting uses time series. Therefore, at this level, time parameters become important. At this level, the limits of acceptable variation of states of real estate objects are estimated. At this level, the stability of the state of the real estate object is determined. The operational level of information management of real estate includes the transition to operational models. At this level, the dynamics of the economic situation and the spatial situation in which the real estate object is located are taken into account. At this level, the real parameters of information management flows are evaluated and determined. At this level, the information load on the management system is evaluated. In general, in the information management of real estate different levels of management are interconnected and form a complex management system that implements complex tasks of information management of real estate.
6 Information Models in Real Estate Management

Information models in real estate management are based on information units. These information units can be spatial, such as parts of a building, or economic, such as parts of a transaction or payments. The formation of information models in real estate management is determined by the objectives of management. The main objectives of information management of real estate in the aspect of the application of models include:

• application of real estate information models to improve business operations;
• application of information models to gain a competitive advantage;
• application of computer processing of information to improve the efficiency of real estate operation;
• application of models that reduce the cost of operation;
• application of models that increase profits.

The main model components of information management of real estate are: data models as a result of observation of the real estate object; the model of the economic situation in which the real estate object is located; the model of the environmental situation in which the real estate object is located; the spatial geoinformation model of the real estate object; models of management goals and objectives; spatial relations of the real estate object with the environment; information on market relations, conjuncture, etc. Information resources in real estate management systems serve as the basis for effective operation, for the sustainable state of the real estate object, and for increasing its life cycle. Only information management and information models make it possible to increase the life cycle of the object of management. Organizations with real estate are influenced by competitors and the external environment. To counteract this, they need different types of information resources and information. This forms their information needs. An information need [19] is a need to obtain information or information services in order to sustain activity and development. Information needs are dynamic; new ones arise on the basis of satisfied ones. There is a law according to which needs grow faster than the opportunities to satisfy them.
Information models as a resource become the basis of managing the real estate object. At the same time, they independently become an object of property and sale. Information models are a means of accumulating experience, obtaining new knowledge and achieving competitive advantage. Information models become the basis of innovative activity and the basis for increasing the efficiency of innovation [20]. The multidimensional value of information models determines their importance for management and production. For models there is a concept of levels and nesting. The integral information model is used not only in the information field, but also in the field of economic relations. For models there is a concept of links and relations, as well as of the correspondence between the model and the modeled object. Relationships in an information model are dynamic. This allows one, by setting some parameters, to change others. Information models, unlike technological and technical models, are easily modified. This ensures the adaptability of their use and their complementarity when new models are introduced into management or operation. The information model depicts a management object or a group of objects. This ensures its adaptation to corporate real estate management. The information model is a broader concept than formalized information: it can be used in the presence of both formalized and non-formalized information.
7 Conclusion

Information management of real estate transfers organizational management into the information field. It creates the possibility of using a variety of information models and of integrating these models. Information management of real estate precedes and serves as a basis for intelligent management [21]. A peculiarity of information management of real estate is the use of cognitive space and cognitive models of management support. The application of information methods in real estate management takes many forms. Information management of real estate allows the integration of resources and technologies. It increases the speed of analysis and management. It enables the optimization of resources, including by discrete optimization methods, and allows the use of meta-heuristic methods to solve complex problems.
References

1. Grabovyy, P.G., Azhimov, T.Z., Volkov, R.V.: The economic feasibility analysis of innovative megaprojects for the benefit of construction enterprises: the case of the republic of Tatarstan. Real Estate Econ. Manag. 4, 13–22 (2021)
2. Donaldson, L., Qiu, J., Luo, B.N.: For rigour in organizational management theory research. J. Manage. Stud. 50(1), 153–172 (2013)
3. Kornakov, A.N.: Information management in an industrial organization. Perspect. Sci. Educ. 2, 205–209 (2014)
4. Tsvetkov, V.Y.: System management. The State Counsellor 1(25), 57–64 (2019)
5. Polyakov, A.A., Tsvetkov, V.Y.: Information Technologies in Management. Moscow State University, Moscow, Russia (2007)
6. Tsvetkov, V.Y.: Information field. Life Sci. J. 11(5), 551–554 (2014)
7. Ozhereleva, T.A.: Regard to the concept of information space, information field, information environment and semantic environment. Int. J. Appl. Fundam. Res. 10, 21–24 (2014)
8. Tsvetkov, V.Y.: Information space, information field, information environment. Europ. Res. 8-1(80), 1416–1422 (2014)
9. Tsvetkov, V.Y.: Information objects and information units. Europ. J. Nat. History 2, 99 (2009)
10. Zuo, R.: Geodata science-based mineral prospectivity mapping: a review. Nat. Resour. Res. 29(6), 3415–3424 (2020)
11. Davis, D.R., Dingel, J.I.: A spatial knowledge economy. Am. Econ. Rev. 109(1), 153–170 (2019)
12. Tsvetkov, V.Y.: Spatial relations economy. Europ. J. Econ. Stud. 1(3), 57–60 (2013)
13. Tsvetkov, V.Y., et al.: Metamodelling in the information field. Amazonia Investiga 9(25), 395–402 (2020)
14. Ozhereleva, T.A.: Information situation as a management tool. Slavic Forum 4(14), 176–181 (2016)
15. Barmin, I.V., Kulagin, V.P., Savinykh, V.P., Tsvetkov, V.Y.: Near-Earth space as an object of global monitoring. Solar Syst. Res. 48(7), 531–535 (2014). https://doi.org/10.1134/S003809461407003X
16. Bogoutdinov, B.B., Tsvetkov, V.Ya.: Application of the model of complementary resources in investment activity. Mordovia Univ. Bull. 24(4), 103–116 (2014)
17. Chui, K.T., Vasant, P., Liu, R.W.: Smart city is a safe city: information and communication technology-enhanced urban space monitoring and surveillance systems: the promise and limitations. In: Smart Cities: Issues and Challenges, pp. 111–124. Elsevier, Netherlands (2019)
18. Dos Santos, P.H., et al.: The analytic hierarchy process supporting decision making for sustainable development: an overview of applications. J. Clean. Prod. 212, 119–138 (2019)
19. Rozenberg, I.N.: Information revolution and information needs. Dist. Virtual Learn. 4, 5–12 (2017)
20. Tsvetkov, V.Ya.: Conceptual model of the innovative projects efficiency estimation. Europ. J. Econ. Stud. 1(1), 45–50 (2012)
21. Safiullin, R., et al.: Methodical approaches for creation of intelligent management information systems by means of energy resources of technical facilities. In: E3S Web of Conferences, vol. 140, p. 10008 (2019)
A Complex Method for Recognizing Car Numbers with Preliminary Hashing

Sergei Ivanov(B), Igor Anantchenko, Tatiana Zudilova, Nikita Osipov, and Irina Osetrova

ITMO University, 49 Kronverksky Pr., Saint Petersburg, Russian Federation
[email protected]
Abstract. In this study, the authors consider neural network-based classification methods for the practical task of recognizing car number photos from a camera. Various types of neural networks are considered: a "fully connected" neural network, a "convolutional" and a "recurrent" neural network. A new method is proposed, which consists in pre-processing the photos using hashing. The hash calculation for the car number photo is based on polynomial hashing of the matrix. The results of the experiment, carried out on three data sets of 200, 500 and 800 photos of car numbers from the camera, show equally high recognition accuracy and a higher learning and calculation speed for the proposed method with preprocessing. Collision resolution during hashing is carried out by a fast method of open addressing. Calculations and verification were carried out using a mathematical package of symbolic calculations with a functional programming language. Keywords: Classification Method · Recognizing Photo · Pre-Processing Photo · Polynomial Hashing
1 Introduction

In modern science and industrial applications, various types of neural networks are widely used to solve problems such as classification, prediction, recognition, computer vision, and natural language processing. The paper considers the practical problem of recognizing car number photos from a camera. To solve the problem, various types of neural networks are considered: a "fully connected" neural network, a "convolutional" and a "recurrent" neural network. Let us consider in more detail the types of neural networks and their application in published scientific studies. The most significant network in the field of deep learning is the Convolutional Neural Network (CNN). Convolutional neural networks are based on convolution and subsampling operations (pooling, max pooling). Convolution is the process of applying a filter to each region of the input photo. The filter is a matrix for transforming the input data in blocks that are smaller in size than the input data. The subsampling operation is the process of compressing a photo by aggregating pixel-block values. The process of reducing the size of a photo is called downsampling. The result is a photo smaller than the original input photo.
Recurrent neural networks (RNN), based on recurrent layers, are used to model sequential data. The RNN layer uses a loop to iterate through an ordered sequence, keeping information about the steps. For a fully connected neural network (FCNN), photo processing requires a large number of neurons in the input layer, proportional to the number of pixels in the photo. In an FCNN, each neuron of one layer is connected to each neuron of the next and of the previous layer. In [1], the theoretical foundations of nonlinear machine learning and its practical application are presented, algorithms for deep neural networks are given, and problems of machine learning are discussed. Methods for using qualitative data in neural networks are presented in [2]: new methods of working with categorical data are described, including embedding, coding and representation methods for mapping qualitative values to numerical input values of a neural network, together with recommendations for preparing such data; the paper presents the most effective methods for working with categorical data in neural networks. The work [3] presents an overview of convolutional neural networks and advanced CNN models, considers one-dimensional, two-dimensional and multidimensional convolution, and provides rules for selecting functions and hyperparameters. The authors of [4] proposed a method for imposing filter orthogonality on a convolutional layer, based on the doubly block-Toeplitz matrix representation of the convolution kernel, for photo classification problems. The presented orthogonal convolution is optimal in terms of computing resources and additional parameters. In [5], graph neural networks (GNN) are considered, their application to various problems is analyzed, and a comparison with dynamic programming methods is carried out. The work [6] is devoted to the problems of photo recognition: various types of neural architecture are investigated, and the concept of a stochastic network and models of randomly connected graphs for networks are defined. In [7], the authors solve the problem of photo processing using a convolutional neural network. To improve the convergence speed and recognition accuracy of a convolutional neural network, a new algorithm is presented with the addition of a recurrent neural network. Based on the experiment, the optimal parameters of the convolutional neural network are determined, taking into account various features of the photo. An advantage of convolutional neural networks is the ability to extract features from data in a hierarchical manner. In [8], a new type of convolutional layer is proposed: quantum convolution. Quantum convolution applies random quantum transformations that produce meaningful features for classification purposes. The method is empirically evaluated in terms of accuracy and learning speed compared to classical CNNs. The work of the authors [9] is devoted to the problems of photo classification and recognition. The article provides an overview of self-supervised learning in computer vision tasks using unlabeled data. Schemes of effective learning and the processes of supervised and self-supervised learning are given. The visual features of photos are studied using self-supervision approaches. The work [10] is devoted to the applied problem of photo recognition of forest fires. Traditional photo processing technology and
convolutional neural network are used together. The work [11] is devoted to the problems of computer vision: with the help of convolutional neural networks, the problem of analyzing foam photos to estimate the concentrations of mineral substances is solved. In [12], an ANN-based photo recognition and search method is presented, which gives an accurate photo index with a minimum MSE and a maximum PSNR. The authors of [13] devoted their work to the analysis of deep neural networks and of modern neural network architectures presented in the iNNvestigate library. In modern applications, various types of hash functions are widely used in tasks related to information security. Hash functions are designed to create "fingerprints" of various objects. The well-known hash function Adler-32 was developed by Mark Adler as a modification of the Fletcher checksum. The cyclic redundancy check (CRC) is widely used to compute checksums when checking data integrity. A well-known cryptographic hash function developed by Ronald Rivest is MD2 (Message Digest 2). On its basis, the MD4 (Message Digest 4) cryptographic hash function, used in the Microsoft CHAP authentication protocol, was developed. To check the integrity of information and store password hashes, the MD5 (Message Digest 5) hash function was developed. The RIPEMD-160 (RACE Integrity Primitives Evaluation Message Digest) cryptographic hash function, developed by Hans Dobbertin and Antoon Bosselaers, generates a 160-bit hash value. The Secure Hash Algorithm 1 (SHA-1) cryptographic hashing algorithm generates a 160-bit (20-byte) hash value. The Secure Hash Algorithm 2 (SHA-2) family of cryptographic hashing algorithms includes the SHA-224, SHA-256, SHA-384, SHA-512, SHA-512/256, and SHA-512/224 algorithms. The SHA-2 hash functions are based on the Merkle-Damgård construction. The paper [14] solves the problem of processing streaming photos using content-based photo retrieval (CBIR) methods and proposes a method for creating a hash code of a given length using a modified U-net model, with photo preprocessing by randomly removing 30% of the pixels. The problem of remote sensing photo search (RSIR) is discussed in [15]: a new deep hashing method, Asymmetric Hash Code Learning (AHCL), for RSIR is proposed; the method includes a deep neural network to extract semantic features and hashing to map features to binary codes. The work [16] is devoted to the use of hashing for approximate nearest neighbor (ANN) search; it proposes a deep hashing model for remote sensing photo retrieval (RSIR) in the framework of generative adversarial networks (GANs), and the experiments performed show improved performance. The work [17] is devoted to solving the problem of real-time storage, indexing and search of medical photos: a convolutional neural network (CNN) is applied to automatically extract features from a photo and to generate efficient hash codes with few parameters from these features; the paper presents an efficient feature reduction algorithm whose performance has been tested on the NEMA MRI and NEMA CT datasets. In [18], the problem of fine-grained photo search is solved by ranking photos that reflect the concept of interest based on query refinement. The paper presents a hashing network (A2-Net) for generating hash codes given attributes; A2-Net sets feature decorrelation constraints on attribute vectors to improve their representational capability. The conducted experiments on data sets have shown the effectiveness of the network.
To improve query speed and search accuracy, the work [19] presents an adaptive and asymmetric residual hashing (AASH) algorithm based
on residual hashing, integrated learning, and asymmetric pairwise loss. Experiments on datasets have shown AASH to perform better than most symmetric and asymmetric deep hashing algorithms; convergence, time and efficiency estimates were made for the algorithm. In [20], the authors improve the Neural ODE by introducing a special numerical method scheme. The newly implemented scheme of the numerical method has high accuracy and optimal computational complexity. Experiments have shown high classification accuracy when using neural differential equations with the embedded scheme of the numerical method. In the scientific papers considered above, the presented methods are designed to solve a certain class of problems. The aim of our work is to develop a method based on photo preprocessing using hashing in order to increase the learning and computation speed of a neural network. The second section of the article discusses the preprocessing approach using polynomial matrix hashing; the third section presents the results of a computational experiment.
2 Pre-processing Using Hashing

This section discusses the pre-processing method using hashing. For the computational experiment, we took a data set of car number photos. Figure 1 shows an example of a car number photo for recognition. To improve the quality of a car number photo from the camera, a noise reduction and clarity filter is applied.
Fig. 1. Photo of a car number for recognition.
First, we calculate the hash code for a normalized 300×80 pixel black-and-white photo of the car number from the camera. When calculating the hash code, we apply polynomial hashing of the matrix (Fig. 2).
Fig. 2. Matrix for car number photo.
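As an illustration of this normalization step, a minimal Pillow-based sketch is given below; the library choice and the filter parameters are assumptions, since the paper itself uses a symbolic mathematics package:

from PIL import Image, ImageFilter

def normalize_plate(path):
    img = Image.open(path).convert("L")            # black-and-white photo
    img = img.filter(ImageFilter.MedianFilter(3))  # simple noise reduction
    img = img.filter(ImageFilter.SHARPEN)          # clarity (sharpening) filter
    return img.resize((300, 80))                   # normalized 300x80 matrix

matrix = list(normalize_plate("plate.jpg").getdata())  # grayscale pixel values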
The hash is calculated based on the formula

h(M_{x1≤i≤x2, y1≤j≤y2}) = p1^(-x1) p2^(-y1) · Σ_{i=x1}^{x2} Σ_{j=y1}^{y2} M_{ij} p1^{i} p2^{j}.   (1)
This choice is determined by the optimal complexity and speed of hash calculation. However, it is possible to apply other hash code types.
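A minimal sketch of such a two-dimensional polynomial hash is shown below. This is an illustration, not the authors' implementation: the bases and the modulus are arbitrary choices, and a Python dict, which itself resolves collisions by open addressing, stands in for the hash table:

def poly_hash_2d(M, p1=31, p2=37, mod=2**61 - 1):
    """Polynomial hash of a matrix M (list of rows of pixel values)."""
    h = 0
    for i, row in enumerate(M):
        for j, value in enumerate(row):
            # each element is weighted by p1^i * p2^j, as in formula (1)
            h = (h + value * pow(p1, i, mod) * pow(p2, j, mod)) % mod
    return h

# Example: hash a tiny 2x3 "photo" matrix and index it in a table.
table = {}
m = [[12, 200, 34], [0, 255, 17]]
table[poly_hash_2d(m)] = "plate-001"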
Table 1. Hash codes for different types.
Hash code type                       | Hash code
Adler 32-bit cyclic redundancy check | 2649845502
32-bit cyclic redundancy check       | 3867207170
128-bit MD2 code                     | 335800711421620226100446798012868619925
128-bit MD4 code                     | 20033466820680637881478315351402981589
128-bit MD5 code                     | 314614760400110206663780785411205893027
160-bit RIPEMD code                  | 96400197295433305360621461646603167229542726900
RIPEMD-160 following SHA-256         | 70437779089821957479120849640675272665643369758
160-bit SHA-1 code                   | 412635802415280937906478869730433324001634409676
256-bit SHA code                     | 54030279613164842341900879279864972748818170639771602294264399282186089101293
double SHA-256 code                  | 76699931055268423067178466526966514972240867696775710255019985269314709255916
384-bit SHA code                     | 33239667085429870847281771624845717616914292643410709599994648960379509892366283752192890271040476425113526354654143
512-bit SHA code                     | 13275090534189053455045954605278339245702032833897143819256234410144018289582985384988899186961062299559291393299965130651834082258947725503487712508916985
code based on polynomial hashing     | 3512333525000953187
code based on 3512333525000953187 polynomial hashing
Table 1 shows the calculated hash of the code for various types for a black-and-white photo of 300x80 pixels in size of the car number from the camera. Collision resolution during hashing is carried out by a fast method of open addressing. In the next step, we use Fully-connected neural networks (FCNN). A fully connected neural network consists of a set of fully connected layers. FCNNs are structure-independent deep learning networks and are called “deep” networks. FCNNs have a generic architecture with “universal approximators” capable of learning any function. Fully-connected neural networks can be stacked directly. For a Fully-connected neural networks, when processing a 300 × 80 pixel photo, 48,000 neurons are needed in the input layer. Denote the entrance to the fully connected layer x, for the output of a fully connected layer we denote y, which is calculated by the formula: yi = σ (w1 x1 + w2 x2 + ... + wm xm ) where σ - nonlinear function, wi - trainable parameters in FCNN. The total output is calculated by the formula: σ w1,1 x1 + w1,2 x1 + ... + w1,m xm y= σ wn,1 x1 + wn,2 x1 + ... + wn,m xm
(2)
(3)
In fully connected layers, operations are performed on vector structures and the equation for one neuron is very similar to the dot product of two vectors. For efficiency
purposes, for a layer of neurons, y is computed as a matrix multiplication:

y = σ(wx)   (4)
where the nonlinearity σ is applied elementwise to the product. The practical application of FCNN to the recognition task on the considered data sets showed high performance and acceptable recognition accuracy.
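A minimal NumPy sketch of formula (4), one fully connected layer computed as a matrix multiplication with an elementwise nonlinearity; the hidden-layer size and the choice of tanh are illustrative assumptions:

```python
import numpy as np

def fc_layer(W, x, sigma=np.tanh):
    """One fully connected layer, y = sigma(W x) (formula (4));
    sigma is applied elementwise."""
    return sigma(W @ x)

rng = np.random.default_rng(0)
x = rng.random(48_000)                       # flattened input photo, as in the paper
W = rng.normal(size=(128, 48_000)) * 0.01    # 128 hidden neurons (assumed size)
y = fc_layer(W, x)
print(y.shape)  # (128,)
```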
3 Results of the Computational Experiment

This section describes the computational experiment, which consists of training and recognition on three data sets of 200, 500, and 800 photos of car numbers from the camera. After obtaining the hash codes of the photos, we applied fully connected neural networks. Figure 3 shows the training time for the data sets of photos of car numbers from the camera.
Fig. 3. The training time.
Figure 3 shows the shortest training time for the presented method with preprocessing and the longest time for the recurrent neural network. The presented hashing method is 8% faster in training time than the fully connected network, 40% faster than the convolutional network, and 80% faster than the recurrent neural network. Figure 4 shows the evaluation time for the photos of car numbers from the camera: the shortest evaluation time is for the presented method with preprocessing and the longest is for the recurrent neural network. The presented method with hashing is 50% faster in computation time than the fully connected network, 60% faster than the convolutional network, and 800% faster than the recurrent neural network. Figure 5 shows the recognition accuracy for photos of car numbers from the camera: for the presented method with preprocessing, accuracy is above 92%.
Fig. 4. The evaluation time.
Fig. 5. The accuracy.
Fig. 6. The memory used during training.
Figure 6 shows the amount of memory used when training the neural networks: the smallest amount is for the presented method with preprocessing and FCNN, and the largest is for the recurrent neural network. The presented method with hashing requires two times less memory during training than the recurrent neural network. Figure 7 shows the amount of memory used to store the classifier.
Fig. 7. The memory needed to store the classifier.
Figure 7 shows the smallest memory size for storing the classifier for the presented method with preprocessing and the largest size for the recurrent neural network. The presented method with hashing requires several times less memory to store the classifier than the recurrent neural network. Calculations and verification were carried out in a symbolic mathematics package using a functional programming language. The results of the computational experiment using the mathematical software package show the applicability of the method with preliminary hashing for recognizing a car number plate from a camera with high accuracy.
4 Conclusion

In this study, the authors consider the problem of recognizing photos of car numbers from a camera. Various types of neural networks are considered: fully connected, convolutional, and recurrent. A new method is proposed that consists of pre-processing photos using hashing. The hash calculation for the car number photo is based on polynomial hashing of the matrix. The results of the experiment carried out on three data sets of 200, 500, and 800 photos of car numbers from the camera show equally high recognition accuracy and a higher learning and calculation speed for the proposed method with preprocessing. The presented method with hashing is 50% faster in computation time than the fully connected network, 60% faster than the convolutional network, and 800% faster than the recurrent neural network; in training time it is 8% faster than the fully connected network, 40% faster than the convolutional network, and 80% faster than the recurrent neural network. The experiment showed the smallest amount of memory used during training for the presented method with preprocessing and the largest for the recurrent neural network. Calculations and verification were carried out in a symbolic mathematics package using a functional programming language. The results of the computational experiment show the applicability of the method with preliminary hashing for recognizing a car number plate from a camera with high accuracy.
References 1. Samek, W., Montavon, G., Lapuschkin, S., Anders, C.J., Müller, K.R.: Explaining deep neural networks and beyond: A review of methods and applications. Proc. IEEE 109(3), 247–278 (2021) 2. Hancock, J.T., Khoshgoftaar, T.M.: Survey on categorical data for neural networks. J. Big Data 7(1), 1–41 (2020). https://doi.org/10.1186/s40537-020-00305-w 3. Li, Z., Liu, F., Yang, W., Peng, S., Zhou, J.: A survey of convolutional neural networks: analysis, applications, and prospects. IEEE Trans. Neural Netw. Learn. Syst. (2021) 4. Wang, J., Chen, Y., Chakraborty, R., Yu, S.X.: Orthogonal convolutional neural networks. In: Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pp. 11505–11515 (2020) 5. Xu, K., Li, J., Zhang, M., Du, S.S., Kawarabayashi, K.I., Jegelka, S.: What can neural networks reason about? arXiv preprint arXiv:1905.13211 (2019) 6. Xie, S., Kirillov, A., Girshick, R., He, K.: Exploring randomly wired neural networks for image recognition. In: Proceedings of the IEEE/CVF International Conference on Computer Vision, pp. 1284–1293 (2019) 7. Tian, Y.: Artificial intelligence image recognition method based on convolutional neural network algorithm. IEEE Access 8, 125731–125744 (2020) 8. Henderson, M., Shakya, S., Pradhan, S., Cook, T.: Quanvolutional neural networks: powering image recognition with quantum circuits. Quantum Mach. Intell. 2(1), 1–9 (2020). https://doi.org/10.1007/s42484-020-00012-y 9. Ohri, K., Kumar, M.: Review on self-supervised image recognition using deep neural networks. Knowl.-Based Syst. 224, 107090 (2021) 10. Wang, Y., Dang, L., Ren, J.: Forest fire image recognition based on convolutional neural network. J. Algorit. Comput. Technol. 13, 1748302619887689 (2019) 11. Fu, Y., Aldrich, C.: Flotation froth image recognition with convolutional neural networks. Miner. Eng. 132, 183–190 (2019) 12. Al-Azzeh, J., Alqadi, Z., Abuzalata, M.: Performance analysis of artificial neural networks used for color image recognition and retrieving. Int. J. Comput. Sci. Mob. Comput. 8(2), 20–33 (2019) 13. Alber, M., et al.: Investigate neural networks! J. Mach. Learn. Res. 20(93), 1–8 (2019) 14. Öztürk, Ş.: Image inpainting based compact hash code learning using modified U-Net. In: 2020 4th International Symposium on Multidisciplinary Studies and Innovative Technologies (ISMSIT), pp. 1–5. IEEE (2020) 15. Song, W., Gao, Z., Dian, R., Ghamisi, P., Zhang, Y., Benediktsson, J.A.: Asymmetric hash code learning for remote sensing image retrieval. IEEE Trans. Geosci. Remote Sens. 60, 1–14 (2022) 16. Liu, C., Ma, J., Tang, X., Zhang, X., Jiao, L.: Adversarial hash-code learning for remote sensing image retrieval. In: IGARSS 2019–2019 IEEE International Geoscience Remote Sensing Symposium, pp. 4324–4327. IEEE (2019) 17. Öztürk, Ş.: Class-driven content-based medical image retrieval using hash codes of deep features. Biomed. Signal Process. Control 68, 102601 (2021) 18. Wei, X.S., Shen, Y., Sun, X., Ye, H.J., Yang, J.: A2-net: learning attribute-aware hash codes for large-scale fine-grained image retrieval. Adv. Neural. Inf. Process. Syst. 34, 5720–5730 (2021) 19. Cheng, S., Wang, L., Du, A.: An adaptive and asymmetric residual hash for fast image retrieval. IEEE Access 7, 78942–78953 (2019) 20. Televnoy, A., Ivanov, S., Zudilova, T., Voitiuk, T.: Neural ODE machine learning method with embedded numerical method. In: 28th Conference of Open Innovations Association (FRUCT), pp. 451–457 (2021)
Data Processing by Genetic and Pattern Search Algorithms

B. Z. Belashev(B)

Karelian Research Center of RAS Institute of Geology, Pushkinskaya st. 11, Petrozavodsk 185910, Russia
[email protected]
Abstract. The goal of this paper is to reveal the probable structure of blurred distributions and to estimate their parameters. Genetic and pattern search algorithms, together with the maximum entropy and least squares methods, were applied to the decomposition of complex distributions into components. The algorithms were tested on models and were used to determine the structures of complex IR spectra of minerals and X-ray diffraction patterns of amorphous materials, as well as for the selection of geophysical anomalies. #CSOC1120.

Keywords: Genetic algorithm · Patternsearch algorithm · Maximum entropy method · Least squares method · Convergence · Distribution · Spectra · Geophysical anomalies
1 Introduction

The subject of this paper is data processing based on variational methods. In contrast to differential or difference equations, which describe the time course of processes, variational methods are used to determine the established states consistent with extreme values of functionals. Such states are analyzed to interpret the results of the modeling of complex systems. Extreme models are known in physics, mechanics, thermodynamics, biology, economics, and management theory [1–3]. In this paper, the maximum entropy and least squares methods have been used to decompose experimental distributions into components. Global optimization algorithms have been employed to calculate the number of such components and to estimate their parameters. An extreme problem is commonly formulated by introducing an auxiliary functional in addition to a target function that contains equations describing the initial distribution and inequalities imposed on the variables. The constraints enter the auxiliary functional additively and are weighted by Lagrange multipliers. Equating to zero the first variations of the auxiliary functional with respect to the sought function gives equations that express the sought function in terms of the Lagrange multipliers. The Lagrange multipliers are estimated by substituting this function into the statement of the problem and solving the corresponding system of equations [4]. For the algorithm to start
working, the functions should be differentiable. Its complexity is preset by the calculation stages. There is only one solution to the extreme problem for convex functionals and data that meet the statement [5]. The optimization of functionals with several extremes by a conventional method may result in premature convergence: a solution in the form of a local extremum may differ considerably from the global extremum. In such a case, a single solution to the problem close to the global extremum of the function is preferable. Many of the algorithms that give such a solution are not yet widely used in data processing. The goal of the paper is to demonstrate the operation of global optimization algorithms based on genetic algorithms and pattern search [5] in revealing hidden distribution structures in material science and geophysics. Simulated and real examples were used to reconstruct the most probable structures of complex spectra with these algorithms. The information obtained from the reconstructions can be used to approach problems in vibrational spectroscopy, the amorphous state of matter, and the selection of potential field anomalies.
2 Data Processing Algorithms and Methods

2.1 Genetic Algorithm

Genetic algorithms [6], based on Charles Darwin's idea of natural selection – perfecting a species by transferring the best genes to descendants – provide an example of stochastic global optimization. The function of the degree of environmental adaptation of individuals is assessed on a set of individual chromosomes – sequences of ones and zeros that represent numbers in Gray code. The smaller the function value, the better an individual is adapted to the environment. A genetic algorithm searches for the global minimum of this function, beginning with an arbitrary combination of individuals understood as a population. At each iteration, individuals are combined into pairs, and descendants are produced by crossing-over – the exchange of chromosome tails – and mutation – the inversion of a value at a random bit position. The selection of individuals for a new population is based on the degree of adaptation of descendants and parents. The algorithm converges if a new population does not differ from the previous one. Genetic algorithms have several advantages: the functions need not be continuous and differentiable; the algorithms are insensitive to local minima; multi-criterion optimization is possible; convergence can be many times faster than random search; and the algorithms are easy to use. Genetic algorithms also have disadvantages: the biological terminology is hard to follow, and the estimation of the global extremum is inexact, although it can be refined by performing the algorithm several times and selecting the result with the best value of the adaptation function.

2.2 Pattern Search Algorithm

A pattern search algorithm, understood as a set of points in the form of the vertices of an n-dimensional cube which expands and contracts, depending on whether the pattern
point has a smaller value than the current function value, is less laborious. The minimum pattern size is the criterion for terminating the search. In the MATLAB computer mathematics system, the above algorithms are performed by the ga and patternsearch procedures [7].

2.3 Reconstruction of the Most Probable Blurred Distribution Structure

The above algorithms were used to reconstruct the signal and noise structure by the maximum entropy method. The signal s_i and the known blurring function h_ij are employed to restore the distribution function x_i and the additive noise x_{i+n}, connected by the relationship:

s_i = ∑_{j=1}^{n} h_ij x_j + x_{i+n}   (1)
where x_i, x_{i+n} > 0, i, j = 1, 2, …, n, and the noise is represented by a sequence of positive random numbers. The solution which meets the maximum entropy functional

− ∑_{i=1}^{n} x_i ln x_i − ρ ∑_{i=1}^{n} x_{i+n} ln x_{i+n} → max   (2)
and the constraints (1) is the most random and most probable one [8]. The parameter ρ controls the contribution of the noise entropy to the functional (2). The simulated signal s_i was obtained by blurring two δ-peaks with amplitudes 0.5 and 0.7 by the function

h_ij = exp(−0.25 (i − j)²)   (3)

with the addition of noise in the range (0, 0.1). The statement of the problem and the inequalities which preset the variation ranges of the variables were introduced directly into the command line of the ga and patternsearch algorithms. The results of the reconstruction (Fig. 1a, b), obtained without using Lagrange multipliers, show the similarity of the initial model data and the estimates obtained. In the examples discussed, one needs to know the blurring function h_ij to reconstruct the blurred distribution structure. When this function is unknown, its form is selected on additional grounds. Its common form in spectroscopy is a Lorentzian or a Gaussian. Considering that the convolution of a Lorentzian with a Lorentzian gives a Lorentzian, and that of a Gaussian with a Gaussian gives a Gaussian, a procedure with the function h_ij thus selected, with a width presumably smaller than the width of the narrowest distribution component, was applied to the experimental distribution at stage I. As a result, modes indicating the possible number of distribution components and the variation ranges of their parameters are reflected in the distribution estimate [8]. This information was used to search for the parameters of the components by minimizing the discrepancy between the experimental and model distributions. The results of the least squares method for two components are shown in Fig. 2.
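The following Python sketch reproduces this setup with SciPy's differential_evolution standing in for MATLAB's ga/patternsearch; the penalty weight lam and the other numeric values are illustrative assumptions, not the authors' settings:

```python
# Sketch: maximum entropy reconstruction (functional (2) under constraints (1))
# via a penalty formulation and a stochastic global optimizer.
import numpy as np
from scipy.optimize import differential_evolution

n = 20
i = np.arange(n)
H = np.exp(-0.25 * (i[:, None] - i[None, :]) ** 2)       # blurring function (3)
x_true = np.zeros(n); x_true[6] = 0.5; x_true[13] = 0.7  # two delta-peaks
s = H @ x_true + np.random.uniform(0, 0.1, n)            # blurred signal (1)

rho, lam = 18.0, 50.0   # noise-entropy weight and constraint penalty (assumed)

def neg_objective(z):
    x, noise = z[:n], z[n:]
    entropy = -np.sum(x * np.log(x + 1e-12)) - rho * np.sum(noise * np.log(noise + 1e-12))
    residual = np.sum((s - H @ x - noise) ** 2)          # enforce constraints (1)
    return -entropy + lam * residual                     # minimize the negative

bounds = [(1e-9, 1.0)] * (2 * n)
res = differential_evolution(neg_objective, bounds, seed=2, maxiter=300)
x_est, noise_est = res.x[:n], res.x[n:]
```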
Fig. 1. Results of the reconstruction of the signal and noise structure by the maximum entropy method using the global optimization algorithms ga (a) and patternsearch (b) of the MATLAB computer mathematics system: initial δ-peaks (+), data (.), data estimate (o), function estimate (*), and noise estimate. Parameter ρ = 18.
The minimized discrepancy for two components has the form:

∑_{i=1}^{n} ( s_i − A_1 exp(−((i − t_1)/g_1)²) − A_2 exp(−((i − t_2)/g_2)²) )² → min   (4)
Fig. 2. Fitting the model parameters by the least square method using the patternsearch algorithm:1- initial δ -peaks (+); 2- data (.), 3 - data estimate (o), 4,5- component estimates (, ♦).
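A minimal sketch of fitting the two-component model of (4), with SciPy's differential_evolution standing in for the MATLAB procedures; the grid, true parameters, and bounds are illustrative:

```python
import numpy as np
from scipy.optimize import differential_evolution

i = np.arange(60, dtype=float)
# simulated data: two Gaussian components plus noise (illustrative values)
true = 0.5 * np.exp(-((i - 20) / 3.0) ** 2) + 0.7 * np.exp(-((i - 35) / 3.0) ** 2)
s = true + np.random.uniform(0, 0.05, i.size)

def discrepancy(p):
    """Least-squares functional (4): p = (A1, t1, g1, A2, t2, g2)."""
    A1, t1, g1, A2, t2, g2 = p
    model = A1 * np.exp(-((i - t1) / g1) ** 2) + A2 * np.exp(-((i - t2) / g2) ** 2)
    return np.sum((s - model) ** 2)

bounds = [(0, 1), (0, 60), (0.5, 10), (0, 1), (0, 60), (0.5, 10)]
result = differential_evolution(discrepancy, bounds, seed=1)
print(result.x)   # estimated A1, t1, g1, A2, t2, g2
```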
Analysis of the values of the adaptation function and of the model parameters in various runs of the algorithms has shown that selecting the results with the smallest values of the adaptation function is reasonable. The estimates of the mean values and variances of the parameters over various realizations of the algorithms give parameter errors related to the optimization process. Parameter errors, which depend on measurement errors, were estimated by handling the initial distribution within the corridor of three mean square deviations of the channel contents of the histogram and by averaging the model parameters obtained over the realizations.
3 Examples of the Reconstruction of Blurred Spectrum and Distribution Structures

The problem of decomposing complex distribution contours into components arises in various fields of research. In material science, it is encountered because there are blurred spectra and X-ray diffraction patterns of materials that are hard to analyze by conventional methods. Figure 3 shows the results of the decomposition of the IR spectrum of kyanite powder (a) and the X-ray diffraction pattern of distilled water (b) into Gaussian components obtained by the patternsearch algorithm [9]. The same method was used preliminarily to determine the distribution modes and the possible variation ranges of their parameters. The IR spectrum of kyanite powder was recorded on a Specord M 80 spectrophotometer. The processing performed has confirmed the real complexity of the data, revealed their structure, and demonstrated the ability of the optimization algorithms to operate with a large number of components. On the other hand, a problem arose in interpreting the results, imparting a physical meaning to them, and identifying the resulting components as a certain type of vibrations or structural characteristics.
Fig. 3. Decomposition into components of the IR-spectrum of kyanite powder (a) and the X-ray diffraction pattern of distilled water (b) using the patternsearch algorithm.
One demonstrative example is the water problem. In spite of many attempts to study water by modern methods [10], its structure is still unclear. The complexity of the problem is partly due to the inadequacy of the methods employed. For example, the short-range order of the radial distribution functions of electron density, reflecting the contribution of various coordination spheres [11], is used in analysis without considering the directivity of the hydrogen bonds characteristic of water. The approach proposed does not have this disadvantage. The analogues of interplanar distances d, calculated from the positions of the components of the diffraction pattern of water (Fig. 3b) using the formula d = 2π/s, are shown in Table 1. The analogues which coincided with the distances between the nearest hydrogen-bound and non-hydrogen-bound water molecules in hexagonal ice I [12] are shown in bold type. The small contribution of these components to the X-ray diffraction pattern indicates that clathrate models [10] fail to fully explain water structure.

Table 1. Positions of the components of the X-ray pattern of water and the analogues of interplanar distances

No      1     2     3     4     5     6     7     8     9     10    11    12    13
s, Å⁻¹  0.98  1.22  1.40  1.75  2.00  2.20  2.30  2.40  2.60  2.80  3.00  3.20  3.40
d, Å    6.43  5.10  4.49  3.59  3.14  2.86  2.73  2.62  2.42  2.24  2.09  1.96  1.85
The examples discussed contain positive distribution components. Distributions with components of different signs were tested to expand the application field of the global optimization algorithms. This situation typically arises when selecting potential field anomalies in geophysics. In this case, the efficiency of the global optimization algorithms is corroborated by the decomposition of a simulated gravity profile into components (Fig. 4).
Fig. 4. Decomposition of a simulated gravity profile into components using the patternsearch algorithm: A1e = -1.0 (A1 = -1.2); g1e = 3.6 (g1 = 4.0); L1e = 4.0 (L1 = 4.0); A2e = 2.2 (A2 = 2.1); g2e = 3.1 (g2 = 3.0); L2e = 7.0 (L2 = 7.0). The discrepancy between the initial contour and the contour constructed from the reconstructed components is 0.00002.
When analyzing a gravity profile, the parameter that presets the component width estimates the depth of occurrence of an anomaly source. Gravity anomalies are hard to decompose by conventional methods when their sources lie near one vertical line [13]. The global optimization algorithms, used for selecting planar model gravity distributions (Fig. 5), are quite acceptable for this purpose.
Fig. 5. Reconstructing gravimetric data using the patternsearch algorithm and revealing negative and positive anomalies with sources lying near one vertical line: model data (a) and areal distributions of the first (c) and second (e) components and their 3D-form (b, d, f) respectively.
As gravimetric and magnetometric data [14] were processed using the same method, the global optimization algorithms can be employed to select magnetic anomalies.
4 Conclusion

The cryptic distribution structures of random values in various fields were reconstructed using genetic algorithms and the pattern search algorithm adapted to the maximum entropy and least squares methods. Heuristic optimization algorithms have a well-understood basis; they are not overly detailed, and their premature convergence is unlikely; they provide a single solution close to the global extremum of the problem; to perform the algorithms, the functions need not be continuous and differentiable; they do not use Lagrange multipliers and admit the multi-parametric optimization of functions; they are used to obtain stable and most probable estimates of distribution structures; and they are adequate to the physical nature of the study targets. Their application improves the quality of the results and increases the efficiency of data processing.
References 1. Polak, L.S.: Variational principles of mechanics, their development and application to physics. LIBROCOM, Moscow (2010) 2. Fursova, P.V., Levich, A.P., Alexeyev, V.L.: Extreme principles in mathematical biology. Uspekhi v sovremennoi biologii 123, 115–137 (2003) 3. Kopcik, V.A.: Extreme principles of information-synergetic evolution. https://spkurdyumov.ru/globalization/ekstremalnye-principy-informacionno-sinergeticheskoj-evolyucii/. Accessed 24 Nov 2022 4. Kalman, D.: Leveling with Lagrange: an alternate view of constrained optimization. Math. Mag. 82(3), 186–196 (2009) 5. Greshilov, A.A.: Applied problems in mathematical programming: Teaching manual. Logos, Moscow (2006) 6. Holland, J.H.: Adaptation in natural and artificial systems. University of Michigan Press, Ann Arbor, USA (1975) 7. Panchenko, T.V.: Genetic algorithms. Astrakhan University Press, Astrakhan (2007) 8. http://matlab.exponenta.ru/genalg/08_03_03.php. Accessed 24 Nov 2022 9. Belashev, B.Z.: Methods to reveal hidden structures of signals and their applications. Vestnik RUDN 3(2), 132–135 (2010) 10. Alyoshina, L.A., Lyukhanova, I.V.: X-ray diffraction study of technical cellulose-water interaction. Uchenye zapiski Petrozavodskogo gosudarstvennogo universiteta 6, 55–59 (2012) 11. Zakharov, S.D., Mosyagina, I.V.: Cluster water structure. Preprint FIAN, Moscow (2011) 12. Cochran, T.W., Chiew, Y.C.: Radial distribution function of freely jointed hard-sphere chains in the solid phase. J. Chem. Phys. 124(7), P074901 (2006) 13. Zatsepina, G.N.: Physical properties and structure of water. MGU, Moscow (1998) 14. Bychkov, S.G.: Estimating the depth of anomaly-forming sources in the VECTOR system. In: Proceedings of the International School-Seminar "Theoretical and Practical Problems in Geological Interpretation of Gravity, Magnetic and Electrical Fields", pp. 17–18. OIFZ RAS, Apatity (2002) 15. Blokh, Yu.I.: Detecting and separating gravity and magnetic anomalies. MGGA, Moscow (1995)
A Method for Collecting and Consolidating Big Data on the Requirements of Employers for the Competencies of Specialists to Actualize Educational Programs and Resources

Mikhail Deev(B), Alexey Finogeev, Alexander Grushevsky, and Ivan Igoshin
Penza State University, Penza, Russia [email protected]
Abstract. The article deals with the search, collection, and analysis of big data on the requirements of employers for the competencies of specialists in the digital economy, taking into account regional characteristics. A method is proposed for extracting and integrating competency descriptions from offered vacancies in open Internet sources using search crawlers. Descriptions of competencies are presented as vector models of keywords. The method makes it possible to formalize descriptions of a set of in-demand vacancies for subsequent clustering, geospatial analysis, and comparative analysis with vector models of competency descriptions from educational programs offered by educational institutions in the regions. The method of clustering vector models is synthesized using the apparatus of fuzzy logic and neural networks. A methodology has been developed for evaluating educational programs and resources in educational institutions of the region in order to make decisions about the need for their modernization and actualization. The structure of a system for the collection, consolidation, and intellectual analysis of formalized descriptions of the requirements of employers from the real sector of the regional economy is proposed. The consolidated data are used to actualize specialist training programs and modernize educational resources, and to set up a personalized educational environment. The system is an intellectual component of the information and educational environment of a higher educational institution in the region. #CSOC1120.

Keywords: big data · competencies · vector models · cluster analysis · education · smart learning environment · educational program · competence
1 Introduction

The modern trend in the evolution of the education system includes the transition to convergent education. The convergent approach means the convergence of educational trajectories of different specialties in accordance with the converging requirements of professional standards and employers [1]. The process of digitalization of the economy
has led to the fact that the set of competencies of specialists in various professions should include competencies from the field of information, telecommunication, and computing technologies [2, 3]. The result is the convergence of educational programs, the use of universal educational content, and similar teaching methods and technologies. The model of convergent education is based on the convergence of competencies in professional and educational standards [4]. The implementation of the convergent approach requires the integration of educational resources in the information and educational environment for adaptive adjustment of the process of personalized training of specialists to the competence requirements of employers in the regional labor markets. An intellectual analysis of vacancies in demand in the region can show that employers require competencies that are not formed by the educational programs of regional educational institutions. This is typical, first of all, for fields of rapidly developing technologies, for example, medicine, genetic engineering, computer science, etc. [5, 6]. The consequence is the need for prompt actualization of the educational programs offered by educational institutions, which is addressed by managing their life cycles [7]. The system for the collection, consolidation, and intellectual analysis of employers' requirements acts as an intellectual mechanism for actualizing educational programs and content in the information and educational environment [8]. This component includes a set of software tools that automate the processing of big data using cloud technologies. Analytical support for adaptive adjustment processes is implemented on the basis of consolidated data obtained in the process of monitoring the current state of regional labor markets and analyzing changes in demanded competencies. The main functions of the system include:

– collection, extraction, and consolidation of information about the requirements of employers and the sets of necessary competencies in the regions;
– collection, extraction, and analysis of requirements for the competencies of specialists from educational standards;
– verification of educational programs of educational institutions in the region for compliance with the requirements of standards and employers;
– formalization of competence requirements for subsequent clustering;
– comparative analysis of the competency matrices of educational programs for training specialists from different fields of knowledge and assessment of the degree of their similarity;
– comparative analysis of the competency requirements of employers with the competency matrices of educational programs, with an assessment of the degree of their similarity;
– clustering of vector models of keywords of competency descriptions extracted from educational programs and in-demand vacancies;
– ranking of vector models within clusters for making decisions on the choice of educational programs with the maximum degree of similarity of competence models;
– selection of educational content for personalized training of specialists in the selected educational programs.
2 Methods and Results

2.1 Method for Collecting, Consolidating and Formalizing Competency Descriptions

The intellectual educational environment includes a mechanism for adapting educational programs to the requirements of labor markets by actualizing and correcting educational components. Such a mechanism is necessary to identify and analyze trends in the changing requirements for sought-after competencies of specialists at the regional level in the context of the transition to a digital economy. The adaptation system works with big data obtained from open sources on the Internet. Big data here is information about vacancies, the current state of regional labor markets, and the requirements of the real sector of the economy [9, 10]. The sources of information are: 1) job and personnel search sites, 2) labor exchanges, 3) individual sites of large firms and companies, 4) regional bulletin boards, 5) aggregator sites for job offers and job search, and job databases, 6) freelance exchanges, 7) social groups, communities, and forums with job offers. Open access to the described sites and platforms makes it possible to constantly monitor and analyze the requirements for the skills, abilities, and knowledge of specialists needed to perform labor functions in accordance with professional standards [11, 12]. In fact, this set of sources is a distributed knowledge base that is used to extract big data. The data are necessary for actualizing educational programs and electronic educational resources (content) and for building individual trajectories for training or retraining specialists [13, 14]. In the course of the research, a technology for working with online Internet resources was developed. Search agents (crawlers) and parsing modules are used to collect data from the sources. They collect job postings published on the websites of recruitment agencies and enterprises, in social networks and instant messengers, etc. There are several interfaces that allow messages to be retrieved on request or by setting up PUSH notifications on their arrival. When a new message appears, it is retrieved and stored in the database as a text file. To collect and process information about vacancies, the system implements:

– search crawlers (a data collection agent for social networks, a data collection agent for instant messengers, a data collection agent for websites);
– libraries for connecting to specific networks and messengers;
– a module for the recognition and consolidation of data on vacancies;
– a module for checking messages for duplicates and filtering by spatial and temporal data;
– a parsing module for extracting keywords according to the dictionary;
– a module for the recognition and extraction of keywords describing competencies for entry into the dictionary;
– a module for lexical analysis and transformation of texts into a vector model;
– a clustering module for vector competency models;
– a module for extracting similar sets of word vectors [15] and comparative analysis of the selected sets for educational programs and employers' vacancies.

Crawlers are developed in Python using the Scrapy and Scrapy-proxies libraries and implement the logic of standard search robots. Crawlers monitor websites, social
networks, and other sources, storing job vacancies in XML files. When working with social networks (communities, groups), they track the appearance of new messages, topics, and reposts. The parsing modules use the Tomita parser technology, which extracts structured data from natural language text. The information retrieval process is carried out on the basis of context-free grammars and keyword dictionaries. The Tomita parser technology makes it possible to obtain formalized descriptions of competencies from unstructured information. Other means of interaction between representatives of the real sector of the economy and universities can also be used to collect and consolidate descriptions of competencies. In particular, these include applications for surveying employers, in which they can indicate the demand for certain professions together with the necessary skills of specialists.

When consolidating data on the requirements of employers, the stages of formalizing the extracted information are implemented. In the process of data processing, the tasks of cleaning the input stream of records from duplicates and information noise and of binding records to spatial and temporal marks are solved. Data cleansing also involves replacing or removing various abbreviations to formalize competency descriptions. Binding to spatial and temporal labels makes it possible to ensure integrity, eliminate duplicates, and determine the frequency of occurrence of similar publications. To obtain coordinates from the employer's address, open geoinformation services (OpenStreetMap, the Yandex.Maps API) are used. Timestamps are the dates when job postings were created or modified, while geospatial tags are employer location data. Geospatial analysis of vacancies makes it possible to correlate the level of competencies in demand in the region with the competencies of graduates of regional universities in order to identify the risks of an outdated education. Temporal data analysis makes it possible to assess the economic feasibility of actualizing educational programs and resources in the region, taking into account the presented time intervals, or to make a decision on training specialists with the required competencies in neighboring regions. At the same time, when predicting and adapting educational content, it is necessary to take into account the demand for future specialists several years ahead in order to reduce costs and improve the quality of training. Geospatial visualization of vacancies on a map also makes it possible to build graphs of the regional distribution of employers' requirements for spatial analysis of the density of vacancies, the competencies of the required specialists by subject area, bottlenecks regarding the lack of necessary specialists and educational programs, and other characteristics.

At the next stage of big data processing, competence requirements are formalized through the transition to vector models. A vector model is a numerical representation of a set of semantically related words and phrases. The use of keyword vectors saves space for storing big data about vacancies and employer requirements, and allows the resulting data to be indexed and analyzed efficiently. During the synthesis of keyword vectors, metadata about groups of vacancies is extracted. A structured vector representation is also necessary for the synthesis of a graph model of vectors. The graph model makes it possible to select isomorphic subgraphs that represent formalized descriptions of similar competencies.
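A minimal sketch of such a search crawler using Scrapy, in the spirit described above; the site URL and CSS selectors are hypothetical placeholders, not the actual sources used by the system:

```python
import scrapy

class VacancySpider(scrapy.Spider):
    """Collects vacancy cards from a job board and yields structured records."""
    name = "vacancies"
    start_urls = ["https://example-job-board.test/vacancies"]  # placeholder

    def parse(self, response):
        for card in response.css("div.vacancy"):          # assumed page markup
            yield {
                "title": card.css("h2::text").get(),
                "employer": card.css(".employer::text").get(),
                "region": card.css(".region::text").get(),
                "requirements": card.css(".requirements::text").getall(),
            }
        # follow pagination, if present
        next_page = response.css("a.next::attr(href)").get()
        if next_page:
            yield response.follow(next_page, callback=self.parse)
```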
The main difficulty in the formalization process is that job descriptions are presented online by employers and agencies in natural language, in free form. To solve this problem, keyword dictionaries are used, with the help of which formalized descriptions are compiled. If there are no suitable words in the dictionary to describe a competency, a new element is added as a new entity. The application of the updated keyword dictionary includes the following steps:

1. an empty dictionary is created;
2. for the extracted information, the following actions are performed:
   a. the text is divided into words or phrases;
   b. for each word, a close match is searched for in the dictionary;
   c. if the word is present, its weight is increased by 1;
   d. if the word is absent, it is added to the dictionary with a weight equal to 1;
3. after the selection of keywords with weights (the number of repetitions in the description texts), the synthesis of the vector description model is performed.

The methodology for synthesizing vector description models is based on the Word2Vec algorithm and contains the following steps:

1. creation of tuples of text descriptions in a certain format – the values of the input and output words are used for designation, encoded using the one-hot algorithm;
2. creation of a learning model using the vectors obtained in step 1;
3. evaluation of the learning outcomes of the model using the loss function for correctly processed words;
4. determination of qualitative and quantitative metrics of the vector models.

2.2 Method of Fuzzy Clustering of Competence Vectors

At the next stage, an intellectual analysis of the sets of competency vectors of vacancies and educational programs is carried out to identify educational content that is irrelevant or unclaimed at a given time, in order to actualize and adapt it to the requirements of regional labor markets. Such an analysis is necessary to detect obsolete educational programs and related educational materials, the use of which in the training of specialists can lead to a low-quality or unclaimed level of education in the region [16, 17]. Educational programs are associated with educational and methodological content by a one-to-many relationship, since the same educational resources can be used in various specialties and training plans. For the synthesized vector models, the problem of preparing the competency vectors for clustering is solved. The preparation steps include:

1. cleaning the data from special characters and punctuation marks and parsing phrases into words;
2. checking words for signs of "stop words" and "error words" using specialized dictionaries and, in case of matches, filtering them out;
3. selection of competency vectors from job descriptions of employers to determine the initial centroids of the clusters;
4. cross-validation of the selected vectors, which determines the number of word matches in the vector models; the vectors with the maximum number of matches are assigned as the initial centroids of the clusters.

Clustering of competency vectors is implemented using fuzzy logic and a neural network [18]. To train the neural network, sets of competency vectors from existing, currently used educational programs are taken [19, 20]. The membership function of competence vectors in clusters is adjusted in the process of network training, and the conclusion about the degree of similarity with the cluster centroids is formed using the fuzzy logic apparatus. Clustering uses the primary set of centroids specified during the preparation for clustering. The initial data are:

1. Sets of competency vectors from educational programs of educational institutions in the region, grouped by areas of study and specialties within the framework of undergraduate and graduate programs.
2. Sets of competency vectors from educational programs of educational institutions of neighboring regions, grouped by areas of training and specialties within the framework of undergraduate and graduate programs.
3. Sets of retrospective vectors of competencies from employer vacancies obtained in the course of previous searches (but not more than 5 previous years), grouped by areas of training and specialties.
4. Sets of competency vectors from current vacancies of employers obtained from open sources over the past month, grouped by areas of training and specialties.

The sets of competency vectors from groups 1 and 3 are used to train the clustering algorithm, and the set from group 2 is used for extended clustering of those vectors from group 4 that do not fall into clusters that include vectors from group 1. The latter means that the educational programs offered by the educational institutions of the region do not correspond to the in-demand vacancies of employers in it, which requires the search for such programs in educational institutions of neighboring regions and/or actualizing the regional educational programs. Clustering is based on the fuzzy c-means algorithm, which includes the following steps:

1. The number of competency clusters T is determined (it is corrected in the learning process), and the degree of fuzziness of the objective function s > 1 is set.
2. The input sets of competency vectors are represented by feature vectors X_j (j = 1,…,N). Each vector defines a point in space that can belong to clusters with centroids CL^(k) (k = 1,…,T) with a probability membership function

μ_Xj^(k), where 0 < μ_Xj^(k) < 1,   ∑_{k=1}^{T} μ_Xj^(k) = 1,   (1)

which denotes the degree of similarity with the centroid and is given via the distance RS_Xj^(k).
3. At the first stage, points are distributed among clusters according to the matrix of degrees of similarity with the centroids in the feature space.
4. The coordinates of the centroids of the clusters CL^(k) (k = 1,…,T) are determined by calculating the average proximity of the cluster points:

CL^(k) = ∑_{k=1}^{T} μ_Xj^(k) X_j / ∑_{k=1}^{T} μ_Xj^(k)   (2)

5. The distances between the points and the centroids of the clusters are calculated:

RS_Xj^(k) = ‖X_j − CL^(k)‖²   (3)

6. The degrees of membership of the points in the clusters are recalculated, updating the matrix of degrees of similarity:

μ_Xj^(k) = 1 / ∑_{k=1}^{T} ( RS_Xi^(k) / RS_Xj^(k) )^{2/(s−1)}   (4)

where s > 1 is the clustering fuzziness factor.
7. To stop the iterative process, a parameter ε > 0 is set. If the condition

| μ_Xj^(k) − μ_Xj^(k−1) | < ε   (5)
is not satisfied, the algorithm returns to step 5. The network includes five layers without feedback, with weight coefficients and activation functions. An adaptive Takagi–Sugeno–Kang (TSK) model was used to implement the clustering algorithm. The output signal is determined by the aggregation function for T rules and N variables (N = 4):

y(x) = ∑_{k=1}^{T} (w_k · y_k(x)) / ∑_{k=1}^{T} w_k   (6)

where

y_k(x) = z_k0 + ∑_{j=1}^{N} z_kj x_j   (7)
is the kth component of the approximation, and the weights w_k represent the degree of fulfillment of the conditions w_k = μ_A^(k)(x_j). The membership function μ_A^(k) for the variable x_j is represented by a Gaussian function. The output rules Y = (y_1, y_2, …, y_T) for the set of variables X = (x_1, x_2, …, x_N), taking values in the sets A_j^(k), are represented by a matrix of membership function values of size N × T:

R_1: if x_1 ∈ A_1^(1) and x_2 ∈ A_2^(1) and … and x_N ∈ A_N^(1), then y_1(x) = z_10 + ∑_{j=1}^{N} z_1j x_j   (8)
M. Deev et al. (T )
RT : if x1
(T )
(T )
∈ A1 and x2
(T )
(T )
(T )
∈ A2 and, . . . , and xN ∈ AN , then yT (x) = zT 0
N zTj xj j=1
(9) In the first layer, the membership functions for each variable x i are determined by the formula:
μ(k) xj =
1 1+
(k)
xj −ci
2b(k) ,
(10)
i
(k)
σi
(k)
In the second layer, the coefficients wk = μA (x) are determined by aggregating the values of the variables x i . The coefficients wk are transferred to the 3rd layer, where they are multiplied by the values yi (x), and also to the fourth layer to calculate the sum of the weights. On the third layer, the values yi (x) = zk0 N j=1 zkj ∗xj , are calculated and multiplied by the weight coefficients wk. The fourth layer contains two neurons to consolidate the results. On the fifth layer, which includes one neuron, the weights are normalized and the values of the output function are calculated. The parameter values for the first and third layers are selected at the training stage. The parameters of the first layer are considered non-linear, and the parameters of the third layer are considered linear. The training is done in two steps. At the first step, the parameters of the membership functions of the third layer are selected by fixing individual values of the parameters and solving a system of linear equations. The output variables are replaced by reference values GP (P is the number of training samples). The system of equations can be written in matrix form: GP = W *Z. The solution of the system of equations is found by means of the pseudo-inverse matrix W + : Z = W + GP . Further, after fixing the values of the linear parameters zkj , the vector Y of the actual output variables is calculated and the error vector E = Y - GP is determined. At the second step, the errors are sent back to the first layer, where the parameters of the gradient vector of the objective membership function are calculated relative to the parameters of the first layer. The next step is to check and correct the parameters of the membership functions using the fast descent gradient method. After completion of the correction of the nonlinear parameters, the adaptation and synchronization of the linear and nonlinear parameters is carried out. Validation and replacement of incorrect parameters are iterative processes, the repetition of which ends when the values stabilize. To determine the degree of discrepancy between the competence vectors of educational programs and vacancies, three intervals are set for the degree of similarity of the vectors: a) 67–100 (%), b) 34–66 (%), c) 0–33 (%). An educational program that falls into the third interval is considered not to meet the competencies demanded in the regional labor market, according to the vacancies found. The training of a specialist in this program at the current moment may lead to the risk of receiving an unqualified level of training. However, this may also mean that there is an overabundance of specialists of this qualification in the regional labor market at a given moment in time and a lack of demand for them. However, in the future, such
A Method for Collecting and Consolidating Big Data
225
a demand may appear, so it cannot be completely abandoned. In addition, it may turn out that in another region at a given time such specialists are in demand. Therefore, the competence vectors of this program are further used in extended clustering in the process of analyzing vacancies from these regions, and they are placed in the second group. If the competence vectors of the educational program fall into the first interval, this means that this program is in full demand in the regional labor market and the training of specialists for it is relevant at the current time. The falling of vectors into the second interval means that the program partially allows forming the required competencies of a specialist for the regional labor market. However, the educational program requires changes, i.e. a decision is made on the need to actualize it. At the same time, recommendations are generated for actualizating, which indicate mismatched or currently irrelevant components of competency vectors in the form of sets of keywords that describe knowledge, skills and abilities from the educational program, which should be replaced with the required knowledge, skills and skills from employers vacancies. Sets of vector models are compiled not only for clustering the competency vectors of educational programs and the competency vectors of employers vacancies, but also for clustering the vectors of educational programs by areas of training and formulations of specialties. In this case, for each word vector, the appropriate type of competencies from the Federal State Educational Standard is determined, which correlates with the data obtained with real areas of training. The initial data here are information about the downloaded educational programs, according to the content of the sets of vectors “competence” - “key word or phrase” - “direction or specialty of training”. In the course
Fig. 1. Subsystem for intellectual analysis of educational programs and vacancies
226
M. Deev et al.
of clustering, the meanings of words in the descriptions of competencies with scientific research data increase, on the basis of which the vectors of words are included in clusters divided by specialties. It is noted that it manifests itself as a fuzzy character, so the degree of manifestation of qualification in the cluster can be ranged from 0 to 10, where 10 is a complete match, and 0 is a complete mismatch in the use of words. The goal is to obtain all competency vectors in clusters in areas or specialties. The clusters of vectors used describe the competencies of employers and research programs for the implementation of programs and educational and methodological resources in the information and educational environment. The architecture of the subsystem for intellectual analysis of educational programs and vacancies of employers is shown in Fig. 1.
3 Conclusion The method of collecting, consolidating and intellectual analysis of big data on the requirements of employers for the competencies of specialists is used to solve the problems of actualizating educational programs and adapting them to changing conditions in the labor markets. Based on the method, a system has been developed for searching, consolidating, storing and intellectually analyzing popular vacancies and competencies formed by educational programs. The system has a modular structure, including search robots, a text cleaning module, a keyword selection module, a parsing module, a competency vector presentation module, a competency vector mining module, data storage, etc. Modules solve the problems of collecting, processing and consolidating data using crawler technologies. For a formalized representation, a technique for synthesizing vector descriptions of vacancies is used, which makes it possible to consolidate heterogeneous data for subsequent intellectual comparative analysis. The consolidated data is used to solve the problem of clustering competency vectors and actualizating specialist training programs. The method of intellectual analysis of competency vectors is implemented on the basis of the use of fuzzy logic apparatus and neural networks. The advantage of using a neural network for clustering lies in the high speed of processing competency vectors to detect the use of irrelevant, outdated or unclaimed educational programs. The method allows solving the problems of managing the process of modernization of educational programs and educational content, taking into account the requirements of regional labor markets and employers. The developed models and methods are implemented in the form of a system for searching and analyzing the requirements of educational programs and employers, which is an intelligent mechanism for adaptive adjustment of the information and educational environment, developed and implemented at Penza state university. Funding. The research results were obtained with the financial support of the Russian Science Foundation (RSF) and the Penza region (projects No. 22–21-20100).
References 1. Finogeev, A., Kravets, A., Deev, M., Bershadsky, A., Gamidullaeva, L.: Life-cycle management of educational programs and resources in a smart learning environment. Smart Learn. Environ. 5(1), 1–14 (2018). https://doi.org/10.1186/s40561-018-0055-0
A Method for Collecting and Consolidating Big Data
227
2. Pálsdóttir, A., Jóhannsdóttir, L.: Key competencies for sustainability in university of ice-land curriculum. Sustainability 13(16), 8945 (2021). https://doi.org/10.3390/su13168945 3. Kim, J., Kim, J.: Development and Application of Art Based STEAM Education Program Using Educational Robot. In: I. Management Association (Ed.), Robotic Systems: Concepts, Methodologies, Tools, and Applications, pp. 1675–1687 (2020). IGI Global. https://doi.org/ 10.4018/978-1-7998-1754-3.ch080 4. Bourn, D., Soysal, N.: Transformative learning and pedagogical approaches in education for sustainable development: are initial teacher education programmes in england and Turkey ready for creating agents of change for sustainability? Sustainability 13(16), 8973 (2021). https://doi.org/10.3390/su13168973 5. Herr, D.J.C., et al.: Convergence education—an international perspective. J. Nanopart. Res. 21(11), 1–6 (2019). https://doi.org/10.1007/s11051-019-4638-7 6. Whalley, B.: Towards flexible personalized learning and the future educational system in the fourth industrial revolution in the wake of Covid-19 Brian Whalley. Derek France, Julian Park, Alice Mauchline & Katharine Welsh Higher Education Pedagogies 6(1), 79–99 (2021). https://doi.org/10.1080/23752696.2021.1883458 7. Sharples, M.: The design of personal mobile technologies for lifelong learning. Comput. Educ. 34(3), 177–193 (2000) 8. Vesin, B., Mangaroska, K., Giannakos, M.: Learning in smart environments: user-centered design and analytics of an adaptive learning system. Smart Learn. Environ. 5(1), 1–21 (2018). https://doi.org/10.1186/s40561-018-0071-0 9. Robert, I.V.: Convergent education: origins and prospects. Russ. J. Soc. Sci. Hu-manit., 2(32), 64–76 (2018). https://doi.org/10.17238/issn1998-5320.2018.32.64 10. Deev, M., Gamidullaeva, L., Finogeev, A., Finogeev, A., Vasin, S.: Sustainable Educational Ecosystems: Bridging the Gap between Educational Programs and in Demand Market Skills, In: E3S Web of Conferences, vol. 208, p. 09025 (2020). https://doi.org/10.1051/e3sconf/202 020809025 11. Smitsman, A., Laszlo, A., Luksha, P.: Evolutionary learning ecosystems for thrivable futures: crafting and curating the conditions for future-fit education. World Futures 76(4), 214–239 (2020). https://doi.org/10.1080/02604027.2020.1740075 12. Tetzlaff, L., Schmiedek, F., Brod, G.: Developing personalized education: a dynamic framework. Educ. Psychol. Rev. 33(3), 863–882 (2020). https://doi.org/10.1007/s10648-020-095 70-w 13. Bartolomé, A., Castañeda, L., Adell, J.: Personalisation in educational technology: the absence of underlying pedagogies. Int. J. Educ. Technol. High. Educ. 15(1), 1–17 (2018). https://doi. org/10.1186/s41239-018-0095-0 14. Álvarez, I., Etxeberria, P., Alberdi, E., Pérez-Acebo, H., Eguia, I., García, M.J.: Sustainable civil engineering: incorporating sustainable development goals in higher education curricula. Sustainability 13(16), 8967 (2021). https://doi.org/10.3390/su13168967 15. Bershtein, L.S., Bozhenyuk, A.V.: Estimation of isomorphism degree of fussy graphs. In: M. Wagenknecht, R. Hampel (Eds.), Proceedings of the 3rd Conference of the European Society for Fuzzy Logic and Technology EUSFLAT’2003, Zittau, Germany, pp. 781–784 (2003) 16. Gros, B.: The design of smart educational environments. Smart Learn. Environ. 2016(3), 15 (2016). https://doi.org/10.1186/s40561-016-0039-x6) 17. Huda, M., Haron, Z., Ripin, M.N., Hehsan, A., Yacob, A.C.: Exploring Innovative Learning Environment (ILE): Big Data Era. Int. J. Appl. Eng. Res. 
12(17), 6678–6685 (2017) 18. Parisi, G.I., Kemker, R., Part, J.L., Kanan, C., Wermter, S.: Continual lifelong learning with neural networks: a review. Neural Netw. 113, 54 – 71 (2019)
228
M. Deev et al.
19. Finogeev, A., Gamidullaeva, L., Bershadsky, A., Fionova, L., Deev, M., Finogeev, A.: Convergent approach to synthesis of the information learning environment for higher education. Educ. Inf. Technol. 25(1), 11–30 (2019). https://doi.org/10.1007/s10639-019-09903-5 20. Deev, M., Gamidullaeva, L., Finogeev, A., Finogeev, A., Vasin, S.: The convergence model of education for sustainability in the transition to digital economy. Sustainability 13, 11441 (2021). https://doi.org/10.3390/su132011441
Simulation of a Neural Network Model Identification Algorithm Alexander Terekhov1(B) , Evgeny Pchelintsev2 , Larisa Korableva3 , Svyatoslav Perelevsky2 , and Roman Korchagin4 1 National Research University “MPEI”, Krasnokazarmennaya Str. 14, 111250 Moscow, Russia
[email protected]
2 National Research Tomsk State University, Lenin Avenue 36, 634050 Tomsk, Russia 3 Moscow State University, Leninskiye Gory 1, 119991 Moscow, Russia 4 Voronezh State University, Universitetskaya Sq. 1, 394018 Voronezh, Russia
Abstract. An algorithm developed on the basis of a multi-layer neural network with learning is proposed for discriminating deterministic functions in the presence of random distortions and under the conditions of both parametric and non-parametric a priori uncertainty. Statistical simulation methods are used to establish its operability and sufficiently high efficiency. The conditions under which its application is expedient in practice are stated. #CSOC1120.

Keywords: Model identification · Neural network · Non-parametric a priori uncertainty · Maximum likelihood method · Ziegert-Kotelnikov criterion · Average error probability · Simulation
1 Introduction

One of the important tasks of modern science and technology is the construction and verification of a mathematical model that describes the relationship of various parameters of the process or phenomenon under study [1, 2]. From the analytical point of view, for setting a model, it is enough to establish the type of functional dependence between the analyzed quantities that determines the structure of the model. The issue of compliance of the real situation with the established dependence is usually solved by experimental or statistical methods. For example, to find out which of the available mathematical dependencies is more consistent with the observed data, one can apply either the Bayesian method [3] or the maximum likelihood method [4, 5]. However, for their practical implementation, these approaches require a certain amount of a priori information, which is often absent. In addition, their application encounters significant difficulties in the presence of nonparametric uncertainty. In this paper, to overcome these limitations, a model identification algorithm is proposed, which is implemented using neural network technologies. For clarity, it is assumed that the task of discrimination is reduced to choosing one of two possible functional dependencies. Statistical simulation methods are used to estimate the performance and efficiency of the introduced algorithm.

© The Author(s), under exclusive license to Springer Nature Switzerland AG 2023 R. Silhavy and P. Silhavy (Eds.): CSOC 2023, LNNS 722, pp. 229–236, 2023. https://doi.org/10.1007/978-3-031-35311-6_25
2 Problem Statement

Let f0(t) and f1(t) be the distinguishable functions. Then it is assumed that the realization of the random process ξ(t) is available for observation, which is an additive mix of the Gaussian white noise n(t) with the one-sided spectral density (intensity) N0 and one of the functions fi(t), i = 0, 1, introduced above. From the point of view of the statistical decision theory [6], the model identification can be reduced to the problem of discriminating between the signals described by the functions f0(t) and f1(t). Thus, the realization of the observed data can be represented as

$$\xi(t) = \theta f_1(t) + (1-\theta)f_0(t) + n(t). \qquad (1)$$
Here θ is a discrete parameter that can take two values: θ = 1, if the useful signal is described by the function f1(t), and θ = 0, if the useful signal is described by the function f0(t). Based on (1), it is required to decide which of the two signals is present in the observed realization. This is equivalent to choosing a mathematical model that describes the observed data. The a priori probabilities p0 and p1 of the presence of the functions f0(t) and f1(t), respectively, are assumed to be known, as well as that p0 + p1 = 1. In the particular case of the equiprobable appearance of f0(t) and f1(t), one gets

$$p_0 = p_1 = 0.5. \qquad (2)$$
The optimal algorithms for discriminating signals are described and studied in sufficient detail in the well-known works [6, 7]. This makes it possible to compare the discrimination quality achieved when using statistical and neural network approaches. It is noteworthy that the task of discriminating signals can be formulated in terms of testing statistical hypotheses. The case when the realization (1) contains the function f0(t) (with θ = 0) is called the hypothesis H0, and the case when the realization (1) contains the function f1(t) (with θ = 1) is the hypothesis H1. Then the probabilities p0 = P(H0) = P(θ = 0) and p1 = P(H1) = P(θ = 1) are called the a priori probabilities of the corresponding hypotheses. In the particular case when f0(t) = 0, the discrimination task is reduced to the task of the signal detection against the background of the white noise. Due to the presence of noise, the decision made in favor of one or another hypothesis cannot be error-free. In this regard, to solve the discrimination problem, it is necessary to set an optimality criterion, taking into account what errors may occur. The decision made is denoted by γ: when γ = 0, the decision is made in favor of the hypothesis H0, and when γ = 1, it is made in favor of the hypothesis H1. Firstly, the error probabilities that characterize the efficiency of one or another processing algorithm are considered.

1) The type I error probability, when a decision is made in favor of the hypothesis H1 but in fact the hypothesis H0 is true:

$$\alpha = P(H_1 \mid H_0) = P(\gamma = 1 \mid \theta = 0), \qquad (3)$$
that is, the decision is made that the signal f1(t) is present in the observed data (1), while the signal f0(t) is actually received. In the problem of signal detection, such a situation is called a false alarm, and the quantity α is called the false alarm probability.

2) The type II error probability, when a decision is made in favor of the hypothesis H0 but in fact the hypothesis H1 is true:

$$\beta = P(H_0 \mid H_1) = P(\gamma = 0 \mid \theta = 1), \qquad (4)$$
that is, the decision is made that the signal f0(t) is present in the observed data (1), while the signal f1(t) is actually received. In the problem of signal detection, such a situation is described as the signal missing, and the quantity β is the missing probability.

3) The average error probability:

$$p_e = p_0\alpha + p_1\beta. \qquad (5)$$
For most of the optimality criteria, the decision algorithm requires the generation of the likelihood ratio or its logarithm L, with its subsequent comparison with the threshold [6, 7]:

$$L = \ln\Lambda = \frac{2}{N_0}\int_0^T \xi(t)\,[\,f_1(t) - f_0(t)\,]\,dt \;-\; \frac{E_1 - E_0}{N_0} \;\mathop{\gtrless}\limits_{H_0}^{H_1}\; \ln h. \qquad (6)$$
Here $E_1 = \int_0^T f_1^2(t)\,dt$ and $E_0 = \int_0^T f_0^2(t)\,dt$ are the squares of the norms of the discriminated functions, coinciding in meaning with the signal energies, and h is the threshold chosen in accordance with the accepted optimality criterion. For example, when using the Ziegert-Kotelnikov criterion [6, 7],

$$h = p_0/p_1, \qquad (7)$$

while using the maximum likelihood criterion [6, 7] one gets h = 1, etc. In many cases of practical importance, the norms of the discriminated functions are the same, i.e.

$$E_0 = E_1 = E. \qquad (8)$$
Then the expression (6) is simplified and takes the form

$$L = \ln\Lambda = \frac{2}{N_0}\int_0^T \xi(t)\,[\,f_1(t) - f_0(t)\,]\,dt \;\mathop{\gtrless}\limits_{H_0}^{H_1}\; \ln h.$$
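As an illustration of the rule (6), the following minimal Monte Carlo sketch estimates the error probabilities (3)-(5) of the correlation discriminator for two equal-energy test functions in white Gaussian noise; the particular functions f0, f1 and all numeric values are illustrative assumptions, not parameters taken from the paper.

```python
import numpy as np

# Monte Carlo sketch of the correlation discriminator (6) for two equal-energy
# signals in white Gaussian noise. The example functions f0, f1 and all
# simulation parameters below are illustrative choices, not from the paper.
rng = np.random.default_rng(0)
T, n = 1.0, 1000           # observation interval and number of time samples
dt = T / n
t = np.arange(n) * dt
f0 = np.sqrt(2 / T) * np.sin(2 * np.pi * 5 * t / T)   # two orthogonal test signals
f1 = np.sqrt(2 / T) * np.sin(2 * np.pi * 6 * t / T)
N0 = 0.1                   # one-sided noise spectral density
h = 1.0                    # maximum likelihood threshold (p0 = p1)

def decide(theta):
    """Simulate xi(t) for the given true hypothesis and apply rule (6)."""
    f_true = f1 if theta == 1 else f0
    # discrete-time white noise with one-sided density N0: variance N0/(2 dt)
    noise = rng.normal(0.0, np.sqrt(N0 / (2 * dt)), n)
    xi = f_true + noise
    L = (2.0 / N0) * np.sum(xi * (f1 - f0)) * dt      # equal energies, (E1-E0)/N0 = 0
    return 1 if L > np.log(h) else 0

trials = 20000
alpha = np.mean([decide(0) == 1 for _ in range(trials)])   # type I error (3)
beta = np.mean([decide(1) == 0 for _ in range(trials)])    # type II error (4)
print(f"alpha ~ {alpha:.4f}, beta ~ {beta:.4f}, pe ~ {0.5 * (alpha + beta):.4f}")
```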
> 1 and d is fixed, then one gets the following from (28):
$$w_{\tilde R}(\tilde\lambda) \approx \tilde\lambda \exp\!\Big(-\frac{\tilde\lambda^2 + q^2}{2}\Big)\, I_0(\tilde\lambda q). \qquad (29)$$

It is easy to see that the distribution (29) coincides with the distribution (18) of the sufficient statistics (11) when the initial phase of the radio signal (2) is uniformly distributed.
In the case when q → ∞ and the Bessel function asymptotics (5) is applied, one arrives at (29) with the Gaussian probability density with the parameters N(q, 1):
$$w_{\tilde R}(\tilde\lambda) \to \frac{\tilde\lambda}{\sqrt{2\pi\tilde\lambda q}}\, \exp\!\Big(-\frac{(\tilde\lambda - q)^2}{2}\Big) \approx \frac{1}{\sqrt{2\pi}}\, \exp\!\Big(-\frac{(\tilde\lambda - q)^2}{2}\Big), \qquad (30)$$

that appears to be the probability density of the sufficient statistics (11) when the radio signal (2) with the a priori known initial phase is received. Now, presupposing that μ̃ >> q in (28), one gets the following approximation for this probability density of the sufficient statistics
$$w_{\tilde R}^{(0)}(z) \approx z\,\tilde q^{\,2} \exp\!\Big(-\frac{z^2\tilde q^{\,2} + d^2}{2}\Big)\, I_0\big(z d\,\tilde q^{\,2}\big),$$

or

$$w_{\tilde R}^{(0)}(\tilde\lambda) = \tilde\lambda \exp\!\Big(-\frac{\tilde\lambda^2 + \tilde\mu^2}{2}\Big)\, I_0(\tilde\lambda\tilde\mu). \qquad (31)$$
From (31), it follows that the distribution of the sufficient statistics is nondegenerate at q → 0. Since the choice of the amplitude Ã of the reference signal is arbitrary, one can set Ã = 1 or take it such that q̃ = 1. In the latter case, the form of the quantity χ̃ (26) is somewhat simplified, and the distribution of the sufficient statistics (27), (28) contains only the essential parameters (q, d, ϕ0). It is easy to see that at d → 0 the probability density (31) becomes the Rayleigh one, as in the case of a uniform a priori distribution of the initial phase and the absence of a radio signal (2) in the observed data (1). At d >> 1, from (31) one obtains the normal probability density (30) corresponding to the reception of the known (deterministic) radio signal. Thus, the transition to (25), on the one hand, makes it possible to eliminate the degeneracy of the distribution of the sufficient statistics at q → 0 and, on the other hand, in the limiting cases, i.e. when q = 0, q >> 1, q → ∞, leads to the known distributions that describe the corresponding situations in the case of optimal reception.
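These limiting cases can be checked numerically. The sketch below evaluates a Rician-type density of the form (31), w(x) = x exp(−(x² + m²)/2) I0(xm), and compares it with its Rayleigh (m = 0) and Gaussian (large m) limits; the grid and the value m = 8 are illustrative assumptions.

```python
import numpy as np
from scipy import special

# Numeric check of the limiting cases of a Rician-type density like (31):
# at m = 0 it reduces exactly to the Rayleigh density, and for large m it
# approaches a Gaussian N(m, 1).
def w(x, m):
    # exp-scaled Bessel ive avoids overflow: I0(x*m) = ive(0, x*m) * exp(x*m)
    return x * np.exp(-((x - m) ** 2) / 2) * special.ive(0, x * m)

x = np.linspace(0.01, 16, 2000)
rayleigh = x * np.exp(-x**2 / 2)
gauss = np.exp(-(x - 8.0) ** 2 / 2) / np.sqrt(2 * np.pi)
print("max |w(x, 0) - Rayleigh| =", np.abs(w(x, 0) - rayleigh).max())
print("max |w(x, 8) - N(8, 1)|  =", np.abs(w(x, 8.0) - gauss).max())
```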
6 Characteristics of Detection and Discrimination

Based on the distributions found, one can now write down the characteristics of the detection of the radio signal (2) with a random non-uniformly distributed phase, as well as the characteristics of the discrimination of M orthogonal radio signals si(t, ϕ0i) with the unknown initial phases ϕ0i, while using the modified sufficient statistics (25). It is assumed that the a priori probabilities pi of the appearance of the radio signals si(t, ϕ0i) in the observed data (1) and the energies Ei of the radio signals si(t, ϕ0i) are the same: pi = 1/M, Ei = E, i = 1, M, while the initial phases ϕ0i, i = 1, M are described by the same distribution law (4). The decision rule of detection takes the form

$$\tilde R \;\mathop{\gtrless}\limits_{0}^{1}\; h.$$
That is, if the threshold h selected in accordance with the accepted optimality criterion is exceeded, it is assumed that the radio signal (2) is present in the realization (1); otherwise it is assumed to be absent. To discriminate between M orthogonal equiprobable radio signals with the same energies and unknown initial phases, one uses the decision rule of the form [2]

$$\max_{i=\overline{1,M}} \tilde R_i = \tilde R_l \;\to\; s_l(t, \varphi_{0l}),$$
where R̃i is determined from (25) when replacing s(t, ϕ0) by si(t, ϕ0i). Thus, the discriminator makes the decision in favor of the presence of the signal sl(t, ϕ0l) in the realization of the observed data when the sufficient statistics R̃l (25) for this signal is maximal. As the detection characteristics, the Type I (the false alarm) α and the Type II (the missing signal) β error probabilities are selected, while the average discrimination error probability Pe serves as the discrimination characteristic. Following the definition [1–4], one gets
$$\alpha = P\big(\tilde R > h \,\big|\, x(t) = \eta(t)\big), \qquad \beta = P\big(\tilde R < h \,\big|\, x(t) = s(t, \varphi_0) + \eta(t)\big),$$

$$P_e = \sum_{i=1}^{M} p_i \sum_{j=1,\, j\ne i}^{M} P(s_j \mid s_i) = \frac{1}{M}\sum_{i=1}^{M}\sum_{j=1,\, j\ne i}^{M} P(s_j \mid s_i). \qquad (32)$$
Here P(sj | si) is the probability of making a decision in favor of the presence of the radio signal sj(t, ϕ0j) in the observed data, while the radio signal si(t, ϕ0i) is actually being received. Using (31), for the false alarm probability one can write down

$$\alpha = 1 - F_{\tilde R}^{(0)}(h), \qquad F_{\tilde R}^{(0)}(h) = \int_0^h w_{\tilde R}^{(0)}(\tilde\lambda)\, d\tilde\lambda. \qquad (33)$$
The missing signal probability, in accordance with the definition (32), is then determined as
$$\beta = \int_0^h w_{\tilde R}(\tilde\lambda)\, d\tilde\lambda, \qquad (34)$$
where $w_{\tilde R}(\tilde\lambda)$ is in its turn determined based on (28). The average discrimination error probability (32) for M equiprobable signals with the same energies has been found in [2]. According to [2], at ϕ0i = ϕ0, i = 1, M, one gets
$$P_e = 1 - \int_0^{\infty} w_{\tilde R}(\tilde\lambda)\Big[F_{\tilde R}^{(0)}(\tilde\lambda)\Big]^{M-1} d\tilde\lambda, \qquad (35)$$
where $w_{\tilde R}(\tilde\lambda)$ and $F_{\tilde R}^{(0)}(\tilde\lambda)$ are determined based on (28) and (33), respectively. In Fig. 1, the false alarm probability α = α(h) is plotted for the case when the radio signal (2) with an unknown initial phase is detected. The curves 1, 2, 3 are calculated by the formula (33) at d = 0.5, d = 2, d = 5, respectively.
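For the uniform-phase limit d → 0 mentioned above, where the noise-only density is Rayleigh and the signal density (29) is Rician, the quantities (33)-(35) can be evaluated numerically as in the following sketch; the SNR value, the number of signals M and the false alarm level are illustrative assumptions.

```python
import numpy as np
from scipy import integrate, special

# Numerical sketch of (33)-(35) in the uniform-phase limit (d -> 0), where the
# "noise only" density is Rayleigh and the "signal present" density (29) is
# Rician. Parameter values are illustrative, not taken from the paper.
q = 3.0          # signal-to-noise ratio
M = 4            # number of equiprobable orthogonal signals

def F0(lam):     # Rayleigh distribution function, closed form
    return 1.0 - np.exp(-lam**2 / 2)

def w1(lam):     # density (29): Rician with parameter q, overflow-safe via ive
    return lam * np.exp(-((lam - q) ** 2) / 2) * special.ive(0, lam * q)

# False alarm probability (33) for a Neyman-Pearson threshold at alpha = 0.01:
alpha = 0.01
h = np.sqrt(-2 * np.log(alpha))          # solves 1 - F0(h) = alpha
# Missing signal probability (34):
beta, _ = integrate.quad(w1, 0, h)
# Average discrimination error probability (35):
p_correct, _ = integrate.quad(lambda lam: w1(lam) * F0(lam) ** (M - 1), 0, np.inf)
print(f"h = {h:.3f}, beta = {beta:.4f}, Pe = {1 - p_correct:.4f}")
```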
Fig. 1. The false alarm probability for the different widths of the a priori distribution of the initial phase of the radio signal
Fig. 2. The missing signal probability in the case of the uniform and non-uniform a priori distribution of the initial phase of the radio signal
Fig. 3. The missing signal probability at different widths of the a priori distribution of the initial phase of the radio signal.
Fig. 4. The average probability of discrimination error in the case of the equiprobable orthogonal radio signals
In Fig. 2, the dependencies β = β(q) (34) of the missing signal probability on the SNR q (10) are demonstrated. The curve 1 corresponds to the case of the uniformly distributed initial phase (d = 0), and the curve 2 to the case of a moderately non-uniformly distributed initial phase (d = 0.5) when b = 0, ϕ0 = 0. In Fig. 3, the missing signal probabilities (34) are shown for the different phase distribution widths: d = 0.5 (the curve 1), d = 2 (the curve 2), d = 5 (the curve 3). For definiteness, it is assumed that b = 0, ϕ0 = 0. When constructing the dependencies β(q) (34) shown in Figs. 2 and 3, the detection threshold h is determined from (33) based on the Neyman-Pearson criterion at the accepted false alarm probability level α = 0.01.
In Fig. 4, one can see the dependences of the average error probability of discriminating between the M = 4 equiprobable orthogonal signals on the SNR q (10) and the different widths of the a priori phase distribution. The curves 1, 2, 3 are all calculated by the formula (35): the curve 1 at b = 0, ϕ0 = 0 and d = 0 (the case of a uniformly distributed initial phase), and the curves 2 and 3 for the cases when d = 0.5 and d = 2, respectively. From Figs. 1, 2, 3 and 4, the following conclusions can be drawn:
1) a priori phase non-uniformity significantly affects the distribution of the sufficient statistics in the absence of a useful signal and, accordingly, the value of the detection threshold for the specified value of the false alarm probability;
2) a priori phase non-uniformity somewhat decreases the missing signal probability (when detecting a radio signal) and the average error probability (when discriminating the orthogonal equiprobable radio signals);
3) with an increase in the non-uniformity of the phase distribution, the missing signal probability (when detecting a radio signal) and the average error probability (when discriminating the orthogonal equiprobable radio signals) decrease rather weakly;
4) the values of the missing signal probability (when detecting a radio signal) and the average error probability (when discriminating the orthogonal equiprobable radio signals) practically cease to depend on the value of d when d ≥ 2.
7 Conclusion

To calculate the detection and discrimination characteristics in the case of the radio signals with a random initial phase, instead of the sufficient statistics generated by the optimal receiver, which is degenerate at the vanishingly small amplitude of the useful signal, it is advisable to consider the modified sufficient statistics, which is nondegenerate for any signal-to-noise ratio. Based on it, the calculations of the detection characteristics when the radio signal has a random initial phase, as well as the calculations of the discrimination characteristics of the equiprobable orthogonal radio signals, demonstrate that a priori phase non-uniformity significantly affects the false alarm probability value (and, accordingly, the detection threshold value). In addition, an increase in the non-uniformity of the a priori phase distribution results in a slight decrease in the values of the missing signal probability (when detecting a radio signal) and the average error probability (when discriminating the orthogonal equiprobable radio signals). However, at the values of the non-uniformity parameter greater than 2, the dependence of the missing signal probability and the average error probability on this parameter practically disappears (a further increase in the degree of non-uniformity of the distribution of the initial phase has practically no effect on the values of these probabilities). The results obtained make it possible to determine the efficiency of the optimal (in one sense or another) algorithm for detecting a radio signal or discriminating radio signals under the conditions of a priori uncertainty about the initial phases in each specific case.

Acknowledgements. The work was supported by the Ministry of Education and Science of the Russian Federation (research project No. FSWF-2023–0012).
References

1. Kulikov, E.I., Trifonov, A.P.: Estimation of Signal Parameters Against Hindrances. Sovetskoe Radio, Moscow (1978). (in Russian)
2. Trifonov, A.P., Shinakov, Y.S.: Joint Discrimination of Signals and Estimation of Their Parameters Against Hindrances. Radio i Svyaz', Moscow (1986). (in Russian)
3. Van Trees, H.L., Bell, K.L., Tian, Z.: Detection, Estimation, and Modulation Theory, Part I. Detection, Estimation, and Filtering Theory, 2nd edn. Wiley, New York (2013)
4. Proakis, J.G., Salehi, M.: Digital Communications, 5th edn. McGraw-Hill Higher Education, New York (2007)
5. Meyr, H., Ascheid, G.: Synchronization in Digital Communication, Volume I: Phase-, Frequency-Locked Loops, and Amplitude Control, 1st edn. Wiley, New York (1990)
6. Parkinson, B.W., Spilker, J.J. (eds.): Global Positioning System: Theory and Applications, vol. I. American Institute of Aeronautics and Astronautics, Reston (1996)
7. Akaiwa, Y.: Introduction to Digital Mobile Communication, 2nd edn. Wiley, New York (2015)
8. Abramowitz, M., Stegun, I.A. (eds.): Handbook of Mathematical Functions with Formulas, Graphs and Mathematical Tables, vol. 55. National Bureau of Standards, Applied Mathematics Series, USA (1964)
9. Farag, A.: Introduction to Probability Theory with Engineering Applications. Cognella Academic Publishing, San Diego (2021)
Asymptotic Exact Formulas for Characteristics of the Joint Maximum Likelihood Estimates Under a Partial and Complete Violation of the Regularity Conditions of the Decision Determining Statistics Oleg Chernoyarov1,2(B) , Alexander Zakharov3 , Larisa Korableva4 , and Kaung Myat San2 1 National Research Tomsk State University, Lenin Avenue 36, 634050 Tomsk, Russia
[email protected]
2 National Research University “MPEI”, Krasnokazarmennaya Str. 14, 111250 Moscow, Russia 3 Voronezh State University, Universitetskaya Sq. 1, 394018 Voronezh, Russia 4 Moscow State University, Leninskiye Gory 1, 119991 Moscow, Russia
Abstract. The aim of this study is to obtain general asymptotic exact formulas for the characteristics of the maximum likelihood estimates of the signal parameters under various conditions of the decision determining statistics regularity, which makes it possible to analytically determine the efficiency of the synthesized estimation algorithm. Three possible cases are considered: when the decision determining statistics is differentiable with respect to all unknown parameters (the regular case); when the decision determining statistics is not differentiable with respect to any of the unknown parameters (the singular case); and when the decision determining statistics is differentiable with respect to some of the parameters but non-differentiable with respect to the others (the case of partial violation of the regularity of the decision determining statistics). In the first case, the formulas for the characteristics of the estimates are obtained based on a multidimensional generalization of the small parameter method; in the second case, a generalization of the local Markov approximation method by means of an additive-multiplicative representation of the moments of the decision determining statistics is applied; in the third case, a generalization of the two indicated approaches is carried out. The characteristics of the estimates in all the considered cases are established. #CSOC1120

Keywords: Random process · Maximum likelihood estimate · Regular parameter · Discontinuous parameter · Small parameter method · Local Markov approximation method · Local additive approximation method · Bias of estimate · Variance of estimate
© The Author(s), under exclusive license to Springer Nature Switzerland AG 2023 R. Silhavy and P. Silhavy (Eds.): CSOC 2023, LNNS 722, pp. 248–272, 2023. https://doi.org/10.1007/978-3-031-35311-6_27
1 Introduction

In various fields of physics and engineering, there is a problem of measurement (estimation) of the parameters of the information signals or images observed against random interferences. The signal parameter estimation problem is of particular importance in radio physics, radio astronomy, hydroacoustics, seismology and geology, as well as in many radio engineering applications, such as radio communication, radio control, telemetry, radiolocation and radio navigation. Technical diagnostics and process control are also fields where this problem is a challenge. One of the most efficient methods for synthesizing the algorithm for estimating the parameters of the signals against interferences is the maximum likelihood method [1–5]. Application of the maximum likelihood method allows obtaining relatively simple and rather effective algorithms for the estimation of the signal parameters. A special advantage of such algorithms is that they require a minimum amount of prior information. However, the final conclusion about the usefulness of the maximum likelihood estimates (MLEs) for the solution of certain practical tasks should be made only after the estimate characteristics have been analyzed. The methods for calculating the characteristics of the joint estimates of signal parameters, including their joint MLEs, gain special importance in this context. The appropriateness of particular methods for calculating the characteristics of the joint estimates of signal parameters significantly depends upon the analytical properties of the decision determining statistics of the examined estimation algorithm. Specifically, when analyzing the accuracy of MLEs, the focus is on the regularity of the logarithm of the functional of the likelihood ratio (FLR) as a function of the estimated signal parameters, which is the decision determining statistics of the estimation algorithm in this case [4–7]. The Gaussian logarithm of FLR is regular with respect to a given parameter if, at least, the first two moments of the logarithm of FLR have continuous second derivatives with respect to all the unknown parameters [4–7]. The signal parameters that satisfy these regularity conditions are termed regular [4, 5]. By applying the small parameter method [3, 5], one can find the asymptotically exact (with an increase in the signal-to-noise ratio (SNR)) expressions for the characteristics of the joint MLEs of the regular parameters of the signal observed against interferences. However, there is a wide class of signals commonly known as discontinuous [4–6], or non-analytical [8]. For certain parameters of discontinuous signals, the regularity conditions for the logarithm of FLR are not satisfied. Following [5, 9], such parameters are called discontinuous. The simplest examples of the discontinuous parameters are the time of arrival and the duration of rectangular video and radio pulses, the frequency of the narrow-band radio signal with a uniform spectrum in a limited frequency band, the time delay of some discrete complex signals, etc. [5, 8, 10–12]. The small parameter method [3, 5] cannot be applied to find the MLE characteristics of the discontinuous parameters, because it presupposes the regularity of the logarithm of FLR. Otherwise, one may get, for example, a zero value of the discontinuous parameter estimate dispersion and a number of other incorrect results.
In order to calculate the asymptotically exact (with an increase in SNR) expressions for the characteristics of the MLEs of the discontinuous signal parameter, the local Markov approximation (LMA) method can be used [5, 7, 12]. The idea of the LMA method is to approximate the logarithm of FLR, or its increments, by Markov or local
Markov random process. Then, by applying the mathematical apparatus of the Markov processes theory, one can obtain the asymptotically exact expressions for the characteristics of the MLEs of the discontinuous signal parameters. However, the LMA method is applicable for calculating the characteristics of the MLE of only one discontinuous signal parameter, it being presupposed that all other discontinuous signal parameters are a priori known. In practice, signal processing is implemented, as a rule, under the conditions of prior uncertainty, when one is unaware of both the informative signal parameters to be estimated and some other (non-informative) parameters. This leads to a need to carry out a joint estimation of a number of unknown signal parameters, among which some may be discontinuous. The LMA method appears inapplicable in the calculation of the characteristics of the joint estimates of discontinuous parameters. In [9], a procedure is proposed that makes it possible to calculate the asymptotic (with an increase in SNR) characteristics of the joint MLEs of a single discontinuous and several regular parameters. Still, the results obtained in [9] are relevant only in the case of a quasi-deterministic signal; the joint estimate characteristics presented there are inapplicable in the more general case when the processed signal is random by its substructure [13]. Besides, the results from [9] cannot help in the calculation of the joint MLE characteristics when these MLEs are of several discontinuous and several regular signal parameters. Thus, the universal methods for obtaining the analytical expressions for the characteristics of the joint MLEs of several discontinuous signal parameters are still unknown. Neither are known the universal methods for calculating the characteristics of the joint MLEs of a number of discontinuous and one or more regular parameters. At present, the characteristics of the joint estimates of the discontinuous parameters have been obtained for certain special applications only. The approach [9] to calculating the characteristics of the joint MLEs of a single discontinuous and several regular parameters requires a further generalization embracing a wider class of signals with the random substructure [13]. That is why it is difficult to analyze the performance of information systems when using the discontinuous signal models. The specified difficulties faced while calculating the characteristics of the joint MLEs of the discontinuous signal parameters can be overcome if the moments of the logarithm of FLR allow for an additive-multiplicative representation [7]. In this case, the mathematical expectation, the correlation function and some other moments of the logarithm of FLR are expressed as the sums of a finite number of summands, each of which is the product of functions of one parameter only. Then, to find the asymptotically exact (with an increase in SNR) expressions for the characteristics of the joint MLEs of the discontinuous signal parameters, one can apply the local additive approximation (LAA) method that is considered below. The LAA method allows reducing the problem of calculating the characteristics of the joint estimates of the discontinuous signal parameters to the simpler problem of finding the characteristics of the separate estimates of the corresponding parameters.
Here, to get the characteristics of the separate MLEs of the discontinuous signal parameters, one can return to the LMA method, taking into account all the necessary generalizations.
The use of the LAA method also makes it possible to generalize the technique [9] to calculating the characteristics of the joint MLEs of one discontinuous and several regular parameters for the case of a joint estimation of an arbitrary number of discontinuous and regular signal parameters. For this, it is necessary that the moments of the logarithm of FLR, as a function of discontinuous signal parameters, allow for an additive-multiplicative representation. It is noticeable that an additive-multiplicative representation of the moments of the logarithm of FLR is possible for a wide class of parameters of various physical signals. For example, when estimating the parameters of the stochastic Gaussian pulse [13] occurring in radio and hydrolocation, communications, radio astronomy, etc., such representation of the moments of the logarithm of FLR is carried out by time and frequency parameters of the pulse. These parameters include time of arrival, duration, moments of appearance and disappearance of the pulse as well as band center and bandwidth of the spectral density of its random substructure. In Sect. 2 of the present study, one can find the asymptotically exact (with an increase in SNR) expressions for the characteristics of the joint MLEs of the regular signal parameters [3, 5]. In Sect. 3, the results of an application of the LAA and LMA methods are presented: the asymptotic characteristics of the joint MLEs of the discontinuous signal parameters in the presence of the additive-multiplicative representation of the moments of the logarithm of FLR [5, 7]. In Sect. 4, the generalization of the obtained results is performed for the case of the joint MLEs of the discontinuous and regular signal parameters in the presence of the additive-multiplicative representation of the moments of the logarithm of FLR for the discontinuous parameters.
2 The Characteristics of the Joint Estimates of the Regular Signal Parameters

2.1 The Maximum Likelihood Estimates of the Signal Parameters and the Local Representations of the Moments of Decision Determining Statistics

It is presupposed that during the time interval t ∈ [0, T] the signal s(t, θ0), which is characterized by the information parameter vector θ0 = (θ01, θ02, …, θ0r), arrives at the receiver input. The signal parameters θ01, θ02, …, θ0r are unknown and take the values from the a priori r-dimensional domain Θ. The signal s(t, θ0) is observed against Gaussian noise (random interferences) n(t), so that only a random mix x(t) = s(t, θ0) ⊗ n(t) of signal and noise can be processed. Based on the realization of the observed data x(t) and a priori information, one should measure (estimate) the parameters θ0i, i = 1, r of the received signal s(t, θ0). According to the maximum likelihood method [1–5], to obtain the joint estimates (measurements) of the r unknown parameters θ01, θ02, …, θ0r, one should build the FLR M(θ) or the logarithm of FLR L(θ) = ln M(θ) as a function of the current values θ = (θ1, θ2, …, θr) of the estimated parameters θ0. Then, as it is demonstrated in [1–5, 10], the joint MLEs θ1m, θ2m, …, θrm of the parameters θ01, θ02, …, θ0r are calculated as the coordinates of the position of the absolute (greatest) maximum of the
FLR M(θ) = M(θ1, θ2, …, θr) or the logarithm of FLR L(θ) = L(θ1, θ2, …, θr) within the a priori range of values θ ∈ Θ:

$$(\theta_{1m}, \theta_{2m}, \ldots, \theta_{rm}) = \arg\sup_{\theta \in \Theta} M(\theta_1, \theta_2, \ldots, \theta_r) = \arg\sup_{\theta \in \Theta} L(\theta_1, \theta_2, \ldots, \theta_r). \qquad (1a)$$

The vector of the joint MLEs θm = (θ1m, θ2m, …, θrm) of the signal parameters can be written compactly as in [1–5]:

$$\theta_m = \arg\sup_{\theta \in \Theta} M(\theta) = \arg\sup_{\theta \in \Theta} L(\theta). \qquad (1b)$$
One can find the expression for the FLR (or for the logarithm of FLR) based on the specified probabilistic (statistical) description of the signal and noise. One starts with the sampling X = (x1, x2, …, xN) made of the observed data x(t) at the time moments ti = iΔ, where Δ = T/N is the discretization step. In this case, W(X|θ) is the conditional density of the sampling X if the vector of the unknown parameters of the received signal is equal to θ, while W0(X) is the conditional density of the sampling when there is no signal in the observed data. The conditional density W(X|θ), considered as a function of the current values of the parameter vector θ, is termed the likelihood function, and the ratio Λ(θ) = W(X|θ)/W0(X) is termed the likelihood ratio [1–5]. Now it is time to turn to the analysis of the signals in the continuous time, when a continuous realization x(t) of the observed data is being processed. Then, instead of the likelihood ratio, the limit is set:

$$M(\theta) = \lim_{\Delta\to 0,\; N\to\infty} \Lambda(\theta) = \lim_{\Delta\to 0,\; N\to\infty} \frac{W(X|\theta)}{W_0(X)}, \qquad (2)$$

which is calculated at the constant value NΔ = T. The functional M(θ) (2) is termed the FLR, and its logarithm L(θ) = ln M(θ) is the logarithm of FLR. According to the definition (2), the FLR M(θ) and the logarithm of FLR L(θ) characterize the probability density of the values of the received signal parameters at the specified realization of the observed data x(t). The efficiency of MLEs is determined by the statistical features of the logarithm of FLR. While analyzing the MLE characteristics, one can present the logarithm of FLR as the sum L(θ) = S(θ) + N(θ), where S(θ) = ⟨L(θ)⟩ is the signal function (a deterministic component) and N(θ) = L(θ) − ⟨L(θ)⟩ is the noise function (a fluctuating component) of the logarithm of FLR. Here ⟨·⟩ means averaging over all the possible realizations of the observed data (over the realizations of the logarithm of FLR) at the fixed real values θ0 = (θ01, θ02, …, θ0r) of the estimated signal parameters [3, 5, 7]. The notations are: AS = max S(θ) is the absolute maximum of the signal function within the domain θ ∈ Θ, and σN² = ⟨N²(θ0)⟩ is the noise function dispersion when θ = θ0. Without reducing the generality of the analysis results, it is assumed that AS > 0. Due to the fact that MLEs are invariant to the normalization of the logarithm of FLR by a constant value, the normalized logarithm of FLR is introduced:

$$L_n(\theta) = L(\theta)/A_S. \qquad (3)$$
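A minimal numeric illustration of the definitions (1)-(3): the sketch below computes a discretized logarithm of FLR for a deterministic signal with one unknown (regular) arrival-time parameter observed in white Gaussian noise, and finds the MLE as the position of the absolute maximum over a parameter grid. The Gaussian-pulse signal model and all numeric values are illustrative assumptions, not taken from the paper.

```python
import numpy as np

# Sketch of the MLE (1) for a single regular parameter: the log-FLR of a
# deterministic signal s(t, theta) in white Gaussian noise is computed on a
# discretized grid and maximized. The signal model is an illustrative choice.
rng = np.random.default_rng(1)
T, n = 1.0, 2000
dt = T / n
t = np.arange(n) * dt
N0 = 0.05                                  # one-sided noise spectral density
theta0 = 0.4                               # true arrival time

def s(t, theta):
    return np.exp(-((t - theta) / 0.03) ** 2)

x = s(t, theta0) + rng.normal(0, np.sqrt(N0 / (2 * dt)), n)

def log_flr(theta):
    # Discrete analogue of L(theta) = (2/N0) int x*s dt - (1/N0) int s^2 dt
    st = s(t, theta)
    return (2 / N0) * np.sum(x * st) * dt - (1 / N0) * np.sum(st**2) * dt

grid = np.linspace(0.0, 1.0, 1001)         # a priori domain of the parameter
L = np.array([log_flr(th) for th in grid])
theta_m = grid[np.argmax(L)]               # MLE as the argmax position, cf. (1)
print(f"true theta0 = {theta0}, MLE = {theta_m:.4f}")
```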
The functional (3) can be represented as the sum

$$L_n(\theta) = S_n(\theta) + \varepsilon N_n(\theta), \qquad (4)$$
where ε = σN/AS, while Sn(θ) = S(θ)/AS and Nn(θ) = N(θ)/σN are the signal and noise functions, normalized so that max Sn(θ) = 1 and ⟨Nn²(θ0)⟩ = 1. Following [1–11], the logarithm of FLR L(θ) is considered to be a Gaussian random field [2, 14]. Then, to calculate the characteristics of the joint MLEs (1), and accounting for the Gaussian nature of the logarithm of FLR, the task can be limited to the study of the first two moments of the logarithm of FLR, that is, the signal function S(θ) and the correlation function K(θ1, θ2) = ⟨N(θ1)N(θ2)⟩ of the noise function N(θ). Here θj = (θj1, θj2, …, θjr), j = 1, 2. Instead of the functions S(θ) and K(θ1, θ2), one can deal with the normalized signal function Sn(θ) (4) and the correlation function Kn(θ1, θ2) = ⟨Nn(θ1)Nn(θ2)⟩ = K(θ1, θ2)/σN² of the normalized noise function Nn(θ). Within the a priori domain θ ∈ Θ, the signal function S(θ) (and also Sn(θ), consequently) is presupposed to have the only maximum at the point θ = θ0 of the real values of the estimated parameters of the signal, which is an inner point of the definitional domain Θ; thus, S(θ0) = AS > 0. The realizations of the noise function N(θ) (and also Nn(θ), consequently) are then continuous with probability 1. In practice, as a rule, these conditions hold [3–5]. The output voltage SNR z for the estimation algorithm (1) is determined as [3, 5, 7, 12]:

$$z = S(\theta_0)\Big/\sqrt{\langle N^2(\theta_0)\rangle} = A_S/\sigma_N = 1/\varepsilon. \qquad (5)$$

In practice, any measurer is feasible to use only when the output SNR is high, as it provides a high estimate efficiency, although at the device input the SNR may be low. Further, it is presupposed that the SNR (5) is high enough, so that there are no anomalous errors [3, 5] and a high a posteriori estimate accuracy is reached [3, 5, 7]. At the same time, the MLEs θm (1) are located in a small neighborhood of the point θ = θ0 of the signal function maximum, while the estimates θm converge to θ0 in mean square [3–5] as z goes to infinity. Then, to determine the MLE characteristics, it suffices to study the behavior of the signal function Sn(θ) and the correlation function Kn(θ1, θ2) of the noise function Nn(θ) in a small neighborhood of the point θ = θ0; an increase in SNR z results in the diminishing of this neighborhood. In a small neighborhood of the point θ = θ0, the local representations of the first two moments of the logarithm of FLR must be specified when determining the signal parameters. It is presupposed that the parameters θi, i = 1, r are regular, i.e. the moments of the logarithm of FLR are continuously differentiable with respect to these parameters [3–5, 9]. Consequently, at δθ = max_{i=1,…,r} |θi − θ0i| → 0, the normalized signal and
noise functions can be represented as the expansions in the r-dimensional Taylor series:

$$S_n(\theta) = S_0 + \frac{1}{2}\sum_{i=1}^{r}\sum_{j=1}^{r} S''_{ij}\,(\theta_i - \theta_{0i})(\theta_j - \theta_{0j}) + o(\delta\theta^2),$$

$$N_n(\theta) = N_0 + \sum_{i=1}^{r} N'_i\,(\theta_i - \theta_{0i}) + \frac{1}{2}\sum_{i=1}^{r}\sum_{j=1}^{r} N''_{ij}\,(\theta_i - \theta_{0i})(\theta_j - \theta_{0j}) + o(\delta\theta^2), \qquad (6)$$

where o(δ) denotes the higher-order infinitesimal terms compared with δ, and

$$S_0 = S_n(\theta_0), \quad S''_{ij} = \left.\frac{\partial^2 S_n(\theta)}{\partial\theta_i\,\partial\theta_j}\right|_{\theta=\theta_0}, \quad N_0 = N_n(\theta_0), \quad N'_i = \left.\frac{\partial N_n(\theta)}{\partial\theta_i}\right|_{\theta=\theta_0}, \quad N''_{ij} = \left.\frac{\partial^2 N_n(\theta)}{\partial\theta_i\,\partial\theta_j}\right|_{\theta=\theta_0}. \qquad (7)$$

In (6), it is taken into account that the signal function Sn(θ) is continuously differentiable and reaches its maximum at the point θ = θ0, which results in S'_i = ∂Sn(θ)/∂θi |_{θ=θ0} = 0. By using (6), one can also write the expansion of the correlation function Kn(θ1, θ2) at δθ → 0.

2.2 The Application of the Small Parameter Method for the Calculation of the Characteristics of the Regular Parameter Joint Estimates

The task now is to study the asymptotically exact (with an increase in SNR (5)) expressions for the characteristics of the joint MLEs of the regular parameters θ0i, i = 1, r. As the realizations of the normalized logarithm of FLR Ln(θ) are continuously differentiable with respect to the variables θi, i = 1, r, the MLEs θim, i = 1, r of the regular signal parameters can be found from the solution of the combined likelihood equations [3–5]:

$$\left.\frac{\partial L_n(\theta)}{\partial\theta_i}\right|_{\theta=\theta_m} = 0, \quad i = 1, 2, \ldots, r, \qquad (8)$$

if the conditions ∂²Ln(θ)/∂θi² |_{θ=θm} < 0, i = 1, r hold. Accounting for (4), the combined Eqs. (8) can be rewritten as

$$\left[\frac{\partial S_n(\theta)}{\partial\theta_i} + \varepsilon\,\frac{\partial N_n(\theta)}{\partial\theta_i}\right]_{\theta=\theta_m} = 0, \quad i = 1, 2, \ldots, r. \qquad (9)$$

An approximate solution of the combined Eqs. (9) at the high SNR z (5) can be found using the small parameter method [3, 5]. As the signal function Sn(θ) reaches its maximum at the point θ = θ0, the solution of the combined Eqs. (9) coincides with the real values of the estimated parameters, i.e. θim = θ0i, i = 1, r, if ε = 0 (a zero approximation). When the a posteriori estimate accuracy is high, the estimates θim, i = 1, r (1) are located in a small neighborhood of the real values θ0i of the estimated parameters and, with an increase in SNR z, the value δθ of this neighborhood diminishes.
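In a minimal sketch form, the zero and first approximations just described amount to linearizing (9) with the help of (6), which gives θm ≈ θ0 − ε (S'')⁻¹ N'. In the code below, the matrix S'' and the realization of the vector N' are illustrative placeholders rather than moments computed from a specific log-FLR.

```python
import numpy as np

# Sketch of the first-order small parameter solution of the likelihood
# equations (9): expanding (9) with (6) around theta_0 gives the linear system
# S'' @ (theta_m - theta_0) + eps * N' ~ 0, hence
# theta_m ~ theta_0 - eps * inv(S'') @ N'.
eps = 0.05                                   # eps = 1/z, small at high SNR
theta0 = np.array([1.0, 2.0])                # true parameter values
S2 = np.array([[-4.0, -0.5],                 # placeholder Hessian S''_ij of the
               [-0.5, -2.0]])                # signal function (negative definite)
N1 = np.array([0.8, -1.3])                   # a realization of the gradient N'_i

theta_m = theta0 - eps * np.linalg.solve(S2, N1)   # first-order MLE, cf. (9)
print("first-order estimates:", theta_m)
```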
Therefore, with a higher SNR z, the solution for the set of Eqs. (9) can be found as the power series in the small parameter ε = 1/z. […] S(l0) = AS > 0, and the point l = l0 is an internal point of the a priori domain of the parameter l, while the realizations of the noise function N(l) are continuous with probability 1. In practice, these conditions are usually satisfied [3–5, 11]. The output voltage SNR for the estimation algorithm (16a) can then be written in the following way:

$$z = S(l_0)\Big/\sqrt{\langle N^2(l_0)\rangle} = A_S/\sigma_N, \qquad (17)$$

where σN² = ⟨N²(l0)⟩ is the dispersion of the noise function when l = l0. Let the SNR (17) be so high that the anomalous errors [3, 5] are absent and a high a posteriori estimate accuracy occurs [3, 5, 7]. In this case, the MLEs lm (16a) are located in a small neighborhood of the point l = l0 of the signal function maximum, while the estimates lm converge to l0 in mean square [3–5] as z goes to infinity. Then, to determine the MLE characteristics, it suffices to study the behavior of the signal function S(l) and the correlation function K(l1, l2) of the increments of the logarithm of FLR L(l) in a small neighborhood of the point l = l0; an increase in SNR z results in the diminishing of this neighborhood. Thus, in a small neighborhood of the point l = l0, the local representations of the first two moments of the logarithm of FLR must be specified when estimating the discontinuous signal parameters. The focus now is on the class of the discontinuous parameters for which the sections Si(li) = S(l)|_{lk = l0k, k = 1, …, p, k ≠ i} of the signal function by each of the parameters allow, in the neighborhood of the point l = l0 of the signal function maximum, for the asymptotic representations

$$S_i(l_i) = A_S\begin{cases} 1 - d_{1i}\,|l_i - l_{0i}| + o(\delta_i), & l_i < l_{0i}, \\ 1 - d_{2i}\,|l_i - l_{0i}| + o(\delta_i), & l_i \ge l_{0i}. \end{cases} \qquad (18)$$

At the same time, the corresponding sections Ki(l1i, l2i) = K(l1, l2)|_{ljk = l0k, k = 1, …, p, k ≠ i, j = 1, 2} of the correlation function K(l1, l2) of the increments
of the logarithm of FLR are represented in the following way at δni = max(|l1i − l0i|, |l2i − l0i|, |li* − l0i|) → 0:

$$K_i(l_{1i}, l_{2i}) = \begin{cases} B_{1i}\min\!\big(|l_{1i} - l_i^*|,\; |l_{2i} - l_i^*|\big) + C_{1i} + o(\delta_{ni}), & l_{1i},\, l_{2i} < l_{0i}, \\ B_{2i}\min\!\big(|l_{1i} - l_i^*|,\; |l_{2i} - l_i^*|\big) + C_{2i} + o(\delta_{ni}), & l_{1i},\, l_{2i} \ge l_{0i}, \end{cases} \qquad (19)$$
∗
ak u
t(j+1)k
k=1 j=1 i=tjk +1
Vki (li ), K(l1 , l2 ) =
αk v
θ(j+1)k
k=1 j=1 i=θjk +1
Uki (l1i , l2i ),
(20)
where 0 = t1k < t2k < . . . < t(ak +1)k = p, 0 = θ1k < θ2k < . . . < θ(αk +1)k = p. Here the derivatives of the functions Vki (li ), Uki (l1i , l2i ), i = 1, p are discontinuous to the right and to the left from the point l0i , but they may have the discontinuities of the first kind at this point. In the particular case when ak = αk = 1 one gets from (20) S(l) =
p u
p v
Vki (li ), K(l1 , l2 ) =
k=1 i=1
Uki (l1i , l2i ).
(21)
k=1 i=1
If ak = αk = 1 and u = v = 1, then the moments (20) factorize by the current values of the estimated signal parameters and the multiplicative representation becomes possible S(l) =
p
V1i (li ) =
i=1
p
Si (li ), K(l1 , l2 ) =
i=1
p i=1
U1i (l1i , l2i ) =
p
Ki (l1i , l2i ),
i=1
where Si(li) = V1i(li) and Ki(l1i, l2i) = U1i(l1i, l2i) are the sections of the signal function and of the correlation function of the noise function of the logarithm of FLR, respectively. Finally, when ak = αk = p, the additive-multiplicative representation (20) of the moments of the logarithm of FLR transforms into the additive representation
$$S(l) = \sum_{i=1}^{p}\sum_{k=1}^{u} V_{ki}(l_i) = \sum_{i=1}^{p} S_i(l_i) - (p-1)A_S,$$

$$K(l_1, l_2) = \sum_{i=1}^{p}\sum_{k=1}^{v} U_{ki}(l_{1i}, l_{2i}) = \sum_{i=1}^{p} K_i(l_{1i}, l_{2i}) - (p-1)\,\sigma_N^2.$$
3.2 The Application of the Local Additive Approximation Method for Calculating the Characteristics of the Joint Estimates of the Discontinuous Parameters

Firstly, the asymptotically exact (with an increase in SNR (17)) expressions should be found for the characteristics of the joint MLEs (16a) of the discontinuous parameters l0i, i = 1, p in the presence of the additive-multiplicative representation (20) of the moments of the logarithm of FLR. It is presupposed that the SNR (17) is so high that a high a posteriori estimate accuracy is reached [3, 5, 7], and to calculate the characteristics of the MLEs one needs to consider only the local behavior of the moments of the logarithm of FLR L(l) in a small neighborhood of the point l0. The functions Vki(li) and Uki(l1i, l2i) are now expanded in the Taylor series to the left and to the right of the points l0i. By substituting these expansions into (20) and taking into account only the summands of the first order of smallness in δi (δni), one gets
$$S(l) = \sum_{i=1}^{p} S_i(l_i) - (p-1)A_S + o(\delta), \qquad K(l_1, l_2) = \sum_{i=1}^{p} K_i(l_{1i}, l_{2i}) - (p-1)\,\sigma_N^2 + o(\delta_n), \qquad (22)$$

when δ = max_{i=1,…,p} δi → 0 and δn = max_{i=1,…,p} δni → 0, where δni = max(|l1i − l0i|, |l2i − l0i|).
As a result, the moments of the logarithm of FLR L(l) allow for a local additive representation (22) in the neighborhood of the point l0. Now one turns to Mi(li), i = 1, p, which are the statistically independent joint Gaussian random processes whose mathematical expectations SMi(li) and correlation functions KMi(l1i, l2i), in the neighborhoods of the points li = l0i, can be represented as

$$S_{Mi}(l_i) = S_i(l_i) - (p-1)A_S/p, \qquad K_{Mi}(l_{1i}, l_{2i}) = K_i(l_{1i}, l_{2i}) - (p-1)\,\sigma_N^2/p. \qquad (23)$$

Here the functions Si(li) and Ki(l1i, l2i) satisfy the conditions (18) and (19). As a result, according to (22), the random field L(l) converges in distribution to the sum $M(l) = \sum_{i=1}^{p} M_i(l_i)$ of the statistically independent random processes Mi(li) at δ → 0. As it has
been mentioned above, the characteristics of the MLEs (16a) at the high SNR z are determined by the behavior of the logarithm of FLR L(l) in the neighborhood of the point l = l0, and at z → ∞ the value of this neighborhood approaches zero. In this case, one assumes that the SNR z is so high and the values δi of the specified neighborhoods of the points li = l0i are so low that the representations (23) of the moments of the random processes Mi(li) are true within the intervals li ∈ [l0i − δi, l0i + δi]. Then the joint probability density W(l1m, l2m, …, lpm) of the estimates l1m, l2m, …, lpm (16a) can be approximated by the product
$$W(l_1, l_2, \ldots, l_p) = \prod_{i=1}^{p} W_i(l_i) \qquad (24)$$
of the probability densities Wi(li) of the separate estimates

$$l_{ig} = \arg\sup_{l_i \in [\,l_{0i}-\delta_i,\; l_{0i}+\delta_i\,]} M_i(l_i), \quad i = 1, 2, \ldots, p. \qquad (25)$$
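The factorization (24), (25) lends itself to direct simulation. The sketch below models each Mi(li) as a triangular mean of the type (18) plus Wiener-type fluctuations consistent with (19), and computes the p separate estimates (25) by grid maximization; all numeric values are illustrative assumptions, not taken from the paper.

```python
import numpy as np

# Monte Carlo sketch of the LAA factorization (24)-(25): the joint MLE of p
# discontinuous parameters is approximated by p separate maximizations of
# independent processes M_i(l_i).
rng = np.random.default_rng(2)
p, grid_n = 3, 801
delta = 0.5                                  # neighborhood half-width
grid = np.linspace(-delta, delta, grid_n)    # l_i - l_0i around the true value
d, B = 5.0, 0.2                              # mean slope and noise scale

def separate_estimate():
    """One realization of the p separate estimates l_ig (25)."""
    est = np.empty(p)
    dg = grid[1] - grid[0]
    for i in range(p):
        mean = -d * np.abs(grid)             # triangular signal component, cf. (18)
        # two-sided Wiener-type process pinned at the center of the interval
        steps = rng.normal(0, np.sqrt(B * dg), grid_n)
        right = np.cumsum(np.where(grid >= 0, steps, 0))
        left = np.cumsum(np.where(grid < 0, steps, 0)[::-1])[::-1]
        est[i] = grid[np.argmax(mean + right + left)]
    return est

trials = 2000
errs = np.array([separate_estimate() for _ in range(trials)])
print("per-parameter estimate variances:", errs.var(axis=0))
```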
Here δi is the value of the neighborhood of the point li = l0i, with δi → 0 at z → ∞. The accuracy of the representation (24), (25) increases with the SNR z. As a result, the characteristics of the joint MLEs lim (16a) of the discontinuous parameters l0i asymptotically (with an increase in SNR) coincide with the corresponding characteristics of the separate estimates lig (25) of the same parameters. To find the characteristics of the separate estimates lig, i = 1, p of the discontinuous signal parameters, the LMA method is applied [5, 12]. For this purpose, the Gaussian random processes ηi(li) = Mi(li) − Mi(li*), li* ∈ [l0i − δi, l0i + δi] are introduced, with the mathematical expectations Si(li) − Si(li*) and the correlation functions Ki(l1i, l2i). From (19) it follows that the segments of the realizations of the random processes ηi(li) at the intervals [l0i − δi, li*) and [li*, l0i + δi] are not correlated and, therefore, are statistically independent due to the Gaussian nature of the processes ηi(li). Then the distribution function of the estimate (25) can be represented as in [5, 12]:
$$F_i(l_i) = P(l_{ig} < l_i) = \int_0^{\infty} P_{2i}(u)\, dP_{1i}(u), \qquad (26)$$
where

$$P_{1i}(u) = P\Big\{\sup_{l_i \in [\,l_{0i}-\delta_i,\; l_i^*)} \eta_i(l_i) < u\Big\} \quad \text{and} \quad P_{2i}(u) = P\Big\{\sup_{l_i \in [\,l_i^*,\; l_{0i}+\delta_i\,]} \eta_i(l_i) < u\Big\}$$
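The one-sided supremum probabilities P1i(u), P2i(u) entering (26) can be estimated by direct simulation of a drifting Wiener-type process on each side of li*, as in the following sketch; the drift, diffusion and interval values are illustrative assumptions rather than quantities derived in the paper.

```python
import numpy as np

# Monte Carlo sketch of the probabilities P_1i(u), P_2i(u) above: the process
# eta_i is modeled as a Wiener process with a negative drift on each side of
# l_i*, consistent with the local representations (18), (19).
rng = np.random.default_rng(3)
S = 0.5                    # interval length to one side of l_i*
n = 500                    # discretization steps
d, B = 5.0, 0.2            # drift (mean slope) and diffusion coefficients

def sup_one_side(trials=20000):
    """Samples of sup of a drifting Wiener process on one side of l_i*."""
    ds = S / n
    incr = rng.normal(-d * ds, np.sqrt(B * ds), size=(trials, n))
    paths = np.cumsum(incr, axis=1)
    return np.maximum(paths.max(axis=1), 0.0)   # sup includes the start point 0

sups = sup_one_side()
for u in (0.1, 0.3, 0.6):
    print(f"P(sup eta < {u}) ~ {(sups < u).mean():.3f}")
```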